It works well enough; users can fetch data and make updates against the system. However, we're finding that, especially with ChatGPT, this is leading to a few problems:
1. Various actions the user wants to take involve hitting multiple endpoints in sequence. The LLM does not always follow directions to hit these endpoints correctly. As a result, a single tool needs to be responsible for some orchestration if it is to be reliable.
2. Exposing UUIDs of resources to the user is confusing, especially when ChatGPT asks whether it is okay to send that data in a tool call. We're thinking it would be better to identify a resource by a combination of fields that produce a unique, human-meaningful composite key (for example, first/last name versus an employee UUID).
3. OpenAI's lack of vertical UI support means widgets have to support multiple states and handle multiple tool calls. It is not currently possible on their app platform to instruct the LLM to produce a new widget from an existing one; you can only encourage it to make the correct calls in sequence.
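Points 1 and 2 above can be sketched together: a single tool that does the lookup-then-update orchestration itself, keyed on a human-meaningful composite key rather than a UUID. Everything here is hypothetical for illustration; the in-memory dict stands in for what would really be `GET /employees` and `PATCH /employees/{id}` calls, and the function and field names are invented.

```python
import uuid

# In-memory stand-in for the REST backend (hypothetical schema).
EMPLOYEES = {
    str(uuid.uuid4()): {"first": "Ada", "last": "Lovelace", "title": "Engineer"},
    str(uuid.uuid4()): {"first": "Alan", "last": "Turing", "title": "Analyst"},
}

def resolve_employee(first: str, last: str) -> str:
    """Map the composite key (first, last) to the internal UUID.

    The UUID never reaches the LLM or the user; ambiguity is surfaced
    as an error the model can relay instead of guessing an identifier.
    """
    matches = [eid for eid, e in EMPLOYEES.items()
               if e["first"] == first and e["last"] == last]
    if len(matches) != 1:
        raise ValueError(
            f"{len(matches)} employees match {first} {last}; "
            "ask the user for a disambiguating field"
        )
    return matches[0]

def update_title(first: str, last: str, new_title: str) -> dict:
    """One tool that orchestrates lookup + update in a single call,
    so the LLM cannot skip or misorder the intermediate steps."""
    eid = resolve_employee(first, last)      # step 1: lookup (GET-equivalent)
    EMPLOYEES[eid]["title"] = new_title      # step 2: update (PATCH-equivalent)
    return {"employee": f"{first} {last}", "title": new_title}
```

The tool schema exposed to the model would then accept only `first`, `last`, and `new_title`, keeping both the UUID and the two-step sequence entirely server-side.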
The more I develop in this area, the more I think MCP tools should diverge from simply wrapping REST endpoints and instead implement more user-friendly and LLM-friendly tools with their own custom logic. Does this align with what others are finding, or is it better to keep the tools as thin wrappers around existing REST endpoints? Where do other engineers see this going?