We are already off to a wrong start: "context" has a meaning specific to LLMs, and everyone who works with LLMs knows what it means. The context is the text that is fed as input at runtime to the LLM, including the current message (the user prompt) as well as the previous messages and the LLM's responses.
So we don't need to read any further, and we can ignore this article, and MCP by extension. YAGNI
As you yourself say, the context is the text that is fed as input at runtime to an LLM. This text could just always come from the user as a prompt, but that's a pretty lousy interface to try to cram in everything that you might want the model to know about, and it puts the onus entirely on the user to figure out what might be relevant context. The premise of the Model Context Protocol (MCP) is overall sound: how do we give the "Model" access to load arbitrary details into "Context" from many different sources?
This is a real problem worth solving and it has everything to do with the technical meaning of the word "context" in this context. I'm not sure why you dismiss it so abruptly.
Agent LLMs are able to retrieve additional context and MCP servers give them specific, targeted tools to do so.
Yeah, MCP is the worst-documented technology I have ever encountered. I understand APIs for calling LLMs, and I understand tool-calling APIs. Yet I have read so much about MCP and have zero fucking clue what it is, beyond vague marketing speak. Or code with zero explanation. What an amateur effort.
I've given up, I don't care about MCP. I'll use tool calling APIs as I currently do.
TZubiri•8h ago
Most of the material on MCP is either too specific or too in depth.
WTF is it?! (Other than a dependency by Anthropic)
mdaniel•8h ago
But to save you the click & read: it's OpenAPI for LLMs
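To make the "OpenAPI for LLMs" analogy concrete, here is a sketch of what an MCP server's tool listing looks like. The `get_weather` tool and its fields are made up for illustration; the general shape (name, description, and a JSON Schema for arguments, advertised so a client can forward them to the model) is the part the analogy rests on.

```python
import json

# Hypothetical response to an MCP "tools/list" request: like an OpenAPI
# spec, each tool advertises a name, a description, and a JSON Schema
# describing its arguments.
tools_list_response = json.loads("""
{
  "tools": [
    {
      "name": "get_weather",
      "description": "Look up current weather for a city",
      "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"]
      }
    }
  ]
}
""")

for tool in tools_list_response["tools"]:
    print(tool["name"], "-", tool["description"])
```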
jredwards•5h ago
I like this succinct explanation.
troupo•4h ago
You write a wrapper ("MCP server") over your docs/apis/databases/sites/scripts that exposes certain commands ("tools"), and you can instruct models to query your wrapper with these commands ("calling/invoking tools") and expect responses in a certain format that they can then use.
That is it.
Why vibe-coded? Because instead of bidirectional WebSockets, the protocol uses unidirectional server-sent events (SSE), so you have to send requests to a separate endpoint and then listen on the SSE stream, hoping for an answer. Authentication is also essentially non-existent.
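A minimal sketch of the two-channel dance described above, assuming the standard JSON-RPC 2.0 message shape; the tool name, arguments, and endpoint mentioned in comments are illustrative, not from any real server:

```python
import json

# The client POSTs a JSON-RPC request to one endpoint (e.g. a
# hypothetical /messages)...
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "query_db", "arguments": {"sql": "SELECT 1"}},
}
post_body = json.dumps(request)

# ...then waits on a separate server-sent-events stream, matching
# responses back to requests by JSON-RPC id.
sse_frame = 'data: {"jsonrpc": "2.0", "id": 1, "result": {"content": [{"type": "text", "text": "1"}]}}\n\n'
payload = json.loads(sse_frame.split("data: ", 1)[1])
assert payload["id"] == request["id"]
print(payload["result"]["content"][0]["text"])
```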
nylonstrung•1h ago
Awful case of "not invented here" syndrome.
I'm personally interested in whether WebTransport could be the basis for something better.
aryehof•3h ago
Conversely, it allows many different LLMs to get context via many different applications using a standard protocol.
It addresses an m*n problem.
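A back-of-envelope illustration of the m*n point, with made-up counts: with m clients and n tool providers, bespoke integrations scale multiplicatively, while a shared protocol only requires each side to implement it once.

```python
# m LLM clients, n tool providers (numbers are arbitrary).
m, n = 5, 8
bespoke_integrations = m * n   # every client wires up every tool itself
with_shared_protocol = m + n   # each side implements the protocol once
print(bespoke_integrations, with_shared_protocol)  # 40 13
```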
kristopolous•2h ago
That's the missing piece in most of these descriptions.
You send off a description of the tools, the model decides whether it wants to use one, then you run it with the args, send the result back into the context, and loop.
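That loop can be sketched in a few lines. The "model" here is a stub standing in for a real LLM API call, and the message format is simplified for illustration, not any particular vendor's schema:

```python
# Stub model: asks for a tool call on the first turn, then answers
# once a tool result is present in the conversation.
def fake_model(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "add", "args": {"a": 2, "b": 3}}}
    return {"content": "The answer is " + messages[-1]["content"]}

tools = {"add": lambda a, b: str(a + b)}

messages = [{"role": "user", "content": "What is 2 + 3?"}]
while True:
    reply = fake_model(messages)
    call = reply.get("tool_call")
    if call is None:
        break  # model produced a final answer, stop looping
    # Run the requested tool and feed the result back into context.
    result = tools[call["name"]](**call["args"])
    messages.append({"role": "tool", "content": result})

print(reply["content"])  # The answer is 5
```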
fhd2•1h ago
Unless I'm missing something major, it's just marginally more convenient than hooking up tool calls yourself for, say, an OpenAPI spec. The power is probably in the hype around it more than in its technical merits.