And with the cynicism out of the way, what an insightful and refreshing article!
Web services started as the same open utopia. Once everyone was in, they jacked up the prices so high that it killed the initial consumer apps (e.g. Google Maps and Reddit).
Nobody is giving access to their walled garden for the good of open-anything. It's what the VCs and stockholders demand and they're the ones putting up the cash to keep this AI hype funded in spite of it running at a loss.
Given they haven't put security into MCP yet, I guess they'll need to do that first before they move on to reinventing API keys so they can charge for access and hail that as the next reason the line will go up.
OTOH, that only affects those services; it won't stop people from leveraging MCP as, say, a generic local plugin model for non-AI apps.
I'm considering migrating, but time is limited and I'd love to avoid a dead-end if I can :p
Like this one, you can bet every blogspam article is also trying to push something...
> The Part Where I Tell You I'm Building Something
MCP has very little utility outside of LLMs. The article begins by saying "but what if we remove the AI", then goes back on that thesis by saying "but if there was an AI between these things, then it becomes a universal plugin system". Which is true, but it's missing the critical bit: the AI itself is the universal plugin system, not MCP. And, beyond that, it's not even the AI: it's natural language. Language is the universal plugin system.
It's not unbelievable that there exists an alternate reality where the Anthropic researchers that invented MCP instead simply leveraged a slightly extended form of OpenAPI specs. The only functional difference is that MCP was a stdin/stdout format first, and added HTTPS later, but sister problems in this domain like LSP just skipped stdin/stdout and went straight to locally-hosted HTTPS anyway. What matters isn't MCP, OpenAPI, or anything like that; what matters is the LLM itself, its tool calling capability, and the tool calling harness. The catalogue of available tools can be in any format; and, truly, the LLM does not care what format it's in.
1. No one asserted that MCP's key innovation was its format. In fact, what I strongly, almost explicitly implied with my previous comment is that MCP has made no key innovations beyond its marketing system (which does matter and has forced a lot of companies who would never have operated in this space to make an MCP system, and that is good. I like MCP.).
2. MCP is fundamentally bidirectional, not unidirectional. It's a schematized request-response protocol that defines a ton of the same primitives that OpenAPI defines, like available RPC methods, authorization, request parameters, and response schemas [1]. Of course, OpenAPI goes beyond that into HTTP-specific characteristics like status codes, and MCP goes beyond that into LLM-specific characteristics like a prompt catalogue.
3. I'm not aware of any problem domain outside of LLMs that is adopting MCP. In fact, its adoption even within the world of LLMs has been lackluster at best [2].
[1] https://modelcontextprotocol.io/docs/learn/architecture#data...
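To make the "same primitives as OpenAPI" point concrete, here's a rough sketch of the request/response shapes involved. This is illustrative only, not a real client or server: the tool name `get_issue` and its schema are made up for the example, though the `tools/list` / `tools/call` method names and JSON-RPC framing match how MCP messages are structured.

```python
import json

def make_request(req_id, method, params):
    """Build a JSON-RPC 2.0 request, the envelope MCP messages use."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# A client asking a server which tools it offers, then calling one.
list_req = make_request(1, "tools/list", {})
call_req = make_request(2, "tools/call", {
    "name": "get_issue",                    # hypothetical tool name
    "arguments": {"issue_id": "PROJ-123"},  # validated against the tool's schema
})

# A hypothetical server reply: a tool catalogue entry looks a lot like
# an OpenAPI operation (name, description, input schema).
list_resp = {
    "jsonrpc": "2.0", "id": 1,
    "result": {"tools": [{
        "name": "get_issue",
        "description": "Fetch a single issue by ID",
        "inputSchema": {"type": "object",
                        "properties": {"issue_id": {"type": "string"}}},
    }]},
}

print(json.dumps(call_req))
```

Squint and the `inputSchema` is just JSON Schema, same as an OpenAPI request body; the envelope is the only real difference.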
I've reread the article a couple times and I can't see where it says that you need to use an LLM to make it universal. I'm going to go so far as to say that it doesn't say that.
And why? Because it dumbs it down to the point that the LLM can understand it (often, in part, by removing security concerns). But these are LLMs! They're supposed to be smart! Isn't it a supposedly temporary failure that I can't just point the LLM at an OpenAPI spec (or use HATEOAS) and be good to go?
Will this be doable in the next few months with better models? If so, why bother with MCP? If this won't be doable in the next few months / years, then how smart do we really expect these LLMs to be?
The reason why MCP was invented is hard to reverse-engineer, but from what I've seen I suspect it's mostly: it was stdin/stdout first, and there isn't really a great standard like OpenAPI in the local-first stdin/stdout world. Interestingly, LSP (Microsoft) took the opposite approach; despite having far less reason to be hosted on the web, LSP is just localhost HTTP calls. MCP could have done that, but didn't, for no good reason I can see, which leads me to the second reason why I suspect it was invented: they didn't know the prior art, and thus couldn't learn from it. Most of these AI labs are not rigorous engineering shops; they're better characterized as a bunch of kids with infinite money trying to reignite the spirit of the early internet startup boom. Many believe that what they're building will replace everything, so why even try to build on what already exists?
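For what it's worth, the stdin/stdout transport isn't much machinery: one JSON-RPC message per line in, one per line out. A minimal sketch (illustrative only; a real server also handles errors, notifications, and the initialization handshake):

```python
import json
import sys

def serve(handle, stdin=sys.stdin, stdout=sys.stdout):
    """Toy stdio JSON-RPC loop: reads one JSON message per line,
    dispatches to handle(method, params), writes one reply per line."""
    for line in stdin:
        line = line.strip()
        if not line:
            continue
        msg = json.loads(line)
        result = handle(msg.get("method"), msg.get("params"))
        reply = {"jsonrpc": "2.0", "id": msg.get("id"), "result": result}
        stdout.write(json.dumps(reply) + "\n")
        stdout.flush()
```

Swap `sys.stdin`/`sys.stdout` for a socket and you've basically got the localhost variant, which is the point: the transport choice was never the interesting part.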
But why it was invented doesn't matter, because anyone can invent anything. The reason why MCP has gotten popular is because it's AI. That's it. It's a marketing and SEO system that enables companies who are otherwise struggling to find a way to prioritize AI use-cases for their apps to say "look, we're AI now". You don't get the same market impact if all you need is that boring old OpenAPI schema you already have.
And again: I like MCP. What I stated sounds pessimistic, but it's only intended to be realistic. There is almost zero technical reason for MCP to exist beyond that.
This is neither an accurate quote of the article, nor an accurate paraphrase of something the article says, nor an accurate description of something subtly implied by the article without directly being said.
The article, in fact, gives an example without AI in the middle.
As developers, we often want everything to be rich, verbose, and customizable — but the reality is that for most users (and now for AIs acting on their behalf), simplicity wins every time. It’s like designing a great UI: the fewer ways you can get lost, the more people (or models) can actually use it productively.
If MCP ends up nudging the ecosystem toward small, well-defined, composable capabilities, that’s a win far beyond just “AI integration.”
It fills a gap that exists in most service documentation: an easily discoverable page for developers (specifically, those who already know how to use their ecosystem of choice's HTTP APIs) that has a very short list of the service's most fundamental functionality with a simplified specification so they can go and play around with it.
It just begs for spam and fraud, with badly-behaving services advertising lowest-cost, highest-quality, totally amazing services. It feels like the web circa 1995… lots of implicit trust that isn’t sustainable.
Snark aside, autonomous agents were one of the more interesting justifications for web APIs back in the day. I don't think people necessarily envisioned LLMs, but some sort of agent you could send off to grab data or book flights or whatever. Specs like WSDL did what MCP does today, letting the autonomous system figure out how to use a service.
Oh well, schemas and the ability to do browser-native style transforms were lame! Let's transition to YAML so no one can say "no" to Norway.
The other issue is that you cannot think of MCP servers as universal pluggable systems that fit into every use-case with minimal wrapping. Real-world scenarios require pulling a lot of tricks. Caching can be done at a higher or lower level depending on the use-case. What information the MCP server communicates also differs by use-case: should we replace long IDs with shorter ones that are automatically translated back? Should we automatically tinyurl all the links to reduce hallucination? Which operations can be solved with pure algorithms, compressing 2-3 operations into one, because doing them with LLMs is not only error-prone but also suboptimal? (Imagine using an LLM to grep for strings in many files one by one via tool calls rather than just running grep - not the same.)
There are so many things to consider. MCP is a nice abstraction, but it is not a silver bullet.
Speaking from experience with actual customers and real use-cases.
Not only that, apparently we finally got Jini and Agent Tcl back!
https://www.usenix.org/conference/fourth-annual-usenix-tcltk...
https://www.eetimes.com/jini-basics-interrelating-with-java/
You asked for it.
My favorite example is the public Atlassian one — https://www.atlassian.com/blog/announcements/remote-mcp-serv...
Even with Claude or Gemini CLI (both with generous limits), I run out of context and resources fast.
With local LLMs via LM Studio? Forget it — almost any model will tap out before I can get even a simple question in.
That's a toolchain design approach that is independent of MCPs. A toolchain using MCP could do that and there would be no need for any special support in the protocol.
Many businesses are rushing to put out something that fits the MCP standard but not taking the time to produce something that lets an LLM achieve things with their tool.
Seems to help it during database coding. But this could also be done easily with a file.
I think they'll have a while where they can get away with this approach too. For a good while, most people will probably blame the AI or the model if it doesn't use Atlassian tools well. It'll probably be quite some time before people start to notice that Atlassian specifically doesn't work well, but almost all the other tools do.
(More technical users might notice sooner—obviously, given the context of this thread—but I mean enough of a broader user base noticing to have reputational impact)
Luckily there are other ways to get the data & do the interactions you need.
E.g. give the model your login token/cookies so it can fetch the websites and interact with them - or have it log in as you with Playwright MCP.
The "system integration" benefits only matter when you have an LLM in the loop making decisions about which tools to use.
It's done a good job for me with the tools I've written and connected to LM Studio. I suspect the issue with whether you get satisfactory results is less about MCP qua MCP and more about particular servers and tools.
Weirdly, I'm a little optimistic that it might work this time. AI is hot, which means that suddenly we don't care about IP anymore, and if AIs are the ones that are mostly using this protocol, providers will perhaps be in less of a rush to block everybody from doing cool things.
Web 2.0 failed because eventually people realized to make money they needed to serve ads, and to do that they needed to own the UI. Making it easy to exfiltrate data meant switching cost was low, too. Can’t have that when you’re trying to squeeze value out of users. Look at the evolution of the twitter API over the 2.0 era. That was entirely motivated by Twitter’s desperate need to make money through ads.
Only way we avoid that future is if we figure out new business models, but I doubt that will happen. Ads are too lucrative and people too resistant to pay for things.
You can’t guarantee they’ll be shown and interpreted correctly by the downstream LLM, you can’t guarantee attribution later when a user makes a purchase, you can’t collect (as much) data on users for targeting, etc.
The biggest ad networks today (Google, Meta) have strong first party data operations, strong first party attribution techniques, and strong targeting, either through intent (search) or profiles (meta).
MCP-originated ads really only threaten Google (via intent-based ads), and they’re quickly moving into owning the UX of LLMs, securing their place in this value chain.
Where the goal was to have a site's data machine-readable so that it could be mashed up into something new? Instead of making it easier to gather, the big sites locked down the bulk of their data, so it never gained widespread adoption.
Web 2.0 is what we mostly have now -- social, user-generated content and interaction.
* Disclaimer: domain not usable for any purposes except on computers where you have root to install their alternative resolver
that is certainly what I think of when web 3 is mentioned.
While it does fall short of the implementation of the Semantic Web, in a way it makes it a possibility to fulfill its intent without the full buy-in of site owners. There still has to be some buy-in or allowance, otherwise it will be locked down in the near future as some comments are expecting.
First of all, I think this kind of localhost/stdio MCP is kind of not 'The Way', except for playing around. I've been working on SSE/remote/cloud-based MCP.
Here's a fun example: https://x.com/firasd/status/1945233853414826060
("my landing page is in a text editor with iframe preview. I ask Claude to edit it—the doc gets transcluded into the AI chat, gets surgically edited. I reload. (All on mobile!)")
I'm working on a starter template, like a DevTools MCP, that people can deploy on Cloudflare and OAuth with their GitHub account, and it provides tools like url_fetch for people to use privately in their AI chats/tools. First I have to figure out the OAuth stuff myself, but I'll make it a public repo after that and post it on here.
PS. I also think tool use is really underrated and stuff like MCP unlocks a kind of agentic behavior 99% of AI users have never seen.
This is an inadvertently great and amusing analogy, because it shows how people can assume something about an API's capabilities without actually trying it.
A 12V car battery can't power a pizza oven. It can barely boil water in a special rice cooker - very, very slowly. And it risks depleting the battery so you can't start your car.
As with a new API, people get excited about what it can do, but the reality is very different.
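Rough back-of-the-envelope numbers for the claim above. All figures here are assumed typical values, not measurements: a ~2 kW electric pizza oven and a 12 V / 50 Ah starter battery.

```python
# Back-of-the-envelope check: can a car battery run a pizza oven?
oven_watts = 2000          # assumed small electric pizza oven
battery_volts = 12.0       # nominal car battery voltage
battery_amp_hours = 50.0   # common starter-battery capacity

current_amps = oven_watts / battery_volts          # ~167 A draw
runtime_hours = battery_amp_hours / current_amps   # ~0.3 h, ignoring losses

print(f"{current_amps:.0f} A draw, ~{runtime_hours * 60:.0f} min to flat")
```

~167 A is far beyond what battery cabling and connectors are rated for, and even in theory you'd flatten the battery in under twenty minutes.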
I have a portable fan with a power bank that charges over USB with a USB-micro plug. For some reason I can't fathom, it's an absolute power hog. I've killed my van's battery using it for just a few hours. (In theory it should be using at max 2.5W, but that's not the point).
Again, that shows how trying to extend an API's design can have unexpected side effects which tax the system in ways that are hard to predict.
MCP has a lot of excitement around it now, but it will inevitably reach its limits. The lesson is to test your system under load before assuming anything.
I built it and have added several modules, but I'm always looking for people to add more modules. See ./mcp_modules in the root folder.
Also doesn't really seem to relate to the OP besides the term 'MCP' in the title.
Unless I'm missing something, this has very little to nothing to do with MCP.