I feel like we've come full circle here.
The first protocols of the internet were very naive. Why'd you need to encrypt traffic? What do you mean exploit DNS, why would anyone do that?
Then people realised that the internet is a really, really wild place and that won't do.
I suddenly feel old, because this new AI tool era seems to have forgotten that lesson.
It feels like watching crypto any%-speedrun the lesson of why regulations and oversight might be a good idea in the first place (FTX and such).
I hope the next generation of AI tech/protocols is more robust; trust alone just doesn't cut it, or we'll see plenty of fingers getting burnt at the stove.
Also, the security tooling is still nascent.
I guess the bit I'm most surprised about is that Chrome extensions are even allowed to make localhost connections without requesting user approval. Is the assumption that everything running locally must be safe? What am I missing here?
This would be a non-issue if some kind of simple origin-authenticated token exchange was built into the protocol itself.
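To make that concrete, here's a minimal sketch, assuming an HTTP-based local server; the allow-list value is hypothetical. The point is that a localhost server can at least check which web origin is talking to it before honoring a request:

```python
# Minimal sketch, assuming an HTTP-based local MCP server. The allow-list
# entry is hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_ORIGINS = {"chrome-extension://my-approved-extension-id"}  # hypothetical

class OriginCheckingHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Browsers attach an Origin header to cross-origin requests, so a
        # web page or extension hitting localhost identifies itself.
        origin = self.headers.get("Origin", "")
        if origin not in ALLOWED_ORIGINS:
            self.send_error(403, "origin not allowed")
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8484), OriginCheckingHandler).serve_forever()
```

On its own this only keeps honest browsers out; the token half of the exchange (sketched further down) is what keeps everything else out.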
It’s the “system administrator”’s job to make sure the MCP server is running at the right privilege level with the correct data access. The MCP server can’t stop somebody from running it as root, any more than any other program can.
At the end of the day the MCP server should be treated as an extension of the user. Whatever the user can do, so too can the MCP server. (I mean, this isn’t strictly true: you can run the MCP server under its own account or inside some sandbox, and that will probably start to happen soon enough.)
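For instance, here's a rough sketch of that "own account" option: launching the server as a dedicated low-privilege user. It assumes a hypothetical "mcp-runner" account already exists and that the launcher itself runs as root; all names are illustrative:

```python
# Rough sketch: launch the MCP server as a dedicated low-privilege user.
# Assumes an "mcp-runner" account exists and this launcher runs as root.
import os
import pwd
import subprocess

def run_as(user: str, argv: list[str]) -> None:
    info = pwd.getpwnam(user)

    def drop_privileges() -> None:
        # Runs in the child just before exec; drop the group first, then
        # the user, or the setuid call will lock us out of setgid.
        os.setgid(info.pw_gid)
        os.setuid(info.pw_uid)

    subprocess.run(argv, preexec_fn=drop_privileges, check=True)

run_as("mcp-runner", ["python", "my_mcp_server.py"])  # illustrative command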
Many other programs on the system aren't an extension of the user. And they can access ports.
How could it do authentication? Easily. The most basic option is for the server to put a secret token in your user folder, so only code with access to that token can talk to it.
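A minimal sketch of that token-file idea, assuming a plain HTTP server; the path and port are illustrative, not any MCP convention:

```python
# Sketch: the server mints a secret, stores it in a user-only-readable file,
# and rejects any client that can't present it. Path/port are illustrative.
import http.server
import secrets
from pathlib import Path

TOKEN_PATH = Path.home() / ".my-mcp" / "token"  # hypothetical location

def write_token() -> str:
    TOKEN_PATH.parent.mkdir(mode=0o700, exist_ok=True)
    token = secrets.token_urlsafe(32)
    TOKEN_PATH.write_text(token)
    TOKEN_PATH.chmod(0o600)  # readable only by the owning user
    return token

class TokenCheckingHandler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        # Only code that could read the token file can present it.
        if self.headers.get("Authorization") != f"Bearer {EXPECTED}":
            self.send_error(401, "missing or bad token")
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    EXPECTED = write_token()
    http.server.HTTPServer(("127.0.0.1", 8484), TokenCheckingHandler).serve_forever()
```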
On Linux it can be even simpler. Don't bind the server to a TCP port; bind it to a Unix socket file.
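Something like this sketch (the socket path is illustrative). A nice bonus is that a browser extension can't open a Unix socket at all, and ordinary file permissions decide who may connect:

```python
# Sketch: bind to a Unix domain socket instead of a TCP port, so file
# permissions gate access. The socket path is illustrative.
import os
import socket

SOCK_PATH = "/run/user/1000/my-mcp.sock"  # hypothetical path

if os.path.exists(SOCK_PATH):
    os.unlink(SOCK_PATH)  # clean up a stale socket from a previous run

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(SOCK_PATH)
os.chmod(SOCK_PATH, 0o600)  # only the owning user can connect
srv.listen(1)

while True:
    conn, _ = srv.accept()
    request = conn.recv(4096)  # toy framing; a real server needs a protocol
    conn.sendall(b"ok\n")
    conn.close()
```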
You'd think the last 30+ years of regret and hacky attempts to add auth to email and HTTP (just the top two that come to mind) hadn't happened.
Especially if you could ask another AI: "I have access to an MCP running on a victim's computer with these tools. What can you do with them?" => "Well, start by reading .ssh/id_rsa, and I'd look for any crypto wallets. Then you can move on to reading personal files for blackmail, or sniff passwords..." and just let it "do its thing" as an attacking agent. The fact that the whole attack could be automated is what creeps me out!
Granted, it doesn’t distinguish between “resources”, “tools” and “prompts”, but I think the line is blurry anyway.
And yes it can be used locally.
MCP is a tool calling protocol. Models are trained on it as a way to do stuff outside their sandbox. OpenAPI isn’t a tool calling protocol but more of a schema to describe interfaces.
You could write an MCP that exposes an OpenAPI compatible set of interfaces, but you couldn’t write an OpenAPI thing to call… well… anything. OpenAPI doesn’t cover the actual tool calling.
In addition, even if OpenAPI would work for this, it’s massive and contains a ton of extra “stuff” that would overwhelm the model’s precious context window. Unless the OpenAPI schema was explicitly intended for LLM consumption, the results will be a mixed bag, because the LLM will have to spend half its time making sense of the schema. A well-designed MCP might take an OpenAPI endpoint suite and wrap it in thoughtful tool calls so the LLM doesn’t have to parse a giant schema doc (also, the LLM still needs something to actually make the HTTP call, and guess how it would do that? Through MCP, of course!)
By contrast, MCP tools expose a slender, LLM-optimized interface that requires little “thought” to call.
Honestly though, comparing OpenAPI to MCP is a bit like comparing an XML schema to curl. They are completely different. MCP is for tool calling. It’s how you expose… well… anything, from calling into your shell to looking something up in your database. The only similarity is that MCP exposes a schema to the model to tell it what kinds of tool calls it can make. And if you read the spec, I’d imagine said schema looks a wee bit like OpenAPI (wouldn’t know, as I haven’t looked).
Seriously. Go write an MCP for something you think would be cool. Like an MCP for Claude that connects to your logging and lets Claude search the logs in a more structured way. Make something like “find_request(request_id)”, then let your code do all the searching and have it return the relevant logs. Watch as the model no longer has to spend a billion tokens figuring out your database schema, how to grep, etc. Good MCPs do all the grunt work so the LLM can focus on your task instead of spending tons of time bootstrapping. The entire exercise won’t even take half a day, and you’ll have yourself a cool new tool that saves you time (see the sketch below).
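For a concrete starting point, here's a rough sketch of that log-search idea using the official Python MCP SDK's FastMCP helper (assuming its current API); the log path and format are made up:

```python
# Sketch of the log-search MCP described above. The log location/format are
# hypothetical; the SDK usage follows the Python MCP SDK's FastMCP helper.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

LOG_FILE = Path("/var/log/myapp/app.log")  # hypothetical log file

mcp = FastMCP("log-search")

@mcp.tool()
def find_request(request_id: str) -> str:
    """Return every log line that mentions the given request id."""
    matches = [
        line
        for line in LOG_FILE.read_text().splitlines()
        if request_id in line
    ]
    return "\n".join(matches) or f"no log lines found for {request_id}"

if __name__ == "__main__":
    mcp.run()  # serves over stdio, which is how local MCP clients connect
```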
In short, MCP and OpenAPI are two entirely different concepts.
BTW, you should really run your MCP servers in a sandboxed environment, especially if they don't need to do things like `exec` or read from the filesystem. We do this in the https://mcp.run ecosystem by wrapping them in Wasm. Because they're Wasm, you could even run them right in the Chrome extension!
No, all HTTP clients set User-Agent.
Literally nothing about MCP makes this any easier or harder.
The permission itself doesn’t tell you anything about what powers it might grant. You need to know how all your local processes work to determine that, and most people have no idea.
It’s too generic for users to make reasonable decisions about. And that means that servers on localhost really should have authentication. Connecting client A to server B should be explicit.
https://github.com/dnakov/little-rat