Our research lab discovered this novel threat back in July (https://invariantlabs.ai/blog/toxic-flow-analysis) and built tooling around it. It's an extremely common class of issue that many people don't realize exists: multiple MCP servers that are each safe individually, but together can cause real problems.
ALAN
It's called Tron. It's a security
program itself, actually. Monitors
all the contacts between our system
and other systems... If it finds
anything going on that's not scheduled,
it shuts it down. I sent you a memo
on it.
DILLINGER
Mmm. Part of the Master Control Program?
ALAN
No, it'll run independently.
It can watchdog the MCP as well.
DILLINGER
Ah. Sounds good. Well, we should have
you running again in a couple of days,
I hope.In general, the only way to make sure MCPs are safe is to limit which connections are made in an enterprise setting
It would be silly to provide every employee access to GitHub, regardless of whether they need it. It’s just distracting and unnecessary risk. Yet people are over-provisioning MCPs like you would install apps on a phone.
Principle of least access applies here just as it does anywhere else.
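In client terms, least access can be as simple as what the config exposes. A sketch assuming a Claude-Desktop-style `mcpServers` config file; the server name and package here are hypothetical, and the point is only that the block lists exactly what this user's role needs:

```json
{
  "mcpServers": {
    "internal-docs": {
      "command": "npx",
      "args": ["-y", "internal-docs-mcp"]
    }
  }
}
```

Anything not listed simply doesn't exist for that user, which is a much smaller audit surface than a phone-style pile of installed servers.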
This org has gone to some dubious lengths to make a name for itself, including submitting backdoored packages to public npm repos that would exfiltrate your data to a Snyk-controlled C&C. That included the environment: the packages would send your username along with any env vars like git/AWS/etc. auth tokens.
This might give them some credibility in this space; maybe they stand a decent chance of scanning MCPs for backdoors, given their own experience placing malicious code on other people's systems.
I didn't even know what an MCP server was until I noticed the annoying category in VSCode's Extensions panel today. I was only able to get rid of it by turning off a broad AI-related flag in settings (fine by me; I wish I'd known it was there earlier). An hour later, I'm seeing this.
cyberax•3mo ago
I tried and failed after about 3 days of dealing with AI-slop-generated nonsense that has _never_ worked. The MCP spec was probably created by brainless AI agents, since it defines exactly two authentication methods: no authentication whatsoever, and OAuth requiring bleeding-edge features (dynamic client registration) that neither Google nor Microsoft implements.
The easiest way to do that right now is to ask users to download a random NodeJS package that runs locally on their machines with minimal confinement.
zingababba•3mo ago
https://github.com/modelcontextprotocol/modelcontextprotocol...
https://github.com/modelcontextprotocol/modelcontextprotocol...
https://github.com/modelcontextprotocol/modelcontextprotocol...
https://aaronparecki.com/2025/05/12/27/enterprise-ready-mcp
https://github.com/modelcontextprotocol/modelcontextprotocol...
https://www.okta.com/newsroom/press-releases/okta-introduces...
https://github.com/modelcontextprotocol/ext-auth/blob/main/s...
https://github.com/modelcontextprotocol/modelcontextprotocol...
https://github.com/modelcontextprotocol/modelcontextprotocol...
https://github.com/modelcontextprotocol/modelcontextprotocol...
mbreese•3mo ago
I think the only difference is the statefulness of the request. HTTP is stateless, but MCP has state? Is this right?
I haven’t seen many use cases for how to use the state effectively, but I thought that was the main difference over a plain REST API.
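One place the statefulness shows up concretely is the transport: with MCP's Streamable HTTP transport, the server issues an `Mcp-Session-Id` header on initialize and the client must echo it on every later request, unlike a stateless REST call. A rough sketch of the message and header shapes; the spec defines these names, but treat the details as illustrative, and the client name is made up:

```python
import json


def initialize_request() -> dict:
    """First message of an MCP session: an ordinary JSON-RPC 2.0 request."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-06-18",
            "capabilities": {},
            "clientInfo": {"name": "demo-client", "version": "0.1"},
        },
    }


def headers_for(session_id):
    """Follow-up requests must carry the session id the server handed back
    on initialize; that header is where the session 'state' is anchored."""
    h = {"Content-Type": "application/json"}
    if session_id is not None:
        h["Mcp-Session-Id"] = session_id
    return h


req = initialize_request()
# Suppose the server's initialize response carried Mcp-Session-Id: 3fa85f64;
# a later tools/list call would then reuse it:
follow_up = headers_for("3fa85f64")
print(json.dumps(req)[:40], follow_up["Mcp-Session-Id"])
```

So a plain REST API would be done after the first round trip, while an MCP client keeps a session (and, optionally, an SSE stream) alive.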
fasbiner•3mo ago
https://blog.fka.dev/blog/2025-06-06-why-mcp-deprecated-sse-...
There are a million "why don't you _just_ X?" hypothetical responses to all the real issues people have with streamable HTTP as implemented in the spec, but you can't argue your way into a level of ecosystem support that doesn't exist. The exact same screwup happened with OAuth, so we can see who is running the show and how they think.
It's hard to tell whether Anthropic has some material business plan behind these changes, or whether the people in charge of defining the spec are just out of touch, have non-technical bosses, and have managed to politically disincentivize other engineers from pointing out basic realities.
jhugo•3mo ago
MCP can use SSE to support notifications (since the protocol embeds a lot of state, the server needs a way to tell the client that state has changed) and elicitation (the MCP server asking the user to provide additional information to complete a tool call), and it will likely use SSE for long-running tool calls as well.
Many of these features have unfortunately been specified in the protocol before clear needs for them were described in detail, and before alternative approaches to the same problems were considered.
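The messages described above can be sketched as plain JSON-RPC payloads a server pushes to the client over SSE. The method names (`notifications/tools/list_changed`, `elicitation/create`) and the `requestedSchema` field follow the 2025-06-18 spec, but treat the exact shapes as illustrative; the prompt text is made up:

```python
# A notification: the server tells the client its tool list changed, so the
# client should re-fetch it. Notifications carry no "id" (no reply expected).
tools_changed = {
    "jsonrpc": "2.0",
    "method": "notifications/tools/list_changed",
}

# Elicitation: the server asks the *user* (via the client) for extra input
# mid tool call. Unlike a notification, this is a request and carries an id.
elicit = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "elicitation/create",
    "params": {
        "message": "Which repository should I deploy?",  # hypothetical prompt
        "requestedSchema": {
            "type": "object",
            "properties": {"repo": {"type": "string"}},
            "required": ["repo"],
        },
    },
}

print(tools_changed["method"], elicit["method"])
```

Both directions of traffic are why a one-shot request/response transport isn't enough for the full protocol.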
jonfw•3mo ago
You can debate all day whether bringing your own tools is better than giving the LLM a generic shell tool plus an API doc and letting it run curl. I like tools because they bring reproducibility.
MCP is really just a JSON-RPC spec. JSON-RPC can run over a variety of transports and under a variety of auth mechanisms, so MCP doesn't need to spec a transport or auth mechanism.
I totally agree with everybody that most MCP clients are half-assed and remote MCP is not well supported, but that's a business problem.
Every LLM tool today either runs locally (Cursor, Zed, IDEs, etc.), and so can run MCP servers as local processes with no auth, or is run by an LLM provider for whom interoperability is not a business priority. So the remote MCP story has not been fleshed out.
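"Really just a JSON-RPC spec" in miniature: a `tools/call` request and a matching result, independent of any transport or auth mechanism. The envelope fields come from JSON-RPC 2.0 and the MCP tools spec; the tool name and arguments are invented:

```python
import json

request = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "tools/call",
    "params": {
        "name": "get_weather",            # hypothetical tool
        "arguments": {"city": "Berlin"},
    },
}

# The same bytes could travel over stdio, HTTP, or anything else that can
# carry JSON; the envelope doesn't change with the transport.
wire = json.dumps(request)

response = {
    "jsonrpc": "2.0",
    "id": 42,  # must echo the request id
    "result": {
        "content": [{"type": "text", "text": "12 C, overcast"}],
        "isError": False,
    },
}
print(len(wire), response["id"])
```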
mbreese•3mo ago
Is it possible for the customer to provide their own bearer tokens (generated however) that the LLM can pass along to the MCP server? This is the closest thing to workable security I've looked at. I don't know whether user-supplied tokens are well supported by chat GUI/web clients, but it should be possible when calling an LLM through an API-style call, right (if you add additional pass-through headers)?
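A sketch of that idea, assuming you control the MCP client code: the user-supplied token never goes into the prompt or to the LLM provider; the client attaches it only on the hop to the MCP server. The URL is hypothetical; only the standard `Authorization` header is assumed:

```python
import urllib.request


def mcp_request(url: str, body: bytes, user_token: str) -> urllib.request.Request:
    """Build the HTTP request the MCP client sends to the MCP server,
    carrying the customer's own bearer token through verbatim."""
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {user_token}",
        },
        method="POST",
    )


req = mcp_request("https://mcp.example.com/mcp", b"{}", "user-supplied-token")
print(req.get_header("Authorization"))
```

The catch, as the replies note, is that this only works if the client and server agree on the auth scheme in the first place.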
arscan•3mo ago
In general, I'd say it's not a good idea to pass bearer tokens to the LLM provider; keep them in the MCP client. But your client has to interoperate with the MCP server at the auth level, which, as noted, is flaky at the moment across the ecosystem of generic MCP clients and servers.
cyberax•3mo ago
Nope. I assumed as much and even implemented bearer token authentication in the MCP server that I wanted to expose.
Then I tried to connect it to ChatGPT, and it turns out not to be supported at all. Your options are either no authentication whatsoever or OAuth with dynamic client registration. Claude at least allows static OAuth registration (you supply client_id and client_secret).
maxwellg•3mo ago
I'd love to hear more about the specific issues you're running into with the new version of the spec. (disclaimer - I work at an auth company! email in bio if you wanna chat)
cyberax•3mo ago
So far I haven't been able to. And there are no examples that I can find. It's all further complicated by the total lack of logs from ChatGPT detailing the errors.
I'll probably get there eventually and publish a blog...
dorkypunk•3mo ago
https://modelcontextprotocol.io/specification/2025-06-18/ser...
embedding-shape•3mo ago
I'm guessing it has the same shape as a normal message plus isError, so on the handling side you don't really have to do anything special: just proceed as normal and send the result to the LLM so it can correct if needed.
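That matches the tool-result shape in the linked spec page: a failed tool call comes back as an ordinary result with `isError` set, not as a JSON-RPC protocol error, so the client can forward the content to the LLM and let it retry. A sketch; the error text is invented:

```python
failure = {
    "jsonrpc": "2.0",
    "id": 3,
    "result": {
        "content": [
            {"type": "text", "text": "rate limit exceeded, retry in 60s"}
        ],
        "isError": True,
    },
}


def text_for_llm(message: dict) -> str:
    """Same handling path for success and failure: extract the text
    content and hand it to the model."""
    return " ".join(
        part["text"]
        for part in message["result"]["content"]
        if part["type"] == "text"
    )


print(text_for_llm(failure))
```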
throwaway314155•3mo ago
Even if you're doing local only - MCP tools can mostly be covered by simply asking Claude Code (or whatever) to use the bash equivalent.
cyberax•3mo ago
In other words, downloading random crap that runs unconfined and requires a shitty app like Claude Desktop.
BTW, Claude Desktop is ALSO an example of AI slop. It barely works: constantly closing chats, taking 10 seconds to switch between conversations, etc.
In my case, I wanted to connect our CRM with ChatGPT and ask it to organize our customer notes. And also make this available as a service to users, so they won't have to be AI experts with subscriptions to Claude.
tsouth•3mo ago
Metadata and resource indicators solve the rest of the problems that came with the change to the OAuth spec.
jngiam1•3mo ago
The LLMs are also really bad at generating correct code for OAuth logic: there are too many conditions, and the DCR dance is fairly complicated to get right.
Shameless plug: we're building an MCP gateway that takes in any MCP server, and we do the heavy lifting to make it compatible with any client on the other end (Claude, ChatGPT, even with custom actions); as a nice bonus it gives you SSO/logs as well. https://mintmcp.com if you're interested.
cyberax•3mo ago
Have you considered adding a stand-alone service? Perhaps using the AGPL+commercial license.