That said, the original spec needed some rapid iteration. With https support finally in relatively good shape, I hope we'll be able to take a year to let the API dust settle. Spec updates every three months are really tough, especially when not versioned, thoroughly documented, or archived properly.
It’s interesting to see other tools struggling to keep up. ChatGPT supposedly will get proper MCP client support “any day now”, but I don’t see codex supporting it any time soon.
Aider is very much struggling to adapt as well, as their whole workflow of editing and navigating files is easily replaced by MCP servers (and probably done better, as they provide much more effective ways of improving signal-to-noise), so it’ll be interesting to see how tools adapt.
I’d love for Claude Code (or any tool for that matter) to fully embrace the agentic way of coding, e.g. have multiple agents specialize in different topics and some “main” agent directing them all. Those workflows seem to be working really well.
People are going to continue doing that because these agentic tasks can take some time to run and checking in to approve a command so often becomes an annoyance.
I can’t see a way around that except some kind of sandboxing, or a concept of untrusted or tainted input rather than treating all tokens the same. Maybe a way of detecting whether a tool’s response falls within a threshold of acceptability for that MCP’s definition (which is easier with structured output), and using that to force a manual confirmation, or an outright rejection if the response is deemed unusual or unsafe.
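To make that concrete, here’s a hypothetical sketch of such a gate in JavaScript. Everything here is made up for illustration — the function name, the flat schema shape, and the keyword list are not part of MCP; a real implementation would validate against the tool’s actual declared output schema:

```javascript
// Hypothetical "tainted output" gate: classify a tool response before it
// reaches the model. Names and thresholds are illustrative only.
function classifyToolResponse(response, expectedSchema) {
  // Reject outright if the response doesn't match the tool's declared shape.
  for (const [key, type] of Object.entries(expectedSchema)) {
    if (typeof response[key] !== type) return "reject";
  }
  // Flag text that looks like an injected instruction for manual confirmation.
  const text = JSON.stringify(response).toLowerCase();
  const suspicious = ["ignore previous", "system prompt", "disregard"];
  if (suspicious.some((s) => text.includes(s))) return "confirm";
  return "allow";
}
```

The structured-output point matters here: with free text you can only keyword-match, but with a declared schema the "reject" branch becomes a real validation step.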
That said, I ditched Codex for Claude Code... sorry, OpenAI. No MCP and no way to interact during execution is a huge drawback.
I think we are starting to see these remote agent environments where each agent session gets its own sandbox environment to run things in. I bet that's where this is going.
Quite ironic isn't it?
> Javascript community suddenly got automatic code creation agents, and went to town.
I've been working on an MCP server[0] that lets LLMs safely and securely generate and execute JavaScript in a sandbox, including using `fetch` to make API calls. It includes a built-in secrets manager to prevent exposing secrets to the LLM. I think this unlocks a lot of use cases that require code execution without compromising security. The biggest one is that you can now ask the LLM to make API calls securely, because the JS is run in a C# interpreter with constraints on memory, time, and statement count, and with hidden secrets (e.g. API keys).
The implementation is open source with sample client code in JS using Vercel AI SDK with a demo UI as well.
Couldn't AI help with that?
One weird thing I found a few weeks ago, when I added my remote MCP to Claude's integration tab on the website, I was getting OAuth errors.
Turns out they are requiring a special "claudeai" scope. Once I added that to my server, I was able to use it remotely in claude desktop!
I couldn't find any docs or reasons online for them requesting this scope.
Also, I have been using remote mcps in claude code for weeks with the awesome mcp-remote proxy tool. It's nice to not need that any longer!
Then, just as I'm writing a book on MCP servers with OAuth, elicitations come out! I'm rushing to update the book to be the best source for every part of the latest spec, as I can already see lots of gaps in the documentation on all these things.
Huge shout out to VS Code for being the best MCP Client, they have support for Elicitations in Insiders already and it works great from my testing.
For more curious and lazy people -- what are elicitations?
You can ask the client to fill in a dropdown or input.
Example they give is a restaurants table booking tool.
Imagine saying book a table at 5pm. But 5pm is taken.
You can “elicit” the user and have them select from a list of available times
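On the wire, per my reading of the new spec (field names may drift as it evolves), the server sends the client an `elicitation/create` request with a message and a restricted JSON schema for the answer — an enum naturally renders as that dropdown:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "elicitation/create",
  "params": {
    "message": "5pm is taken. Pick another time:",
    "requestedSchema": {
      "type": "object",
      "properties": {
        "time": { "type": "string", "enum": ["5:30pm", "6pm", "7:15pm"] }
      },
      "required": ["time"]
    }
  }
}
```

The client then replies with something like `{"action": "accept", "content": {"time": "6pm"}}` (or a decline/cancel action), and the tool call continues with the chosen value.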
Will be trying the MCP with the live call as well, I think it should work.
For what it’s worth, I’ve been using WitsyAi: it’s fully free, open source, and serves as a universal desktop chat-client (with remote MCP calling). You just need to BYO API keys.
Remote MCPs are close to my heart; I’ve been building a “Heroku for remote MCP tools” over at Ninja[2] to make it easy for people to spin up and share MCP tools without the usual setup headaches.
Lately, I’ve also been helping folks get started with MCP development on Raspberry Pi. If you’re keen to dive in, feel free to reach out [3].
[2] https://ninja.ai
I like the fact that this mcp-debug tool can present a REPL and act as an MCP server itself.
We've been developing our MCP servers by first testing the principle with the "meat robot" approach - we tell the LLM (sometimes just through the stock web interface, no coding agent) what we're able to provide and just give it what it asks for - when we find a "tool" that works well we automate it.
This feels like it's an easier way of trying that process - we're finding it's very important to build an MCP interface that works with what LLMs "want" to do. Without impedance matching it can be difficult to get the overall outcome you want (I suspect this is worse if there's not much training data out there that resembles your problem).
Do not get comfy with any of these protocols or companies. They can destroy your workflows and livelihood in an instant.
Not even close to true - VS Code and Cursor both have MCP support, and IME VS Code’s is great.
Also, I’m curious about your claim to have spent 10 months building MCP servers, as the spec has only been out since the end of November - which is ~7 months.
10 months must have been hyperbole but I've moved two states and a country in that time, apologies.
Also, Claude Desktop is honestly one of the more minimal implementations of an MCP client, with a bunch of missing features. Just search for MCP clients; there are many you can use.
You hook up your GitHub repo and it’ll clone it, setup the env and then work on the task you give it
E.g. Codename Goose https://block.github.io/goose/docs/quickstart/
Which I think supports Gemini along with all the other major AI providers, plus MCP. I have heard anecdotally that it doesn't work as well as Claude Code, so maybe there are additional smarts required to make a top-notch agent.
I've also heard that Claude is just the best LLM at tool use at the moment, so YMMV with other models.
You have to get a personal access token instead and pass it in the command when you do "claude mcp add". Sorry, I don't have the exact format in front of me right now, but you should be able to get it working with that.
It's basically an adapted version of the GitHub remote MCP server instructions that uses the token
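From memory (so double-check against `claude mcp add --help` and GitHub's own remote MCP docs), the resulting config ends up shaped roughly like the Cloudflare example elsewhere in this thread, but with an Authorization header carrying the PAT instead of OAuth — the URL and field names below are my best recollection, not verified:

```json
{
  "mcpServers": {
    "github": {
      "type": "http",
      "url": "https://api.githubcopilot.com/mcp/",
      "headers": { "Authorization": "Bearer <YOUR_GITHUB_PAT>" }
    }
  }
}
```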
1. Our tickets are sometimes "unqualified" and don't have enough information for a human to work on them (let alone an AI agent).
2. Tickets can be created by accident or due to human error, and would then result in time spent working on things that don't matter.
3. AI tends to write code that violates our own "unwritten rules", and we are still in the process of getting those rules written down so that our own agentic workflows work properly.
I could definitely see the value in this for certain types of updates, but unfortunately it wouldn't work for our system.
That being said, I tried to sign up, and it broke horribly, and it looks like it was knocked out in 5 minutes, so my desire to give it access to my production codebase is fairly minimal.
Now a lot of people use it to add context to their model. And also tool calls?
I am using continue.dev not Claude but I imagine this tech stack will be ported everywhere.
As a Python dev, though, I don't quite understand yet how it works and what service I should be running. Or be using. Tbh. Can anyone ELI5?
The blender one is also fun as a starting point, if you do any 3d modelling (or even if you don't).
They're also fun and easy to build.
Here's one I made - it wraps the vscode debugger: https://github.com/jasonjmcghee/claude-debugs-for-you
I've specifically tested it with continue.dev so it might serve as a useful example / template.
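To give a feel for how easy they are to build: below is a sketch of the message handling at the core of an MCP server, using only Node built-ins. A real server should use the official @modelcontextprotocol/sdk (which also handles initialization and capabilities); this just shows the shape of `tools/list` and `tools/call` for a single made-up "add" tool:

```javascript
// Minimal JSON-RPC handler for a hypothetical one-tool MCP server.
// Omits initialize/capabilities negotiation for brevity.
function handleMessage(msg) {
  const reply = (result) => ({ jsonrpc: "2.0", id: msg.id, result });
  switch (msg.method) {
    case "tools/list":
      // Advertise the tools this server offers, with JSON Schema inputs.
      return reply({
        tools: [{
          name: "add",
          description: "Add two numbers",
          inputSchema: {
            type: "object",
            properties: { a: { type: "number" }, b: { type: "number" } },
            required: ["a", "b"],
          },
        }],
      });
    case "tools/call": {
      // Execute the requested tool and return its result as content.
      const { a, b } = msg.params.arguments;
      return reply({ content: [{ type: "text", text: String(a + b) }] });
    }
    default:
      return { jsonrpc: "2.0", id: msg.id,
               error: { code: -32601, message: "Method not found" } };
  }
}
```

For a stdio server you'd wire this to stdin/stdout, reading one JSON object per line and writing each reply back out — the SDK does exactly that plumbing for you.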
It allows publishing any text or Claude artifact directly from Claude.
I have made it mostly for fun and as an experiment to try what is possible.
This now natively supports the latest streamable HTTP transport protocol, and the server is entirely remote: nothing runs on your local machine, it's just a URL (usually ending with /mcp; that's not mandatory, but it usually distinguishes them from /sse servers).
So I don't really understand what's new in this announcement.
Maybe what's actually new is streamable HTTP and OAuth?
https://github.com/anthropics/claude-code/blob/main/CHANGELO...
1.0.27
Streamable HTTP MCP servers are now supported
Remote MCP servers (SSE and HTTP) now support OAuth
MCP resources can now be @-mentioned
https://velvetshark.com/ai-company-logos-that-look-like-butt...
https://scienceleadership.org/blog/the_use_of_illustration_i...
What is the additional functionality?
Are they all very explicit requests? I was hoping plugging an MCP into it would just make it more capable automatically
- Create a basic Next.js project with app router. use context7
- Create a script to delete the rows where the city is "" given PostgreSQL credentials. use context7
I feel like Claude Code has independently invoked MCP Fetch a couple of times over the course of hundreds of interactions, but yes, it looks like explicitly invoking them is the norm, at least until LLMs get better at matching tools to requests.
You can get the brave web search MCP with a free tier API key and it's super fast, combine that with the fetch MCP and it grabs information online really quickly.
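For reference, the config for that combo looks something like the following — the package name is from memory of the modelcontextprotocol reference-servers repo, so verify it before copying:

```json
{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": { "BRAVE_API_KEY": "<your free-tier key>" }
    }
  }
}
```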
explore our MCP directory
https://www.anthropic.com/partners/mcp
The Cloudflare and Linear MCP servers (at a minimum) seem to use the same approach, the mcp-remote npm package, e.g.
https://github.com/cloudflare/mcp-server-cloudflare/tree/mai...
{
"mcpServers": {
"cloudflare": {
"command": "npx",
"args": ["mcp-remote", "https://builds.mcp.cloudflare.com/sse"]
}
}
}
But mcp-remote is clearly documented as experimental:
> Note: this is a working proof-of-concept but should be considered experimental
https://www.npmjs.com/package/mcp-remote
I'm not sure how this could be considered anything other than professional negligence. I'm reminded of Kyle Kingsbury's CraftConf talk, Hope Springs Eternal: https://theburningmonk.com/2015/06/craftconf15-takeaways-fro...
Anthropic, having created MCP, shouldn't be the one lagging behind, though, I agree.
That MCP remote workaround is no longer necessary
I feel like the specific transports should be their own specs, TBH. That would allow a lot of flexibility around this stuff, and it would decouple transport changes from main protocol changes.
Have you seen significant need for this? I've been trying to find data on things like "how many MCP clients are there really" - if it takes off to the point where everything is an MCP client and dynamically discovers what tools it needs beyond what it was originally set up for, sure.