2. Tell your AI to download web docs and use html2text + grep + head + tail to discover what it needs.
Seriously, it just works. There's no need for everyone to reinvent their own MCP server, and no need to paste your whole documentation into the prompt every time. Just improve what you already have:
- Add OAuth2 to your API endpoints, and improve your docs.
- For CLIs, there's no need to copy-paste your whole help output into SKILLS; just improve your existing --help.
If things get easier for humans to use and understand, they also get better for the AI itself. Don't duplicate the work.
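The doc-discovery workflow above can be sketched as a plain shell pipeline. The docs file here is a stand-in (in practice you'd start with something like `curl -s URL | html2text`); the file path and contents are invented for illustration:

```shell
# Stand-in for: curl -s https://example.com/docs.html | html2text > /tmp/docs.txt
cat > /tmp/docs.txt <<'EOF'
API Reference
Authentication
Use OAuth2 bearer tokens in the Authorization header.
Endpoints
GET /users -- list users
EOF

# Grep for the topic of interest, show a line of context, cap the output
grep -n -A 1 "Authentication" /tmp/docs.txt | head -5
```

The point is that the AI never needs the whole document in context: it greps for the section it needs and reads only that slice, exactly as a human skimming the page would.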
Point 1) above definitely needs some kind of protocol, but not a whole MCP server written in some language and hosted somewhere. A JSON file is enough: the AI client downloads it, and the user selects which group of methods/endpoints is allowed (read-only, write, admin). THIS DOES NOT need to go in the prompt; it validates the HTTP calls. At most, the prompt might contain a hint for the AI, e.g. "Only read-only endpoints are allowed". That's it!
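A minimal sketch of what that could look like. The manifest format, group names, and endpoint rules below are all invented (no such spec exists); the point is only that the client validates each outgoing HTTP call against the user's selection, outside the prompt:

```python
# Hypothetical permission manifest the AI client downloads (invented format)
MANIFEST = {
    "groups": {
        "read-only": ["GET /users", "GET /orders"],
        "write": ["POST /orders"],
        "admin": ["DELETE /users"],
    }
}

def is_allowed(method: str, path: str, selected_groups: list[str]) -> bool:
    """Check an outgoing HTTP call against the groups the user enabled.

    This runs in the client as a hard gate on the actual HTTP request;
    nothing here ever needs to be pasted into the model's prompt.
    """
    for group in selected_groups:
        for rule in MANIFEST["groups"].get(group, []):
            rule_method, rule_path = rule.split(" ", 1)
            if method == rule_method and path == rule_path:
                return True
    return False

print(is_allowed("GET", "/users", ["read-only"]))     # True
print(is_allowed("DELETE", "/users", ["read-only"]))  # False
```

The AI can still try any call it likes; the client simply refuses disallowed ones, which is a far stronger guarantee than a prompt instruction.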
dmilicev2•1h ago
At least not at the moment, and perhaps it will stay that way. It's logical to think that LLMs will always be more expensive to run than a simple web or shell script built for a specialised purpose.
Arguably you can drop in an API or a local script for the AI to consume, but I do see the benefit of an industry standard like MCP if you want something that runs as an AI-agnostic infrastructure layer.
Lethalman•1h ago
dmilicev2•7m ago