MCP solves this discovery problem, but it does so by rebuilding tool interaction from scratch: server processes, JSON-RPC transport, client-host handshakes. It got discovery right but threw out composability to get there. You can't pipe one MCP tool into another or run one in a cron job without a host process. Pulling a Confluence page, checking Jira for duplicates, and filing a ticket is three inference round-trips for work that should be a bash one-liner. I also get asked endlessly to log back in to my MCP servers, something the `gh` CLI never asks me to do.
I think the industry took a wrong turn here. We didn't need a new execution model for tools; we needed to add one capability to the execution model we already had. That's what Model Tools Protocol (MTP) is: a spec for making any CLI self-describing so LLMs can discover and use it.
MTP does that with a single convention: your CLI responds to `--mtp-describe` with a JSON schema describing its commands, args, types, and examples. No server, no transport, no handshake. I wrote SDKs for Click (Python), Commander.js (TypeScript), Cobra (Go), and Clap (Rust) that introspect the types and help strings your framework already has, so adding `--mtp-describe` to an existing CLI is a single function call.
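The convention is small enough to hand-roll even without an SDK. Here's a minimal Python sketch using plain argparse; the schema field names (`commands`, `args`, `type`, `examples`) follow the description above but the exact shape is illustrative, not the normative spec:

```python
import argparse
import json
import sys

def describe_schema():
    # Illustrative MTP-style schema for a hypothetical "greet" CLI.
    # Field names mirror the convention described above (commands,
    # args, types, examples); the normative shape lives in the spec.
    return {
        "name": "greet",
        "description": "Print a greeting",
        "commands": [
            {
                "name": "greet",
                "args": [
                    {"name": "--who", "type": "string", "required": True,
                     "description": "Who to greet"},
                ],
                "examples": ["greet --who world"],
            }
        ],
    }

def main(argv=None):
    argv = sys.argv[1:] if argv is None else argv
    # The whole convention: answer --mtp-describe with JSON and exit.
    if "--mtp-describe" in argv:
        print(json.dumps(describe_schema(), indent=2))
        return 0
    parser = argparse.ArgumentParser(prog="greet")
    parser.add_argument("--who", required=True)
    args = parser.parse_args(argv)
    print(f"hello, {args.who}")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The SDKs just automate this: they walk the parser your framework already built and emit the schema for you.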
I don't think MCP should disappear, so there's a bidirectional bridge. `mtpcli serve` exposes any `--mtp-describe` CLI as an MCP server, and `mtpcli wrap` goes the other direction, turning MCP servers into pipeable CLIs. The ~2,500 MCP servers out there become composable CLI tools you can script and run in CI without an LLM in the loop.
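Conceptually, the `serve` direction is a mechanical mapping: each command in an `--mtp-describe` schema becomes one MCP tool, with its input schema derived from the CLI arguments. A sketch of that translation in Python, with illustrative field names on both sides (this is not mtpcli's actual internals):

```python
def mtp_to_mcp_tools(mtp_schema):
    """Translate MTP-style command entries into MCP-style tool definitions.

    Field names on both sides are illustrative, not the normative specs.
    """
    tools = []
    for cmd in mtp_schema.get("commands", []):
        properties, required = {}, []
        for arg in cmd.get("args", []):
            key = arg["name"].lstrip("-")  # "--env" -> "env"
            properties[key] = {
                "type": arg.get("type", "string"),
                "description": arg.get("description", ""),
            }
            if arg.get("required"):
                required.append(key)
        tools.append({
            "name": cmd["name"],
            "description": cmd.get("description", ""),
            # MCP tools declare inputs as JSON Schema objects.
            "inputSchema": {
                "type": "object",
                "properties": properties,
                "required": required,
            },
        })
    return tools
```

The `wrap` direction inverts this: MCP tool definitions become CLI subcommands, and tool results stream to stdout so they can feed a pipe.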
The real payoff is composition: your custom CLI, a third-party MCP server, and jq in a single pipeline, no tokens burned. I'll post a concrete example in the comments.
Try it:

`npm i -g @modeltoolsprotocol/mtpcli && mtpcli --mtp-describe`
I know it's unlikely this will take off, as I can't compete with the great might of Anthropic, but I very much welcome collaborators on this. PRs are welcome on the spec, additional SDKs, or anything else. Happy building!

Spec and rationale: <https://github.com/modeltoolsprotocol/modeltoolsprotocol>
CLI tool: <https://github.com/modeltoolsprotocol/mtpcli>
SDKs: TypeScript (<https://github.com/modeltoolsprotocol/typescript-sdk>) | Python (<https://github.com/modeltoolsprotocol/python-sdk>) | Go (<https://github.com/modeltoolsprotocol/go-sdk>) | Rust (<https://github.com/modeltoolsprotocol/rust-sdk>)
nr378•2h ago
Say your team has an internal `infractl` CLI for managing your deploy infrastructure. No LLM has ever seen it in training data. You add `--mtp-describe` (one function call with any of the SDKs), then open Claude Code and type:
The first line runs `mtpcli`, which prints instructions teaching the LLM the `--mtp-describe` convention: how to discover tools, how schemas map to CLI invocations, and how to compose with pipes. The second line makes the LLM run `infractl --mtp-describe`, get back the full schema, and understand a tool it has never seen in training data. Now you say:

And it composes your custom CLI with a third-party MCP server it has never touched before: your tool, a Slack MCP server, and `jq`, in a pipeline the LLM wrote because it could discover every piece. That script can run in CI, or on a Raspberry Pi. No tokens burned, no inference round-trips. The composition primitives have been here for 50 years. Bash is all you need!
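For anyone curious what "schemas map to CLI invocations" means mechanically: given a command entry from a describe schema and a dict of argument values, building the argv is a few lines. A sketch with illustrative field names (`infractl` and its `restart` command are hypothetical, as above):

```python
def build_argv(tool, command, arg_values):
    """Turn an MTP-style command entry plus argument values into argv.

    `tool` is the executable name; `command` is one entry from the
    describe schema's command list. Field names are illustrative.
    """
    argv = [tool]
    if command["name"] != tool:
        argv.append(command["name"])  # subcommand form
    for arg in command.get("args", []):
        key = arg["name"].lstrip("-")
        if key in arg_values:
            argv.extend([arg["name"], str(arg_values[key])])
    return argv

cmd = {"name": "restart",
       "args": [{"name": "--service", "type": "string"}]}
print(build_argv("infractl", cmd, {"service": "api"}))
# -> ['infractl', 'restart', '--service', 'api']
```

Once the LLM can do this mapping for any tool, chaining invocations with `|` is just ordinary shell.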