Model Context Shell lets AI agents compose MCP tool calls using a scripting language similar to Unix shell. Instead of making each tool call individually (loading all intermediate data into context), the agent can express a workflow as a single pipeline that executes server-side.
Since the orchestration is deterministic and reproducible, you can also use it with Skills.
Tool orchestration runs outside the agent and the LLM context, so the agent can extract only the relevant parts of the data and load those into context. This saves tokens, but it also lets you work with data that is too big to fit in context, and lets your agent trigger a very large number of tool calls if needed.
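As a toy illustration of that extraction step, here's the idea in plain shell, using curl and jq against the public PokeAPI (these are stand-ins for illustration, not Model Context Shell's actual syntax):

```sh
# The full listing is fetched and reduced outside the agent; only the
# final count ever reaches the agent's context.
curl -s 'https://pokeapi.co/api/v2/pokemon?limit=100000' | jq '.count'
```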
Also, this is not just a tool that runs bash: it has its own execution engine, so it doesn't need full system access.
Example query: "List all Pokemon over 50 kg that have the chlorophyll ability"
Instead of 7+ separate tool calls loading all Pokemon data into context, the agent builds a single pipeline that:
1. Fetches the ability data
2. Extracts the Pokemon URLs
3. Fetches each Pokemon's details (7 tool calls)
4. Filters by weight and formats the results
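Expressed as a plain Unix shell pipeline with curl and jq, the workflow would look roughly like the sketch below. This shows the shape of the pipeline, not Model Context Shell's actual syntax, and it assumes PokeAPI's conventions (weight is reported in hectograms, so 50 kg is a weight of 500):

```sh
# 1. Fetch the chlorophyll ability data, and 2. extract each Pokemon's URL
curl -s 'https://pokeapi.co/api/v2/ability/chlorophyll' |
  jq -r '.pokemon[].pokemon.url' |
  # 3. Fetch each Pokemon's details (one request per URL)
  while read -r url; do curl -s "$url"; done |
  # 4. Keep Pokemon over 50 kg (PokeAPI weight is in hectograms)
  #    and format the results
  jq -s 'map(select(.weight > 500) | {name, kg: (.weight / 10)})'
```

Only the final filtered list would enter the agent's context; the intermediate ability payload and the per-Pokemon responses stay in the pipeline.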
At least in its current iteration, it's packaged as an MCP server itself, so you can use it with any agent. I made this and some other design choices so you can try it right away.
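Since most MCP clients share the same configuration shape, hooking it up should look something like the snippet below. The package name and command here are hypothetical placeholders; check the project's README for the real invocation:

```json
{
  "mcpServers": {
    "model-context-shell": {
      "command": "npx",
      "args": ["-y", "model-context-shell"]
    }
  }
}
```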