LLM agents stop being “just chatbots” the moment they start using MCP tools.
But once you wire multiple MCP servers into your workflow, their tool definitions quietly burn tokens and margin on every request.
Here is a deep dive on the latest techniques from Anthropic, Google, and OpenAI that help reclaim a significant portion of LLM input context when working with MCP tools.
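
To see where those tokens go, here is a minimal sketch of the problem (Python, using tiktoken; the two tool schemas are made-up examples, not real MCP servers) that estimates how many input tokens a set of MCP tool definitions adds to every request:

    # Minimal sketch: estimate how many input tokens MCP tool definitions consume.
    # The tool schemas below are hypothetical examples, not real MCP servers.
    import json
    import tiktoken

    # Hypothetical tool definitions, roughly as they would be injected into the model's context.
    TOOL_DEFINITIONS = [
        {
            "name": "search_issues",
            "description": "Search issues in the tracker by free-text query.",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "Free-text search query."},
                    "limit": {"type": "integer", "description": "Max results to return."},
                },
                "required": ["query"],
            },
        },
        {
            "name": "get_file",
            "description": "Fetch a file's contents from the repository.",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "path": {"type": "string", "description": "Repository-relative path."},
                },
                "required": ["path"],
            },
        },
    ]

    def count_tool_tokens(tools: list[dict], encoding_name: str = "cl100k_base") -> int:
        """Rough token count for the serialized tool definitions."""
        enc = tiktoken.get_encoding(encoding_name)
        return len(enc.encode(json.dumps(tools)))

    if __name__ == "__main__":
        total = count_tool_tokens(TOOL_DEFINITIONS)
        print(f"{len(TOOL_DEFINITIONS)} tools ~ {total} input tokens per request")

Multiply that by a few servers exposing dozens of tools each, and the overhead stacks up before the model has read a single word of your actual prompt.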