I’m the creator of LTP (Lazy Tool Protocol). I built this because I was frustrated with the hidden costs of building AI agents with many tools.
The current standard for tool-calling (like MCP) often requires loading every tool's full JSON schema into the AI's context at the start of a session. If you have dozens or hundreds of tools, you’re burning through tens of thousands of tokens before the agent even says "Hello."
LTP changes this by introducing a "Lazy Loading" pattern.
Instead of bloating the context, LTP uses a CLI-based bridge. The AI agent only fetches the specific tool definitions it needs, when it needs them.
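In simplified form, the pattern looks like this (the names here are illustrative, not the real implementation): a schema is only fetched the first time a tool is actually used, then cached.

```typescript
// Sketch of lazy tool-definition loading: schemas are fetched on demand
// and cached, instead of being preloaded into the model's context.
// `ToolSchema` and `LazyToolRegistry` are illustrative names only.

type ToolSchema = { name: string; signature: string };

class LazyToolRegistry {
  private cache = new Map<string, ToolSchema>();
  private fetches = 0;

  // `loader` stands in for a round-trip through the CLI bridge.
  constructor(private loader: (name: string) => ToolSchema) {}

  get(name: string): ToolSchema {
    let schema = this.cache.get(name);
    if (!schema) {
      this.fetches++; // the cost is paid only on first use
      schema = this.loader(name);
      this.cache.set(name, schema);
    }
    return schema;
  }

  get fetchCount(): number {
    return this.fetches;
  }
}

const registry = new LazyToolRegistry((name) => ({
  name,
  signature: `${name}(args: string[]): string`,
}));

registry.get("read_file");
registry.get("read_file"); // cached: no second fetch
console.log(registry.fetchCount); // → 1
```

With a hundred registered tools, an agent that only ever touches five of them pays for five schemas, not a hundred.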
What makes LTP 0.1.0 different:
Up to 93% Token Savings: In my latest benchmarks, LTP reduced token overhead from 300,000 to just 20,000 for a 100-call session.
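The headline percentage follows directly from those two numbers:

```typescript
// Savings ratio from the benchmark figures above.
const before = 300_000; // tokens, eager loading
const after = 20_000;   // tokens, LTP lazy loading
console.log(`${(((before - after) / before) * 100).toFixed(1)}% saved`); // → 93.3% saved
```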
The --schema Flag: A single upfront call returns compact function signatures for every tool, so the AI can understand hundreds of tools with minimal overhead and no repeated metadata calls.
Executable "Crafts": We’re moving beyond simple prompts. A "Craft" is a reusable package that combines precise AI instructions (CRAFT.md) with executable automation scripts (execute.ts). It’s like a library of expert skills for your AI.
Security-First: I know running local commands is a concern. LTP includes built-in whitelisting, sandbox path restrictions, and mandatory confirmation for dangerous operations like file deletions.
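In simplified form, the checks look something like this (the allow-list entries and sandbox root below are placeholders, not LTP's actual configuration):

```typescript
// Illustrative version of the three checks described above:
// whitelist, sandbox path restriction, and confirmation for dangerous ops.
import * as path from "path";

const WHITELIST = new Set(["ls", "cat", "rm"]); // placeholder allow-list
const DANGEROUS = new Set(["rm"]);              // ops requiring confirmation
const SANDBOX = "/workspace";                   // placeholder sandbox root

function checkCall(cmd: string, target: string, confirmed: boolean): string {
  if (!WHITELIST.has(cmd)) return "rejected: not whitelisted";
  const resolved = path.resolve(SANDBOX, target);
  if (!resolved.startsWith(SANDBOX + path.sep)) return "rejected: outside sandbox";
  if (DANGEROUS.has(cmd) && !confirmed) return "rejected: confirmation required";
  return "allowed";
}

console.log(checkCall("rm", "notes.txt", false));     // → rejected: confirmation required
console.log(checkCall("cat", "../etc/passwd", true)); // → rejected: outside sandbox
```

The path check resolves the target against the sandbox root first, so `../` traversal can't escape it.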
How it works: You give your AI a simple system prompt (provided in the repo) that teaches it to use the ltp CLI. The AI then uses ltp list --schema to see what’s available and ltp call to execute tools.
I'm currently running this on my VPS and using it for my own daily workflows.
Repo: https://github.com/JuN-B-official/ltp
Website: https://ltp.jun-b.com
Efficiency analysis: https://ltp.jun-b.com/docs/effect
I’d love to get your feedback on the architecture and the efficiency of this approach!