I'm building Librarian (https://uselibrarian.dev/), an open-source (MIT) context management tool that stops AI agents from burning tokens by blindly re-reading their entire conversation history on every turn.
The Problem: If you're building agentic loops in frameworks like LangGraph or OpenClaw, you hit two walls fast:
Financial Cost: Cumulative token usage scales quadratically with conversation length, because every turn re-sends the entire history. Passing the whole transcript on each call gets incredibly expensive.
Context Rot: As the context window fills up, the LLM suffers from the "Lost in the Middle" effect. Response latency spikes, and reasoning accuracy drops.
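To make the quadratic cost concrete, here's a quick back-of-the-envelope comparison (the 200-tokens-per-message figure is purely illustrative, not a benchmark number):

```python
# Back-of-the-envelope: re-sending full history vs. a curated context.
# Assumes ~200 prompt tokens per message (illustrative only).
TOKENS_PER_MESSAGE = 200

def full_history_cost(turns: int) -> int:
    """Total prompt tokens when turn k re-sends all k prior messages."""
    return sum(k * TOKENS_PER_MESSAGE for k in range(1, turns + 1))

def curated_cost(turns: int, window_tokens: int = 800) -> int:
    """Total prompt tokens with a roughly fixed-size curated context."""
    return turns * window_tokens

print(full_history_cost(50))  # → 255000  (grows ~O(n^2))
print(curated_cost(50))       # → 40000   (grows O(n))
```

Even with generous assumptions, the brute-force curve dominates fast; the per-turn gap is modest early on and enormous by turn 50.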
The standard workaround is vector search (RAG) over past messages, but embedding similarity ignores temporal ordering and cross-message dependencies — a query like "the fix we agreed on earlier" retrieves nothing useful.
How Librarian Fixes This: We replaced brute-force context windowing with a lightweight reasoning pipeline:
Index: After each message, a smaller model asynchronously writes a compressed summary (~100 tokens), building a running index of the conversation.
Select: When a new prompt arrives, Librarian reads the summary index and reasons about which specific historical messages are actually relevant to the current turn.
Hydrate: It fetches only those selected messages and passes them to the responder.
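The three steps above can be sketched roughly like this. To be clear, the class and function names here are my illustration of the described flow, not Librarian's actual API; `summarize` and `select_relevant` stand in for the smaller-model LLM calls:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    id: int
    text: str
    summary: str = ""  # ~100-token compressed summary, filled in async

@dataclass
class ConversationIndex:
    messages: list[Message] = field(default_factory=list)

    def index(self, msg: Message, summarize) -> None:
        # Step 1 (Index): a smaller model compresses the new message.
        msg.summary = summarize(msg.text)
        self.messages.append(msg)

    def hydrate(self, prompt: str, select_relevant) -> list[str]:
        # Step 2 (Select): reason over the cheap summaries only.
        summaries = [(m.id, m.summary) for m in self.messages]
        relevant_ids = set(select_relevant(prompt, summaries))
        # Step 3 (Hydrate): fetch only the selected full messages.
        return [m.text for m in self.messages if m.id in relevant_ids]
```

The responder then gets the hydrated messages plus the new prompt, so its context is curated rather than the full transcript.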
The Results: Instead of passing 2,000+ tokens of noise, you pass a highly curated context of ~800 tokens. In our 50-turn benchmarks, this reduces token costs by up to 85% while actually increasing answer accuracy (82% vs 78% for brute-force) because the distracting noise is removed. It currently works as a drop-in integration for LangGraph and OpenClaw.
I'd love for you to check out the benchmark suite, try the integrations, and tear the methodology apart. I'll be hanging out in the comments to answer questions, debug, or hear why this approach is terrible. Thanks!
Pinkert•56m ago
Currently, the open-source version of Librarian uses a general-purpose model to read the summary index and route the relevant messages. It works great for accuracy and drastically cuts token costs, but it adds a latency penalty — most noticeable on shorter conversations — because an extra LLM inference step has to complete before your actual agent can respond.
To solve this, we are training a heavily quantized, fine-tuned model optimized solely for the context-selection task. The goal is to push selection latency below one second so the entire pipeline feels transparent. (We have a waitlist up for this hosted version on the site.)
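For anyone curious what the selection task looks like as a fine-tuning target, here's the rough shape of a training example as I'd frame it — this is my sketch of the problem framing, not our actual training data format: the input is the new prompt plus the summary index, and the output is just the set of relevant message IDs.

```python
import json

# Hypothetical training example for a context-selection model:
# the model reads the prompt plus the summary index and emits
# only the IDs of messages worth hydrating.
example = {
    "input": {
        "prompt": "What port did we settle on for the gRPC service?",
        "index": [
            {"id": 0, "summary": "User asks about deployment options."},
            {"id": 1, "summary": "Agreed to expose gRPC on port 50051."},
            {"id": 2, "summary": "Discussion of logging verbosity."},
        ],
    },
    "output": {"relevant_ids": [1]},
}

# A constrained output (a short JSON list of ints) keeps decoding
# cheap, which matters for the sub-second latency target.
print(json.dumps(example["output"]))  # → {"relevant_ids": [1]}
```

Keeping the target that small also makes exact-match evaluation trivial, which helps when iterating on the fine-tune.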
If anyone here has experience fine-tuning smaller models (like Llama 3 or Mistral) strictly for high-speed classification/routing over context indexes, I'd love to hear what pitfalls we should watch out for.