It uses a symbolic index, a navigation layer, semantic and PageRank-like ranking, and some context reduction/compression techniques to avoid resending and rereading the same files and lookups.
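For a rough idea of what PageRank-style ranking over a symbol graph can look like, here's a minimal sketch (not the tool's actual implementation; the graph and symbol names are made up): symbols that many other symbols reference accumulate higher scores, so they surface first in lookups.

```python
def pagerank(graph: dict[str, list[str]], damping: float = 0.85,
             iterations: int = 50) -> dict[str, float]:
    """graph maps each symbol to the symbols it references."""
    nodes = set(graph) | {t for targets in graph.values() for t in targets}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for src, targets in graph.items():
            if targets:
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new_rank[t] += share
        # Redistribute rank from dangling symbols (no outgoing references)
        dangling = damping * sum(rank[n] for n in nodes if not graph.get(n))
        for n in nodes:
            new_rank[n] += dangling / len(nodes)
        rank = new_rank
    return rank

# Hypothetical symbol graph: parse_config is referenced widely, so it ranks high.
graph = {
    "main": ["parse_config", "run_server"],
    "run_server": ["parse_config", "handle_request"],
    "handle_request": ["parse_config"],
    "parse_config": [],
}
print(sorted(pagerank(graph).items(), key=lambda kv: -kv[1]))
```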
I ran benchmarks mostly with Anthropic's models and saw roughly 20–40% token reduction on average, depending on the workflow; in some cases quite a bit more, sometimes less.
There's also a Claude Code hook that exposes the tools, but it's still a bit clunky.
It’s fully open-source, with an optional paid Pro / lifetime tier for support.