Semble is our solution for this. It combines static Model2Vec embeddings (using our latest static model: potion-code-16M) with BM25, fused via RRF and reranked with code-aware signals. Everything runs on CPU since no transformers are involved. On our benchmark of ~1250 query/document pairs across 63 repos and 19 languages, it uses 98% fewer tokens than grep+read and reaches 99% of the retrieval quality of a 137M-parameter code-trained transformer, while being ~200x faster.
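For anyone unfamiliar with RRF: it merges ranked lists from different retrievers using only ranks, so BM25 scores and embedding similarities never need to be calibrated against each other. A minimal sketch (the constant k=60 is the value from the original RRF paper, not necessarily what semble uses, and the file names are made up):

```python
# Reciprocal Rank Fusion: each retriever contributes 1/(k + rank)
# per document; documents ranked highly by multiple retrievers win.
def rrf_fuse(rankings, k=60):
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["a.py", "b.py", "c.py"]      # ranked by BM25
embed_hits = ["a.py", "c.py", "d.py"]     # ranked by embedding similarity
fused = rrf_fuse([bm25_hits, embed_hits])
# "a.py" is first (top-ranked by both); "c.py" beats "b.py"
# because it appears in both lists.
```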
Main features:
- Token-efficient: 98% fewer tokens than grep+read
- Fast: ~250ms to index a typical repo on our benchmark, ~1.5ms per query on CPU (very large repos may take longer)
- Accurate: 0.854 NDCG@10, 99% of the best transformer setup we tested
- MCP server: drop-in for Claude Code, Cursor, Codex, OpenCode
- Zero config: no API keys, no GPU, no external services
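The NDCG@10 number above is standard normalized discounted cumulative gain. For readers who haven't seen it, a minimal sketch with binary relevance labels (illustrative only, not our benchmark harness):

```python
import math

# NDCG@k: discounted gain of the returned ranking, normalized by
# the gain of the ideal ranking, so 1.0 means a perfect ordering.
def ndcg_at_k(ranked_relevances, k=10):
    def dcg(rels):
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0

ndcg_at_k([1, 0, 0, 0])  # relevant doc ranked first -> 1.0
ndcg_at_k([0, 1, 0, 0])  # relevant doc ranked second -> ~0.63
```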
Install in Claude Code with: `claude mcp add semble -s user -- uvx --from "semble[mcp]" semble`
Or check our README for other installation instructions, benchmarks, and methodology:
Semble: https://github.com/MinishLab/semble
Benchmarks: https://github.com/MinishLab/semble/tree/main/benchmarks
Model: https://huggingface.co/minishlab/potion-code-16M
Let us know if you have any feedback or questions!
esafranchik•3h ago
stephantul•3h ago
We’re interested in measuring it end to end and also optimizing, e.g. the prompt and tools, for this, but we just haven’t gotten around to it.
esafranchik•3h ago
1) How do you compare accuracy? By checking whether the answer appears in any of the returned grep/BM25/semble snippets?
2) How do you measure token use without the agent, prompt, and tools?
stephantul•3h ago
esafranchik•3h ago
e.g. agents often run `grep -m 5 "QUERY"` with different queries, instead of one big grep for all items.
stephantul•2h ago
I guess the point we’re trying to make is that you need fewer semble queries to achieve the same outcome, compared to grep+readfile calls.