Kremis is an embedded graph store (Rust, Apache 2.0) built around one constraint: every answer must trace back to a real data point.
How it works:

- You ingest your data as EAV signals (entity, attribute, value)
- Kremis stores them in an append-only graph (redb underneath)
- Every query returns one label:
  - FACT — a path exists in your graph
  - INFERENCE — derived from graph traversal
  - UNKNOWN — "not in your data, I won't guess"
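The three-label idea can be sketched in plain Python. This is a toy model, not Kremis's actual API — the signal names and the traversal logic here are illustrative assumptions:

```python
# Toy model of the FACT / INFERENCE / UNKNOWN labels.
# Signals are (entity, attribute, value) triples; values may themselves be entities.
signals = {
    ("alice", "reports_to", "bob"),
    ("bob", "reports_to", "carol"),
}

def query(entity, attribute, value):
    """Label an (entity, attribute, value) claim against the graph."""
    if (entity, attribute, value) in signals:
        return "FACT"  # the exact edge exists in the graph
    # Otherwise, try to derive the claim by following same-attribute edges.
    frontier, seen = {entity}, set()
    while frontier:
        node = frontier.pop()
        seen.add(node)
        for e, a, v in signals:
            if e == node and a == attribute:
                if v == value:
                    return "INFERENCE"  # reachable via graph traversal
                if v not in seen:
                    frontier.add(v)
    return "UNKNOWN"  # no path: refuse to guess

print(query("alice", "reports_to", "bob"))    # FACT
print(query("alice", "reports_to", "carol"))  # INFERENCE
print(query("alice", "reports_to", "dave"))   # UNKNOWN
```

The key property is that every non-UNKNOWN answer corresponds to concrete edges you ingested — there is nothing for a model to fabricate.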
The LLM never writes to the graph. Only your verified signals do. The graph can't hallucinate — it either has the path or it doesn't.
There's a runnable demo (Python stdlib, no pip): examples/demo_honesty.py. It ingests 10 signals, asks 6 questions, and shows which LLM answers are grounded and which are fabricated.
It also has an HTTP API and MCP support (Claude can query your graph directly via tool calls).
Honest limitation: ingestion requires EAV format, so you need to reshape your data first. Working on making this easier.
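To give a feel for the reshaping step, here's a minimal sketch of flattening a nested record into EAV triples. The function and the `entity/attr` child-ID convention are my own assumptions for illustration, not Kremis's ingestion format:

```python
def to_signals(entity_id, record):
    """Flatten a dict into (entity, attribute, value) signals."""
    signals = []
    for attr, value in record.items():
        if isinstance(value, dict):
            # Nested objects become their own entity, linked by attribute.
            child_id = f"{entity_id}/{attr}"
            signals.append((entity_id, attr, child_id))
            signals.extend(to_signals(child_id, value))
        elif isinstance(value, list):
            # One signal per list element.
            for item in value:
                signals.append((entity_id, attr, item))
        else:
            signals.append((entity_id, attr, value))
    return signals

print(to_signals("user:42", {
    "name": "Ada",
    "roles": ["admin", "editor"],
    "address": {"city": "London"},
}))
```

The friction is mostly in choosing stable entity IDs and deciding how deep to decompose nested structures, not in the flattening itself.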
TyKolt•1h ago
The EAV constraint came from a real frustration: I kept getting confident wrong answers from RAG pipelines and had no way to tell which parts were fabricated. I wanted something that would just say "not in your data" instead of guessing.
The biggest open question I'm wrestling with is ingestion friction — EAV is precise but you have to reshape your data first. Curious whether that feels like a dealbreaker to anyone who looks at it, or if it's acceptable given the tradeoff.