Every query returns a DAG receipt showing exactly how the result was derived (nodes traversed, filters/constraints applied, outputs). The goal is to make agent decisions auditable and reproducible instead of prompt-dependent.
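To make that concrete, here is a minimal sketch of what a DAG receipt might contain. The field names and values are illustrative assumptions, not Cruxible's actual schema:

```python
import json

# Hypothetical receipt shape: every name below is an assumption for
# illustration, not Cruxible's real output format.
receipt = {
    "query": "interactions(drug='warfarin')",
    "nodes_traversed": ["warfarin", "CYP2C9", "fluconazole"],
    "filters_applied": [{"type": "edge_confidence", "min": 0.8}],
    "outputs": [{"pair": ["warfarin", "fluconazole"], "severity": "major"}],
}

# Because the receipt is plain structured data, it can be serialized,
# stored alongside the result, and replayed later to verify the derivation.
print(json.dumps(receipt, indent=2))
```

The point is that auditing becomes a data problem (diff the receipt, re-run the traversal) rather than a prompt-archaeology problem.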
I built this because LLMs are strong at orchestration and general reasoning, but high-stakes decision logic often needs deterministic execution and explicit proof trails. Cruxible receipts are meant to be audited, replayed, and challenged.
There’s also a feedback loop: users can approve/correct/reject edges and update confidence/evidence, so domain knowledge and decision trails compound across sessions.
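A rough sketch of how that approve/correct/reject loop could look in code. The `Edge` class, `review` function, and confidence adjustments are hypothetical stand-ins, not Cruxible's API:

```python
from dataclasses import dataclass, field

@dataclass
class Edge:
    # Illustrative edge model; names are assumptions, not Cruxible's schema.
    src: str
    dst: str
    relation: str
    confidence: float
    evidence: list = field(default_factory=list)

def review(edge: Edge, verdict: str, note: str = "") -> Edge:
    """Apply a user verdict to a candidate edge (hypothetical policy)."""
    if verdict == "approve":
        edge.confidence = min(1.0, edge.confidence + 0.1)
    elif verdict == "reject":
        edge.confidence = 0.0
    elif verdict == "correct" and note:
        # Corrections attach evidence, so knowledge compounds across sessions.
        edge.evidence.append(note)
    return edge

e = Edge("warfarin", "fluconazole", "interacts_with", confidence=0.7)
review(e, "approve")
print(e.confidence)
```

The design choice worth noting: verdicts mutate the graph's confidence and evidence, so the next session's receipts reflect accumulated human review rather than starting from scratch.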
Demos included:

- Drug interactions (DDinter + CYP450)
- OFAC sanctions screening (ownership chains)
- MITRE ATT&CK threat modeling
Known limitations:

- Candidate edge generation is still basic (property matching, shared-neighbor analysis, AI suggestions)
- No application/action layer yet (e.g., transaction blocking, clinical alerts)
- `--limit` queries currently persist full receipts instead of pruning to returned rows (fix planned)
Repo: https://github.com/cruxible-ai/cruxible-core
Thank you for reading! Feedback I’d value most:

1. What limitation makes this less useful for your domain?
2. Any setup/usability issues you hit?
3. Structural criticism of the approach