Results:
- LongMemEval: 100% (500/500) - first ever
- LoCoMo: 75.32% J-Score (vs Mem0 68.44%)
- 80x cheaper per turn
- 13x faster
Built on RudraDB, my relationship-aware vector database with automatic relationship detection.
No LLM extraction calls. Pure embedding-based.
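To give a sense of what "automatic relationship detection" without LLM calls can look like, here is a minimal sketch of the general idea - this is not RudraDB's actual API, just plain embedding similarity used to link memories; the model name, threshold, and variable names are placeholders I picked for illustration:

```python
# Illustrative sketch only - not RudraDB's real interface.
# Relationships between memories are detected from embeddings alone,
# with no LLM extraction calls.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding model works

memories = [
    "User's dog is named Biscuit.",
    "Biscuit had a vet appointment last Tuesday.",
    "User prefers dark roast coffee.",
]

# Embed once; normalized vectors let dot product act as cosine similarity.
emb = model.encode(memories, normalize_embeddings=True)

# "Automatic relationship detection": link any two memories whose
# embeddings are close enough. The threshold is arbitrary here.
THRESHOLD = 0.5
edges = []
for i in range(len(memories)):
    for j in range(i + 1, len(memories)):
        score = float(np.dot(emb[i], emb[j]))
        if score >= THRESHOLD:
            edges.append((i, j, score))

print(edges)  # the two Biscuit memories get linked; the coffee memory stays separate
```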
Solo developer. Looking for feedback.
maheshvaikri99•1mo ago
Benchmark results and methodology here: https://github.com/AceIQ360/AceIQ360-Benchmark
The full system isn't open source yet - still deciding on licensing. But the benchmark repo has:
- Complete results (500/500 on LongMemEval)
- Raw logs showing each question/answer
- Comparison with baselines
Happy to answer questions about the approach. The core insight: intelligent context organization beats raw context volume. No LLM calls for memory extraction - pure embedding-based retrieval using RudraDB (https://rudradb.com).
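Roughly, "intelligent context organization" means retrieving a small seed set by similarity and then expanding through the detected relationship edges, rather than dumping a large raw-similarity context. The sketch below shows that shape - it is a simplified illustration building on the snippet above, not the shipped code, and `retrieve`, `top_k`, and `hops` are names I made up here:

```python
# Sketch of relationship-aware retrieval (illustrative only).
# Assumes `model`, `memories`, `emb`, and `edges` from the previous sketch.
import numpy as np

def retrieve(query_vec, emb, edges, top_k=2, hops=1):
    # 1) Seed: top-k memories by cosine similarity (embeddings are normalized).
    scores = emb @ query_vec
    seeds = set(np.argsort(-scores)[:top_k].tolist())

    # 2) Expand: follow relationship edges so connected memories come along.
    selected = set(seeds)
    frontier = set(seeds)
    for _ in range(hops):
        nxt = set()
        for i, j, _w in edges:
            if i in frontier and j not in selected:
                nxt.add(j)
            if j in frontier and i not in selected:
                nxt.add(i)
        selected |= nxt
        frontier = nxt

    # 3) Organize: highest-similarity memories first, related context after.
    return sorted(selected, key=lambda idx: -scores[idx])

# Example: pull context for a question.
q = model.encode("When was Biscuit's vet appointment?", normalize_embeddings=True)
context = [memories[i] for i in retrieve(q, emb, edges)]
print(context)
```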
If you want to verify independently, I can provide API access.