Synrix is a local-first memory engine for AI agents. Instead of vectors, it uses a Binary Lattice structure: fixed-size nodes with prefix-semantic addressing. Lookups are O(k), where k is the number of results, not the size of the dataset. The demo that convinced me this was worth sharing: I ran two GPT-4 sessions side by side, one with Synrix and one without. I told each my name, that I like pugs and Ferraris, and a few facts about my project, then restarted both sessions completely.
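To show why prefix addressing makes lookup cost scale with the number of matches rather than the dataset size, here's a toy sketch using a trie. This is purely illustrative: the Binary Lattice internals aren't reproduced here, and the `PrefixStore` API and key names are made up for the example.

```python
class PrefixStore:
    """Toy prefix-addressed key-value store (a trie of path segments).

    Illustrative only, not Synrix's actual Binary Lattice: it just shows
    why a prefix query costs O(prefix length + matches), independent of
    the total number of stored keys.
    """

    def __init__(self):
        self.root = {}          # nested dicts keyed by path segment
        self.VALUE = object()   # sentinel key marking a stored value

    def put(self, key, value):
        node = self.root
        for seg in key.split("/"):
            node = node.setdefault(seg, {})
        node[self.VALUE] = value

    def get(self, key):
        node = self.root
        for seg in key.split("/"):
            node = node.get(seg)
            if node is None:
                return None
        return node.get(self.VALUE)

    def query(self, prefix):
        """Yield (key, value) pairs under `prefix`: work is proportional
        to the matches found, not the size of the store."""
        node = self.root
        for seg in (prefix.split("/") if prefix else []):
            node = node.get(seg)
            if node is None:
                return
        stack = [(prefix, node)]
        while stack:
            path, node = stack.pop()
            for k, child in node.items():
                if k is self.VALUE:
                    yield path, child
                else:
                    stack.append((path + "/" + k if path else k, child))


store = PrefixStore()
store.put("user/name", "Ryan")
store.put("user/prefs/dogs", "pugs")
store.put("user/prefs/cars", "Ferraris")
store.put("session/task", "draft launch post")

print(store.get("user/name"))  # → Ryan
print(sorted(k for k, _ in store.query("user/prefs")))
# → ['user/prefs/cars', 'user/prefs/dogs']
```

The design choice this illustrates: because keys are hierarchical paths you choose up front, a query walks only the subtree under the prefix, so the rest of the dataset never gets touched.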
The side without Synrix forgot everything. The side with Synrix recalled every single detail instantly. No retraining. No embeddings. No API call. Just prefix lookup in microseconds.

Real numbers from my machine:

- Direct node lookup: 19μs
- Prefix queries over 10k nodes: 28-80μs
- Full agent context restored from cold start: under 1ms
- WAL recovery: 60 crash scenarios tested, zero data loss
- Validated on Jetson Orin Nano: 192ns hot reads
It runs entirely in-process. No server, no network, no GPU. It works on a factory floor, underground, or on a robot that just lost power.

Honest positioning: this is not a vector database replacement. For fuzzy similarity search over unstructured documents, Qdrant and Chroma are the right tools. Synrix is specifically for structured agent memory where you control the naming: user preferences, learned facts, session context, task state. You know what you're looking for.

Curious whether anyone has hit this problem in production and how you're currently solving it.
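To make the "you control the naming" point concrete, here's what the usage pattern looks like. None of these function names or keys come from the real Synrix API; the backend is a plain dict scanned with `startswith()`, standing in for the engine. The key-naming convention, not the storage, is the point.

```python
# Hypothetical agent-memory pattern: file facts under hierarchical keys
# you choose, then rebuild context by prefix after a cold start.
# (A real engine would index these keys; a linear scan is fine for a demo.)

memory = {}

def remember(key, value):
    memory[key] = value

def restore(prefix):
    """Return every fact filed under `prefix`."""
    return {k: v for k, v in memory.items() if k.startswith(prefix)}

# During a session, the agent files what it learns:
remember("user/name", "Ryan")
remember("user/prefs/dogs", "pugs")
remember("user/prefs/cars", "Ferraris")
remember("project/goal", "local-first agent memory")
remember("session/42/last_task", "draft launch post")

# After a restart, one prefix query rebuilds the prompt context:
context = restore("user/")
print(context)
# → {'user/name': 'Ryan', 'user/prefs/dogs': 'pugs',
#    'user/prefs/cars': 'Ferraris'}
```

This only works when the agent knows what it's looking for, which is exactly the structured-memory niche described above; fuzzy "find me something like this" queries still belong to vector search.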
github.com/RYJOX-Technologies/Synrix-Memory-Engine