I built a local-first AI memory engine for agents and edge systems. It uses a Binary Lattice instead of vectors. Fixed-size nodes with arithmetic addressing, so lookups are O(1) by ID and O(k) by prefix where k is result count not corpus size. Scales to 50M+ nodes beyond RAM via mmap with no performance cliff.
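To make "arithmetic addressing" concrete, here's a toy Python sketch of the idea. The node size, header layout, and class names are made up for illustration; the real engine works over an mmap'd file, but the addressing math is the same:

```python
import struct

NODE_SIZE = 64  # fixed node size: the address is pure arithmetic, no index needed

class NodeStore:
    """Illustrative fixed-size node store. A real engine would mmap a file."""
    def __init__(self, capacity: int):
        self.buf = bytearray(capacity * NODE_SIZE)

    def write(self, node_id: int, payload: bytes) -> None:
        assert len(payload) <= NODE_SIZE - 2
        off = node_id * NODE_SIZE            # O(1): id -> byte offset
        self.buf[off:off + 2] = struct.pack("<H", len(payload))
        self.buf[off + 2:off + 2 + len(payload)] = payload

    def read(self, node_id: int) -> bytes:
        off = node_id * NODE_SIZE            # no tree walk, no hash probe
        (length,) = struct.unpack_from("<H", self.buf, off)
        return bytes(self.buf[off + 2:off + 2 + length])

store = NodeStore(capacity=1024)
store.write(7, b"user/prefs/theme=dark")
assert store.read(7) == b"user/prefs/theme=dark"
```

Because every node is the same size, a lookup by ID is one multiply and one read, which is also why spilling past RAM onto mmap has no cliff: cold nodes are just page faults, not index rebuilds.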
Real numbers from my machine. Direct node lookup: 19us. Prefix queries over 10k nodes: 28-80us with zero embedding model. 280x faster than local vector DB at 10k nodes. Full agent context rebuilt from cold start in under 1ms. ACID durable via WAL tested across 60 crash scenarios with zero data loss. Validated on Jetson Orin Nano at 192ns hot reads.
The core idea is that most agent memory is structured not fuzzy. User preferences, learned facts, task stores, conversation history. You know what you're looking for. Prefix-semantic naming replaces vector similarity entirely for these workloads. No embedding model. No GPU. No cloud call.
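Here's a minimal sketch of what prefix-semantic naming buys you, assuming sorted keys (the key names are invented for the example; the actual lattice structure differs, but the access pattern is the point):

```python
import bisect

# Keys encode meaning, so a prefix scan replaces similarity search.
memory = {
    "user/prefs/theme": "dark",
    "user/prefs/units": "metric",
    "task/active/042": "restock shelf B",
    "fact/door/lab-3": "sticks when cold",
}
keys = sorted(memory)

def prefix_query(prefix: str):
    # O(log n) to locate the range start, then O(k) over the k matches
    i = bisect.bisect_left(keys, prefix)
    out = []
    while i < len(keys) and keys[i].startswith(prefix):
        out.append((keys[i], memory[keys[i]]))
        i += 1
    return out

assert prefix_query("user/prefs/") == [
    ("user/prefs/theme", "dark"),
    ("user/prefs/units", "metric"),
]
```

The query cost tracks the result count k, not the corpus size, and there's no embedding step anywhere in the path.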
The robotics use case is what I find most interesting. A robot learns its environment during operation. Which door sticks, which patient has a latex allergy, which corridor is slippery. Power cuts out. Robot reboots cold. Every memory restores in milliseconds via WAL recovery. No internet required. Works in a Faraday cage, underground, on a factory floor.
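For anyone unfamiliar with WAL recovery, here's the shape of it in a toy Python sketch (JSON lines instead of the real binary format, and the torn-record handling is simplified):

```python
import json
import os
import tempfile

class WAL:
    """Minimal write-ahead log: append + fsync before ack, replay on boot."""
    def __init__(self, path: str):
        self.path = path
        self.f = open(path, "a")

    def append(self, key: str, value: str) -> None:
        self.f.write(json.dumps({"k": key, "v": value}) + "\n")
        self.f.flush()
        os.fsync(self.f.fileno())   # durable on disk before we return

    @staticmethod
    def recover(path: str) -> dict:
        state = {}
        if os.path.exists(path):
            with open(path) as f:
                for line in f:
                    try:
                        rec = json.loads(line)
                    except json.JSONDecodeError:
                        break       # torn tail record from a mid-write crash: discard
                    state[rec["k"]] = rec["v"]
        return state

path = os.path.join(tempfile.mkdtemp(), "memory.wal")
wal = WAL(path)
wal.append("fact/door/lab-3", "sticks when cold")
wal.append("fact/patient/7", "latex allergy")
# simulate the power cut: forget the in-memory state, rebuild from disk
state = WAL.recover(path)
assert state["fact/patient/7"] == "latex allergy"
```

The crash-scenario testing is mostly about that torn-tail case: a record that was half-written when power died must be discarded cleanly rather than corrupt everything before it.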
It is not a vector DB replacement. For fuzzy similarity search over unstructured documents Qdrant and Chroma are the right tools. Synrix is the memory layer for structured agent workloads where you control the naming.
Curious whether anyone has hit the structured vs fuzzy memory problem in production and how you solved it.
sophbotia•1h ago
This is a pretty awesome project. How did you manage to make it so fast, and how easy is it to integrate?
JosephjackJR•1h ago
Super easy. We're currently trialing it with people who have this problem; check out our website or GitHub, and we'll happily let you try anything you want with it. What we want most right now is feedback.