This makes it hard to enforce invariants, audit facts, or maintain a clear separation between reasoning and ground truth.
I built a minimal proof-of-concept showing a different approach: a deterministic symbolic memory layer accessible via MCP.
Instead of storing “memory inside the model”, knowledge is resolved just in time from an explicit symbolic layer.
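To make the idea concrete, here is a minimal sketch of what a deterministic symbolic store could look like. All names and the triple-based API are illustrative assumptions for this sketch, not the repo's actual implementation or the MCP wire protocol:

```python
# Hypothetical sketch: a deterministic symbolic fact store.
# Facts are explicit (subject, predicate, object) triples; queries resolve
# only against asserted facts, never against model inference, and results
# are sorted so repeated queries are reproducible.

class SymbolicStore:
    def __init__(self):
        self._facts = set()

    def assert_fact(self, subject, predicate, obj):
        """Add a fact as explicit ground truth."""
        self._facts.add((subject, predicate, obj))

    def retract(self, subject, predicate, obj):
        """Remove a fact; the store never 'forgets' implicitly."""
        self._facts.discard((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Resolve matching facts just-in-time; None acts as a wildcard."""
        return sorted(
            f for f in self._facts
            if (subject is None or f[0] == subject)
            and (predicate is None or f[1] == predicate)
            and (obj is None or f[2] == obj)
        )

store = SymbolicStore()
store.assert_fact("service:auth", "owner", "team-identity")
store.assert_fact("service:auth", "language", "go")
print(store.query(subject="service:auth"))
```

An MCP server would then expose operations like these as tools, so the model calls the store at answer time rather than relying on weights or a context-window summary; because the store is explicit, invariants can be enforced at assert time and every fact can be audited or retracted.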
The goal is not to replace RAG or assistant memory, but to provide a missing infrastructure layer: a controllable knowledge backbone for AI systems.
This repo demonstrates the minimal viable form of that idea.