Engram takes the opposite approach: store memories with rich metadata and invest intelligence at read time, when you actually know the query. TypeScript, SQLite, zero infrastructure.
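The idea in miniature, as a self-contained TypeScript sketch: writes are cheap appends with metadata, and all filtering happens at read time when the query is known. The types and function names here are illustrative, not Engram's actual API.

```typescript
// Each memory is stored raw, with metadata -- no summarization at write time.
type Memory = {
  text: string;
  speaker: string;
  timestamp: number;
  tags: string[];
};

const store: Memory[] = [];

// Write path: just append. No LLM calls, no precomputation.
function remember(m: Memory): void {
  store.push(m);
}

// Read path: the query is known here, so filtering can be precise.
function recall(query: { tags?: string[]; after?: number }): Memory[] {
  return store.filter(
    (m) =>
      (query.after === undefined || m.timestamp >= query.after) &&
      (query.tags === undefined || query.tags.every((t) => m.tags.includes(t)))
  );
}

remember({ text: "Prefers dark mode", speaker: "alice", timestamp: 1, tags: ["prefs"] });
remember({ text: "Lives in Lisbon", speaker: "alice", timestamp: 2, tags: ["profile"] });

console.log(recall({ tags: ["prefs"] }).map((m) => m.text));
```

The real implementation backs this with SQLite rather than an in-memory array, but the read-time/write-time split is the same.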
Ran the LOCOMO benchmark (the same one Mem0 used to claim SOTA):

- Engram: 80.0% (10 conversations, 1,540 questions)
- Mem0's published result: 66.9%
- 93.6% fewer tokens than full-context approaches
Works as an MCP server, REST API, or embedded SDK. Supports Gemini, OpenAI, Ollama, Groq, and any OpenAI-compatible provider.
npm install -g engram-sdk && engram init