I built Engram because every AI agent I worked with had amnesia. Between sessions, everything was gone. I needed something that could store what agents learn,
find it by meaning later, and do it without requiring a vector database, an OpenAI key, or any external service.
What it does: Store memories, search them semantically, recall relevant context automatically — backed by a single SQLite file.
What makes it different from Mem0:
- Zero external dependencies — embeddings run locally (MiniLM-L6, 384-dim). No API keys needed.
- Auto-linking — memories form a knowledge graph automatically
- Versioning, auto-deduplication, auto-forget
- Four-layer recall: static facts + semantic matches + high-importance + recent activity
- WebGL graph visualization built in
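To give a feel for the four-layer recall, here's a simplified sketch of how the layers could combine into one ranking. The type, field names, and weights are illustrative only, not Engram's actual internals:

```typescript
// Sketch: each recall layer contributes to a combined score, then
// candidates are ranked. Weights here are made up for illustration.
type Memory = {
  id: number;
  text: string;
  importance: number;    // 0..1 (layer 3: high-importance)
  lastAccess: number;    // unix ms (layer 4: recent activity)
  isStaticFact: boolean; // layer 1: static facts
  similarity?: number;   // cosine vs. query, 0..1 (layer 2: semantic)
};

function recall(memories: Memory[], now: number, limit = 10): Memory[] {
  const dayMs = 24 * 60 * 60 * 1000;
  const score = (m: Memory): number => {
    const staticBoost = m.isStaticFact ? 1 : 0;
    const semantic = m.similarity ?? 0;
    // Recency decays exponentially with a ~1-week half-life-ish constant.
    const recency = Math.exp(-(now - m.lastAccess) / (7 * dayMs));
    return 2 * staticBoost + 1.5 * semantic + m.importance + 0.5 * recency;
  };
  return [...memories].sort((a, b) => score(b) - score(a)).slice(0, limit);
}
```

The point of separate layers is that a static fact ("user's name is X") should surface even when it's semantically distant from the current query, while recency keeps the agent's working context fresh.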
Stack: Bun, SQLite (FTS5), transformers.js, single TypeScript file (2,300 lines).
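Since people usually ask how search works without a vector DB: transformers.js produces the 384-dim MiniLM vectors locally, and ranking is plain cosine similarity over the stored embeddings. A minimal sketch (the helper names are mine, not the real code):

```typescript
// Cosine similarity between two embedding vectors. If vectors are
// normalized at inference time, this reduces to a dot product.
function cosine(a: Float32Array, b: Float32Array): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank stored memories by similarity to a query embedding.
function topK(
  query: Float32Array,
  stored: { id: number; vec: Float32Array }[],
  k = 5,
): { id: number; score: number }[] {
  return stored
    .map((m) => ({ id: m.id, score: cosine(query, m.vec) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}
```

At a few thousand memories, a brute-force scan like this is fast enough that an ANN index isn't worth the dependency; SQLite just holds the vectors as blobs alongside the FTS5 index for keyword search.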
Deploy: docker compose up -d
Live demo: https://demo.engram.lol/gui (password: demo)
Happy to answer questions about the architecture.