I got tired of repeating myself to every AI tool I use. My dietary restrictions, my tech stack, my family's names — every new agent starts from zero. So I built Epitome, an open source personal database that any AI agent can read from and write to via MCP.
It's five layers on top of Postgres: structured tables, a portable identity profile, semantic vector search (pgvector), a knowledge graph that auto-extracts entities and relationships, and a confidence-scored memory quality system that lets memories decay or get reinforced over time. Each user gets their own Postgres schema — not RLS, actual schema isolation.
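To make the schema-per-user point concrete, here's a rough sketch of what provisioning and querying could look like with node-postgres. This is not Epitome's actual code; the table shape, naming scheme, and helper names are assumptions made purely for illustration.

    // Sketch of the schema-per-user pattern (not Epitome's actual code).
    // Uses node-postgres; all identifiers below are made up for illustration.
    import { Pool } from "pg";

    const pool = new Pool({ connectionString: process.env.DATABASE_URL });

    // Each user gets a dedicated schema instead of shared tables behind RLS.
    async function provisionUserSchema(userId: string) {
      const schema = `user_${userId.replace(/-/g, "_")}`; // hypothetical naming scheme
      await pool.query(`CREATE SCHEMA IF NOT EXISTS "${schema}"`);
      await pool.query(
        `CREATE TABLE IF NOT EXISTS "${schema}".memories (
           id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
           content text NOT NULL,
           confidence real NOT NULL DEFAULT 0.5,
           created_at timestamptz NOT NULL DEFAULT now()
         )`
      );
      return schema;
    }

    // Queries for a user run with search_path pinned to their schema, so
    // there is no cross-tenant table to leak from in the first place.
    async function queryAsUser(schema: string, sql: string, params: unknown[] = []) {
      const client = await pool.connect();
      try {
        await client.query("BEGIN");
        await client.query(`SET LOCAL search_path TO "${schema}"`); // reverts at COMMIT
        const result = await client.query(sql, params);
        await client.query("COMMIT");
        return result;
      } catch (err) {
        await client.query("ROLLBACK");
        throw err;
      } finally {
        client.release();
      }
    }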
Agents connect over Streamable HTTP with OAuth and granular consent controls. You decide which agent sees what. Append-only audit log for everything.
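And here's a minimal client-side sketch of the connection flow using the official MCP TypeScript SDK's Streamable HTTP transport. The endpoint URL, token handling, and tool name are placeholders, not Epitome's actual API surface; a real agent would typically go through the SDK's OAuth flow rather than pass a raw bearer token.

    // Minimal sketch: an agent connecting to an MCP server over Streamable HTTP.
    // Endpoint, token, and tool name are placeholders for illustration only.
    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

    const transport = new StreamableHTTPClientTransport(
      new URL("https://epitome.example.com/mcp"), // hypothetical endpoint
      {
        requestInit: {
          headers: { Authorization: `Bearer ${process.env.EPITOME_TOKEN}` }, // OAuth access token
        },
      }
    );

    const client = new Client({ name: "my-agent", version: "1.0.0" });
    await client.connect(transport);

    // Discover whichever tools this agent has been granted consent to use.
    const { tools } = await client.listTools();
    console.log(tools.map((t) => t.name));

    // Hypothetical tool call; real tool names depend on the server.
    const result = await client.callTool({
      name: "search_memories",
      arguments: { query: "dietary restrictions" },
    });
    console.log(result);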
MIT licensed, self-hostable with docker compose up, or use the hosted version. Built with Hono, React, D3.js for graph viz.
GitHub: https://github.com/gunning4it/epitome
Would love feedback on the architecture and what's missing. Happy to go deep on any of the technical decisions, and contributions are welcome.