- Semantic memory with vector embeddings — recall by meaning, not keywords
- Confidence-weighted observations that strengthen or decay based on evidence
- Automatic lifecycle management — high-signal stays hot, noise fades to cold storage
- Append-only architecture — memories are never overwritten, only superseded with lineage
- Knowledge graphs linking observations, patterns, concepts, and documents
- Multi-tenant with org isolation, roles, and shared memory stores
- OAuth 2.0, audit logs, rate limiting — production infrastructure, not a toy
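To make "recall by meaning, not keywords" concrete, here is a minimal sketch of embedding-based recall. It is not MemoryGate's actual API: the hand-made 3-dimensional vectors stand in for real model embeddings, and the in-memory list stands in for pgvector.

```python
import math

def cosine(a, b):
    # Cosine similarity: recall ranks by vector direction (meaning),
    # not by keyword overlap.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy memory store: (text, embedding) pairs. Real embeddings would come
# from a model and live in pgvector; these vectors are illustrative only.
memories = [
    ("user prefers concise answers", [0.9, 0.1, 0.0]),
    ("deploy target is Railway",     [0.1, 0.9, 0.1]),
    ("user dislikes long preambles", [0.8, 0.2, 0.1]),
]

def recall(query_vec, k=2):
    # Return the k memories closest in meaning to the query vector.
    ranked = sorted(memories, key=lambda m: cosine(query_vec, m[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# A query vector near the "style preference" region surfaces both
# preference memories, even though they share no keywords with each other.
print(recall([1.0, 0.0, 0.0]))
```

The same query done with keyword search would miss "user dislikes long preambles" unless the words overlapped; similarity in embedding space is what links the two.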
What it's not:
Not a RAG pipeline. MemoryGate stores what the agent learns from interaction, not document chunks.
Not prompt injection. Memory lives at the infrastructure layer, not stuffed into system prompts.
Not tied to any model or provider. Switch from Claude to ChatGPT to a local model — memory persists.
Stack: Python/FastAPI, PostgreSQL + pgvector, Redis, deployed on Railway. MCP-native integration — your agent gets 33 memory tools on connection.

The real pitch: Platforms die. Models get deprecated. Context windows roll over. Your AI's memory shouldn't be hostage to your AI's provider.

Open source (Apache 2.0), self-hostable, with a hosted SaaS option if you don't want to run infrastructure.
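The append-only model described in the feature list (supersede, never overwrite) can be sketched in a few lines of Python. The class, method, and field names here are illustrative assumptions, not MemoryGate's actual schema.

```python
import itertools

class MemoryLog:
    """Append-only sketch: records are never mutated, only superseded,
    so the full lineage of an observation stays queryable."""

    def __init__(self):
        self._ids = itertools.count(1)
        self.records = []  # append-only; nothing is ever deleted or edited

    def observe(self, text, confidence=0.5, supersedes=None):
        rec = {"id": next(self._ids), "text": text,
               "confidence": confidence, "supersedes": supersedes}
        self.records.append(rec)
        return rec["id"]

    def current(self):
        # A record is "current" unless a later record points back at it.
        superseded = {r["supersedes"] for r in self.records if r["supersedes"]}
        return [r for r in self.records if r["id"] not in superseded]

log = MemoryLog()
first = log.observe("user timezone is UTC", confidence=0.6)
log.observe("user timezone is US/Eastern", confidence=0.8, supersedes=first)

print([r["text"] for r in log.current()])  # only the superseding record is current
print(len(log.records))                    # but both records remain in the log
```

The point of the design: corrections don't destroy history, so an agent (or an audit log) can always answer "what did we believe before, and why did it change?"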
GitHub: https://github.com/PStryder/MemoryGate SaaS: https://memorygate.ai Docs: https://memorygate.ai/docs/
I'm a solo founder — I built this after a decade in enterprise solutions engineering. Happy to answer questions about the architecture, the MCP integration, or why I think persistent memory is the missing infrastructure layer for AI agents.