So I built this local memory layer that's shared across agents. Instead of a flat file, it builds a structured knowledge graph of "memory notes" inspired by the paper "A-MEM: Agentic Memory for LLM Agents" (https://arxiv.org/abs/2502.12110). The graph continuously evolves as more memories are committed, so older context stays organized rather than piling up.
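To give a feel for the idea, here's a minimal sketch of an A-MEM-style note graph. Everything here (the `MemoryNote` fields, the keyword-overlap linking rule) is a simplified illustration, not the actual implementation — the real thing does richer link construction and evolution.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryNote:
    note_id: int
    content: str
    keywords: set[str]
    links: set[int] = field(default_factory=set)

class MemoryGraph:
    def __init__(self) -> None:
        self.notes: dict[int, MemoryNote] = {}
        self._next_id = 0

    def commit(self, content: str, keywords: set[str]) -> MemoryNote:
        """Add a note and link it to existing notes sharing a keyword,
        so the graph evolves as memories accumulate."""
        note = MemoryNote(self._next_id, content, keywords)
        self._next_id += 1
        for other in self.notes.values():
            if keywords & other.keywords:
                note.links.add(other.note_id)
                other.links.add(note.note_id)  # links are bidirectional
        self.notes[note.note_id] = note
        return note
```

Committing two notes that share a keyword (say `"python"`) automatically links them, which is the "evolving" part: structure emerges from commits rather than being declared up front.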
It captures conversation turns and exposes an MCP server, so any supported agent can query for memories relevant to its current context. In practice that means less context rot and better long-term recall across all your agents. Right now it supports Claude Code, Codex, Gemini CLI, OpenCode, and OpenClaw.
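The retrieval side, roughly: the agent's current context is turned into a query and stored notes are ranked by relevance. This sketch uses plain keyword overlap purely for illustration — the function name and note shape are made up, and a real service would use embeddings plus the graph links:

```python
def recall(notes: list[dict], context_keywords: set[str], k: int = 3) -> list[dict]:
    """Return the top-k stored notes ranked by keyword overlap
    with the current context (illustrative scoring only)."""
    scored = [(len(context_keywords & set(n["keywords"])), n) for n in notes]
    scored = [(s, n) for s, n in scored if s > 0]  # drop irrelevant notes
    scored.sort(key=lambda sn: sn[0], reverse=True)
    return [n for _, n in scored[:k]]
```

Over MCP this would be surfaced as a tool the agent calls before answering, so relevant memories land in context instead of the whole history.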
Would love to hear any feedback.