I think that's great. But I also think there's room for improvement in how we think about context. Most of the time, documentation is hidden in some architectural design review in a tool like Notion, Confluence, etc. Those are great for human retrieval, but even then it is often forgotten by the time we implement the functionality in code. Another key issue is that as the code evolves, our documentation goes stale.
We need a tool that follows the agentic approach we are starting to see: ever-evolving documentation, or memories, that our agents can use without creating another needle-in-a-haystack problem.
For the past few weeks I have been building an open-source MCP server for creating "notes" that are anchored to specific files, which AI agents can retrieve, create, summarize, search, and ultimately clean up.
This has solved a lot of issues for me.
You get the real context for why AI agents did certain things, plus gotchas that came up along the way and rarely get documented or commented on a regular basis.
It just works out of the box without a crazy amount of lift up front.
It improves as your code evolves.
It is completely local, living inside your GitHub repository. No complicated vector databases. Just file anchors on files.
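To give a rough picture of what I mean by a file-anchored note, here is a hypothetical sketch. The field names and layout are illustrative only, not the actual schema the server uses (see the repo for that):

```typescript
// Hypothetical shape of a file-anchored note -- illustrative only,
// not the real a24z-Memory format.
interface AnchoredNote {
  anchors: string[];   // repo-relative paths the note is attached to
  note: string;        // the decision, context, or gotcha worth remembering
  tags?: string[];
  createdAt: string;   // ISO timestamp
}

const example: AnchoredNote = {
  anchors: ["src/auth/session.ts"],
  note: "Token refresh must run before the session middleware; putting it after caused intermittent 401s during the refresh window.",
  tags: ["auth", "gotcha"],
  createdAt: "2025-01-15T18:30:00Z",
};
```

Because the anchors are just repo-relative paths, the notes travel with the code in version control, and the agent can re-read or clean them up when those files change.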
I would love to hear your thoughts, whether I am approaching the problem completely wrong or you have advice on how to improve the system.
You can find the project at https://github.com/a24z-ai/a24z-Memory.
gitgallery•1h ago
brandonin•38m ago
The second step is to add a rule that tells it to use the a24z-memory MCP server. After that it should automatically do everything for you. I am having some trouble where it doesn't always call the tool, but I get around that by adding "use a24z memory MCP" to my prompt.
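To make that rule concrete, here is the kind of instruction I mean. The exact wording and where it lives (.cursorrules, CLAUDE.md, or your client's equivalent rules file) are up to you; this is just an illustrative example:

```
When working in this repository, use the a24z-memory MCP server:
- before modifying a file, search for existing notes anchored to it
- after making a non-obvious change, create a note explaining the decision and any gotchas
```

If the agent still skips the tool, appending "use a24z memory MCP" to the prompt, as mentioned above, nudges it to call it.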