Install the long-term collaboration memory system by cloning https://github.com/visionscaper/collabmem to a temporary location and following the instructions there.
To collaborate with AI over weeks, months, or even years, there needs to be a shared conceptual understanding of:
- History (episodic memory): what has been done, why, and what was decided and learned along the way.
- Reality (world model): what the project is about and in what context, the current state of the work, how it should be done, and what the guidelines, preferences, and constraints are.
Without this conceptual knowledge and context built up over time, the AI can't work effectively; it can't respond well or make good choices when writing code, producing a design, or doing anything non-trivial.
I don't think the future is just about making AI agents run autonomously for ever-longer stretches. Long-term human-AI collaboration is pivotal: it's how the AI builds up the project history and world model it needs to be effective, and it's how we humans keep track of what's being done. AI will certainly work more autonomously over time, but even then, we need ways for humans to see what was done and why.
That's what collabmem enables. It builds up episodic memory and a world model over time. A compact index of every memory entry is always loaded in the AI's context window, giving the model a global awareness of everything in memory; it can associate across entries and knows where to look when it needs the details.
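To make the idea concrete, here is a minimal sketch of how such an always-loaded index could be built. The directory layout, file extension, and first-line-summary convention are my illustrative assumptions, not collabmem's actual format:

```python
from pathlib import Path

def build_index(memory_dir: str) -> str:
    """Build a compact index: one summary line per memory entry.

    Assumes (hypothetically) that each plain-text episodic note
    starts with a one-line summary on its first line.
    """
    lines = []
    for note in sorted(Path(memory_dir).glob("episodes/*.txt")):
        with open(note) as f:
            summary = f.readline().strip()
        lines.append(f"{note.name}: {summary}")
    return "\n".join(lines)

# The resulting index string is small enough to keep permanently in
# the AI's context window; the full notes are only read on demand.
```

The point of the design: the index gives global awareness cheaply, while the detailed notes stay on disk until they're needed.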
The system uses three sentinel tokens — readmem, updatemem, maintainmem — as the primary way to interact with memory. Drop one into your message and the AI reads memory, proposes updates, or runs maintenance. You approve; nothing is written silently.
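As an illustration of the sentinel-token flow (in collabmem the AI itself recognises the tokens in your message; there is no separate dispatcher program, and the action descriptions below are my paraphrase):

```python
# Hypothetical mapping from sentinel token to the action it triggers.
SENTINELS = {
    "readmem": "read relevant memory before answering",
    "updatemem": "propose memory updates for approval",
    "maintainmem": "run maintenance, with changes discussed first",
}

def detect_sentinels(message: str) -> list[str]:
    """Return the memory actions a user message triggers.

    Naive substring matching, purely for illustration.
    """
    return [action for token, action in SENTINELS.items()
            if token in message]
```

For example, a message like "updatemem: we switched the backend to Postgres" would trigger the update flow, where the AI proposes changes and you approve them before anything is written.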
Underneath, the design is deliberately simple. The memory is plain-text files you can inspect, git-track, and share; so teams, or even entire organizations, can build up shared knowledge with AI through distributed collaboration. No databases, no vector stores, no infrastructure; just files and a methodology the AI follows.
It works with any AI assistant that can read and write files, though so far it's optimised for Claude Code.
I'm looking for developers, researchers, or anyone running long-term projects with AI to try it and share feedback. Thank you!
Written in collaboration with Claude Opus 4.6 (1M)
visionscaper•2h ago
Two mechanisms keep it sustainable; neither deletes anything, and both are discussed with you before being applied:
- Upward consolidation — when the episodic index grows large, mature, stable knowledge from old episodes is extracted into the world model. Consolidated index entries move to a searchable archive; the original notes stay put. The active index stays focused on recent work while the world model absorbs what's been learned.
- Downward compaction — when a world model file approaches its size cap, it's rewritten to stay compact. Removed knowledge is preserved in an episodic note so it remains discoverable.
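A rough sketch of the two triggers, to show the shape of the idea. The thresholds, file layout, and function name are hypothetical; the real mechanism is a methodology the AI follows, not a script:

```python
from pathlib import Path

# Hypothetical thresholds; collabmem's actual values may differ.
INDEX_ENTRY_CAP = 50    # max active index entries before consolidation
WORLD_FILE_CAP = 8_000  # bytes per world-model file before compaction

def maintenance_proposals(index_entries: list[str], world_dir: str) -> list[str]:
    """Check both maintenance triggers and return proposals.

    Proposals are surfaced for human approval; nothing is applied
    silently and nothing is deleted.
    """
    proposals = []
    if len(index_entries) > INDEX_ENTRY_CAP:
        proposals.append(
            "upward consolidation: extract stable knowledge into the "
            "world model; move consolidated index entries to the archive"
        )
    for f in Path(world_dir).glob("*.txt"):
        if f.stat().st_size > WORLD_FILE_CAP:
            proposals.append(
                f"downward compaction: rewrite {f.name} to stay compact, "
                "preserving removed knowledge in an episodic note"
            )
    return proposals
```

Both paths only ever move knowledge between layers; the approval step is what keeps the human in the loop.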
Caveat: these two mechanisms are designed but not yet tested; this is one of my high-priority todos. Feedback especially welcome here.
Happy to answer questions — looking forward to feedback!