My background is in psychology, specifically personality psychology and how personality builds on top of memory. Letting an agent gather memories allows it to show features that look like a "personality". The basic architecture lets the memory system form more complex memories out of the initial facts it stores. To support this consolidation, memories decay and are eventually forgotten, while memories that are retrieved often grow stronger. Alongside this sits a structure that builds associations, used both in retrieval and in spotting memories that are candidates for consolidation.
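To make the decay/strengthening idea concrete, here is a minimal TypeScript sketch. It is not the plugin's actual API; the names, the one-week half-life, and the retrieval boost are all assumptions for illustration.

```typescript
// Hypothetical sketch: each memory has a strength that decays over time
// and gets a boost whenever it is retrieved. Constants are assumptions.

interface Memory {
  content: string;
  strength: number;      // drifts toward 0 unless reinforced
  lastAccessMs: number;  // timestamp of last retrieval
}

const HALF_LIFE_MS = 7 * 24 * 60 * 60 * 1000; // assumed one-week half-life
const RETRIEVAL_BOOST = 0.2;                  // assumed per-recall reinforcement

function decayedStrength(m: Memory, nowMs: number): number {
  // Exponential decay: strength halves every HALF_LIFE_MS of disuse.
  const elapsed = nowMs - m.lastAccessMs;
  return m.strength * Math.pow(0.5, elapsed / HALF_LIFE_MS);
}

function retrieve(m: Memory, nowMs: number): Memory {
  // Decay to "now", then add a boost, capped at 1.
  const s = Math.min(1, decayedStrength(m, nowMs) + RETRIEVAL_BOOST);
  return { ...m, strength: s, lastAccessMs: nowMs };
}

function shouldForget(m: Memory, nowMs: number, threshold = 0.05): boolean {
  // Candidates for forgetting fall below a small threshold.
  return decayedStrength(m, nowMs) < threshold;
}
```

Any monotone decay curve would do here; the point is only that retrieval frequency, not write time alone, decides what survives.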
This is what consolidation here looks like:
- *Weighted bidirectional associations* between memory pairs, built from co-retrieval. Recall expands one hop along strong edges, so related memories surface even when they don't directly match the query.
- *Reconsolidation ("coloring")*: when a newer memory contradicts an older one, the older memory is rewritten in light of the new context.
- *Per-recall Hebbian strengthening* based on which injected memory the agent actually leaned on in the reply.
- *Content-addressed identity* (SHA-256), so the same fact from two sources collapses to one object at write time.
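Two of these mechanisms can be sketched in a few lines of TypeScript: content-addressed identity and one-hop expansion over weighted associations. This is my own illustrative sketch, not the plugin's code; the class and method names, the weight bump, and the expansion threshold are assumptions.

```typescript
import { createHash } from "node:crypto";

// Content-addressed identity: the same (normalized) fact from two sources
// hashes to the same id, so it collapses to one object at write time.
function memoryId(content: string): string {
  return createHash("sha256").update(content.trim().toLowerCase()).digest("hex");
}

class AssociationGraph {
  // id -> (neighbor id -> edge weight)
  private edges = new Map<string, Map<string, number>>();

  // Hebbian-style strengthening: co-retrieved pairs get a bidirectional bump.
  coRetrieved(a: string, b: string, bump = 0.1): void {
    this.addWeight(a, b, bump);
    this.addWeight(b, a, bump);
  }

  private addWeight(from: string, to: string, w: number): void {
    const m = this.edges.get(from) ?? new Map<string, number>();
    m.set(to, (m.get(to) ?? 0) + w);
    this.edges.set(from, m);
  }

  // Expand directly matched ids one hop along edges above a weight threshold,
  // so related memories surface even when they don't match the query.
  expand(hits: string[], minWeight = 0.3): Set<string> {
    const out = new Set(hits);
    for (const id of hits) {
      for (const [nbr, w] of this.edges.get(id) ?? []) {
        if (w >= minWeight) out.add(nbr);
      }
    }
    return out;
  }
}
```

The bidirectional bump is what makes recall associative in both directions: retrieving A alongside B later helps B surface from A and vice versa.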
Today it ships as an OpenClaw plugin (v0.5.5, MIT). The core is agent-agnostic by design and can be extended to other agents.
Repo: https://github.com/jarimustonen/formative-memory
Docs: docs/architecture.md and docs/how-memory-works.md in-repo
npm: formative-memory
Honest scope: this is the first time I'm posting this publicly. We've been running it across four bots for a few weeks. I'd love feedback.