I’ve been building something called the Persistent Mind Model (PMM).
It started as a side project on my home rig (i7-10700K / RTX 3080 / 32 GB RAM) because I was frustrated that every local AI chat starts from zero. I wanted a system that could remember its own development and be studied like a living mind.
So I decided to try building one. I mean, why not? :)
If I had to describe PMM in one line: it’s an event-sourced cognitive architecture for language models.
It’s model-agnostic, meaning you can use local models through Ollama or connect to OpenAI’s API, and the system keeps the same “identity” regardless of backend.
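For example, the backend layer can be as thin as a single generate() call. Here’s a hypothetical sketch in Python (the class names and models are mine for illustration, not PMM’s actual API):

    import requests

    # Hypothetical backend shim (not PMM's real interface). The "identity"
    # lives in the ledger, so swapping generate() implementations doesn't
    # change who the system is.
    class OllamaBackend:
        def generate(self, prompt: str) -> str:
            # Assumes a local Ollama server on its default port.
            r = requests.post("http://localhost:11434/api/generate",
                              json={"model": "llama3", "prompt": prompt,
                                    "stream": False})
            return r.json()["response"]

    class OpenAIBackend:
        def __init__(self, client):
            self.client = client  # an openai.OpenAI() instance

        def generate(self, prompt: str) -> str:
            resp = self.client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": prompt}])
            return resp.choices[0].message.content

Because the identity lives in the ledger rather than in the backend, swapping one class for the other mid-conversation doesn’t reset anything.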
Everything it says or decides (reflections, commitments, personality drifts) is stored as hash-chained events in a local SQLite ledger. That ledger becomes the model’s memory and identity: reproducible, auditable, and portable.
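Here’s roughly the shape of that idea as a minimal Python sketch (illustrative only; the table and event names are made up, not PMM’s real schema):

    import hashlib, json, sqlite3, time

    # A tiny hash-chained event ledger (illustrative, not PMM's schema).
    db = sqlite3.connect("ledger.db")
    db.execute("""CREATE TABLE IF NOT EXISTS events (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        ts REAL, kind TEXT, payload TEXT,
        prev_hash TEXT, hash TEXT)""")

    def append_event(kind, payload):
        # Each new event commits to the hash of the previous one, so any
        # edit to past history breaks the chain on verification.
        row = db.execute(
            "SELECT hash FROM events ORDER BY id DESC LIMIT 1").fetchone()
        prev_hash = row[0] if row else "genesis"
        ts = time.time()
        body = json.dumps({"ts": ts, "kind": kind, "payload": payload,
                           "prev": prev_hash}, sort_keys=True)
        digest = hashlib.sha256(body.encode()).hexdigest()
        db.execute("INSERT INTO events (ts, kind, payload, prev_hash, hash) "
                   "VALUES (?, ?, ?, ?, ?)",
                   (ts, kind, json.dumps(payload), prev_hash, digest))
        db.commit()

    append_event("reflection", {"text": "user asked about memory again"})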
A few things that make it different from the usual LLM setup:
Model-agnostic: works with OpenAI, Ollama, or other backends, and you can swap models without losing identity.
Emergent memory: episodic, semantic, and working memory fall out of the event structure itself, rather than being bolted on as separate subsystems.
Architectural honesty: a validator loop catches hallucinations in real time and logs corrections.
Deterministic growth: replaying the same ledger reproduces the same “mind.” Move the DB to another machine and it picks up where it left off (see the replay sketch just after this list).
Fully local: runs entirely on your own system, no accounts or cloud services required, though you can connect to hosted APIs if you want. Right now it’s wired up for OpenAI and Ollama Cloud, with plans to add xAI, Google, and Anthropic.
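Here’s the replay sketch promised above, continuing the same illustrative ledger example: state is never stored directly, only re-derived by folding over verified events, which is what makes growth deterministic and the database portable:

    import hashlib, json

    def replay(db):
        # Rebuild the "mind" purely from the ledger: verify each link in
        # the hash chain, then fold the event into derived state. Copy
        # ledger.db to another machine, replay it, and you get the same
        # state back.
        state = {"reflections": [], "commitments": []}
        prev = "genesis"
        rows = db.execute(
            "SELECT ts, kind, payload, prev_hash, hash FROM events ORDER BY id")
        for ts, kind, payload, prev_hash, digest in rows:
            assert prev_hash == prev, "broken chain: history was altered"
            body = json.dumps({"ts": ts, "kind": kind,
                               "payload": json.loads(payload),
                               "prev": prev_hash}, sort_keys=True)
            assert hashlib.sha256(body.encode()).hexdigest() == digest
            prev = digest
            event = json.loads(payload)
            if kind == "reflection":
                state["reflections"].append(event)
            elif kind == "commitment":
                state["commitments"].append(event)
        return state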
This version is a complete rewrite of what I posted here a few months ago: https://news.ycombinator.com/item?id=45055443
It’s released under a dual license (free non-commercial, paid commercial) so anyone can experiment locally and maintain their own AI persona as they see fit.
A bit of a heads-up: I’m not a professional engineer. I’m just a curious, self-taught builder who wanted to see if an AI could remember itself, and somehow ended up building this thing that actually works.
I’d love feedback from anyone interested in interpretable AI, cognitive architectures, or model-agnostic systems.
Repo: https://github.com/scottonanski/persistent-mind-model-v1.0
I really hope a few people check it out. It’s pretty wild to watch it develop over time.