I ended up building a file-based memory architecture where memory, rules, and state live explicitly outside the model. It’s modular (notes, OCR, training logs, etc.) and I’ve been using it daily for about three months as a general-purpose assistant. It doesn’t require fine-tuning or custom infrastructure — just files and an existing LLM backend.
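For anyone wondering what "memory as files" looks like in practice, here's a minimal sketch of the core idea: plain files on disk get concatenated into context and prepended to each LLM request. The directory layout, filenames, and section format below are my illustration, not the project's actual scheme.

```python
import tempfile
from pathlib import Path

def load_memory(memory_dir: Path) -> str:
    """Concatenate all memory files (notes, rules, state, ...) into one
    context block that can be prepended to an LLM request. The model
    itself stays stateless; the files are the long-term memory."""
    parts = []
    for f in sorted(memory_dir.glob("*.md")):
        # Each file becomes a named section so the model can tell
        # rules apart from scratch notes.
        parts.append(f"## {f.stem}\n{f.read_text().strip()}")
    return "\n\n".join(parts)

# Demo with a throwaway directory (hypothetical contents):
mem = Path(tempfile.mkdtemp())
(mem / "rules.md").write_text("Always answer in English.")
(mem / "state.md").write_text("Current project: garden planner.")
context = load_memory(mem)
print(context)
```

The nice property is that the "database" is just a folder you can grep, diff, and version-control; updating memory is writing a file back out after the model responds.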
I’m curious how others here handle long-term state: files, databases, event sourcing, vector layers, or something else?
For context: private, non-commercial use is free.
Architecture + repos linked here: http://metamemoryworks.com