An observation from 30 sessions ago and a guess from one offhand remark just sit at the same level. So I started tagging beliefs with confidence scores and timestamps, and decaying ones that haven't been reinforced. The most useful piece ended up being a contradictions log where conflicting observations both stay on the record. Default status: unresolved.
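Roughly the shape of it, as a toy sketch (the field names and the two-week half-life here are illustrative, not what I actually run):

```python
import time
from dataclasses import dataclass, field

HALF_LIFE_DAYS = 14  # assumed: confidence halves after two weeks without reinforcement


@dataclass
class Belief:
    text: str
    confidence: float  # 0.0-1.0, assigned when the belief is recorded
    last_reinforced: float = field(default_factory=time.time)

    def effective_confidence(self, now: float | None = None) -> float:
        """Confidence decays for beliefs that haven't been reinforced lately."""
        now = now or time.time()
        age_days = (now - self.last_reinforced) / 86400
        return self.confidence * 0.5 ** (age_days / HALF_LIFE_DAYS)

    def reinforce(self, boost: float = 0.1) -> None:
        """Seeing the belief confirmed again bumps confidence and resets the clock."""
        self.confidence = min(1.0, self.effective_confidence() + boost)
        self.last_reinforced = time.time()


@dataclass
class Contradiction:
    belief_a: Belief
    belief_b: Belief
    status: str = "unresolved"  # default: conflicting observations both stay on record


contradictions: list[Contradiction] = []


def record_conflict(a: Belief, b: Belief) -> None:
    # Nothing gets deleted; the conflict is just logged for later review.
    contradictions.append(Contradiction(a, b))
```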
Tiered loading is smart for retrieval. Curious if you've thought about the confidence problem on top of it, like when something in warm memory goes stale or conflicts with something newer.
In my opinion, this should happen inside the LLM directly. Trying to scaffold it on top of the next-token predictor isn't going to be fruitful enough. It won't get us the robot butlers we need.
But obviously that's really hard. It needs proper ML research, not prompt engineering.
The other thing is that even if the model handles memory internally, you probably still want the beliefs to be inspectable and editable by the user. A hidden internal model of who you are is exactly the problem I was trying to solve. Transparency might need to stay in the scaffold layer regardless.
Cog doesn't use confidence scores (yet — you're making me think about it), but the nightly pipeline is basically a proxy for the same thing. The /reflect pass runs twice a day and does consistency sweeps — it reads canonical files and checks that every referencing file still agrees. When facts drift (and they do, constantly), it catches and fixes them. The reinforcement signal is implicit: things that keep coming up in conversations get promoted to hot memory, things that go quiet eventually get archived to "glacier" (cold storage, still retrievable but not loaded by default).
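If I wrote the implicit reinforcement rule down, it would look something like this (thresholds and field names are invented for illustration; the real pass works over prose, not structured records):

```python
from datetime import datetime, timedelta

# Illustrative thresholds, not Cog's actual numbers.
PROMOTE_AFTER_MENTIONS = 3           # comes up this often recently -> hot
ARCHIVE_AFTER = timedelta(days=60)   # quiet this long -> glacier


def triage(entry: dict, now: datetime) -> str:
    """Pick a tier for one memory entry.

    `entry` is assumed to carry `recent_mentions` (a count from the last
    few sessions) and `last_mentioned` (datetime of the last reference).
    """
    if entry["recent_mentions"] >= PROMOTE_AFTER_MENTIONS:
        return "hot"       # loaded by default
    if now - entry["last_mentioned"] > ARCHIVE_AFTER:
        return "glacier"   # cold storage, retrievable but not loaded by default
    return "warm"
```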
The closest thing to your contradictions log is probably the observations layer — raw timestamped events that never get edited or deleted. Threads (synthesis files) get rewritten freely, but the observations underneath are append-only. So when the AI's understanding changes, the old observations are still there as a paper trail.
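In toy code (file names are placeholders, not the actual layout), the split is just:

```python
import json
import time
from pathlib import Path

OBS_LOG = Path("observations.jsonl")  # hypothetical path


def append_observation(event: str) -> None:
    """Observations are append-only: timestamped, never edited or deleted."""
    with OBS_LOG.open("a") as f:
        f.write(json.dumps({"ts": time.time(), "event": event}) + "\n")


def rewrite_thread(path: Path, synthesis: str) -> None:
    """Threads are synthesis files and get rewritten freely; the observations
    underneath stay put as the paper trail."""
    path.write_text(synthesis)
```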
Where I think you're ahead is making confidence explicit. My system handles staleness through freshness (timestamps, "as of" dates on entities, pipeline frequency) but doesn't distinguish between "I'm very sure about this" and "I inferred this once." That's a real gap. Would love to see what you're building — is it public?
I'd also add that memory is best organized when it's "directed" (purpose-driven). You've already started asking questions where the answers become the memories (at least, you mention this in your description). So, it's really helpful to also define the structure of the answer, or a sequence of questions that lead to a specific conclusion. That way, the memories will be useful instead of turning into chaos.
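A toy example of what I mean, with invented questions and fields:

```python
# Hypothetical "directed" memory prompts: each question ships with the
# structure its answer must fill, so the stored memory has a known shape.
DIRECTED_QUESTIONS = [
    {
        "question": "What is the user currently trying to ship?",
        "answer_schema": {
            "project": "string",
            "deadline": "date or null",
            "blockers": "list of strings",
        },
    },
    {
        "question": "Which earlier conclusion does this session confirm or contradict?",
        "answer_schema": {
            "conclusion_id": "string",
            "relation": "confirms | contradicts",
            "evidence": "string",
        },
    },
]
```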