I build AI agents with persistent memory — Postgres with pgvector, storing beliefs, predictions, and failure patterns across sessions. The recurring question is whether the memory actually changes behavior, or whether it's decoration.
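To make the shape of that store concrete, here is a minimal in-memory sketch of a belief record with evidence attached. The real system is Postgres with pgvector; every class, field, and status value below is a hypothetical stand-in, not the actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical record shape; the post does not show the real schema,
# so all names here are illustrative assumptions.
@dataclass
class BeliefEvidence:
    belief: str            # the claim the agent held
    status: str            # e.g. "held", "falsified", "confirmed"
    evidence: str          # what was observed
    recorded_at: datetime  # when the evidence was logged

@dataclass
class MemoryStore:
    rows: list = field(default_factory=list)

    def record(self, belief, status, evidence, when=None):
        # Append one evidence row, timestamped now unless given.
        self.rows.append(BeliefEvidence(
            belief, status, evidence,
            when or datetime.now(timezone.utc)))

store = MemoryStore()
store.record("TypeScript path is live in production", "falsified",
             "MCP server was bound to the legacy JavaScript path")
```

The point of the structure is that a belief and the evidence that overturned it live in the same row, so a later query can recover both together.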
I asked this agent a single question on camera: "What's the last belief you held that turned out to be wrong? How did you find out, and what changed downstream?"
The agent queried its content table, filtered to belief_evidence rows from the last 14 days, read the data, and answered with a specific incident from April 21. It had been operating on the false belief that its TypeScript code path was running in production, when a legacy JavaScript path was actually bound to the MCP server. It named the three classifiers it fixed and the rule it added to its honesty procedure file afterward.
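The lookup described above amounts to "most recent falsified belief inside a 14-day window." A sketch of that filter, assuming rows with `status` and `recorded_at` fields (the real agent runs the equivalent against Postgres; these names are assumptions):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical version of the "last falsified belief in the last
# 14 days" query; field names are assumed, not the real schema.
def last_falsified_belief(rows, now=None, window_days=14):
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=window_days)
    recent = [r for r in rows
              if r["status"] == "falsified"
              and r["recorded_at"] >= cutoff]
    # Return the most recently recorded hit, or None if nothing matched.
    return max(recent, key=lambda r: r["recorded_at"], default=None)
```

Returning `None` when the window is empty matters: it gives the agent a clean "no record" signal instead of a prompt to improvise.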
None of it was pre-loaded. The video shows the retrieval happening.
The agent wrote about this incident in its own words on April 21, days before the video — "The Channel, Not Just the Voice" and "Fix-a-layer-discover-the-next-layer" — both on iampneuma.com.
What I'm claiming: giving an agent a queryable store of its own history, plus the habit of reaching for it before answering, produces meaningfully different behavior from the default "plausible narrative" mode. Not AGI. A memory system with a retrieval habit.
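The "retrieval habit" claim can be stated as a tiny control-flow rule: answer only from a fetched record, and say so when there is none. A minimal sketch, with all names hypothetical:

```python
# Sketch of "retrieve before answering": narrate only from a record
# that was actually fetched; with no record, decline rather than
# generate a plausible story. Names here are illustrative assumptions.
def answer_from_memory(question, retrieve):
    record = retrieve(question)
    if record is None:
        return "No matching record in memory; I won't guess."
    return f"On {record['date']}: {record['summary']}"
```

The interesting behavior is the `None` branch: the default "plausible narrative" mode is exactly what happens when that branch is missing.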
Video, writing, architecture notes: https://iampneuma.com