I’ve been building AI agents and copilots and kept running into a frustrating problem: they don’t fail loudly; they forget things quietly.
Users re-explain preferences, agents contradict earlier responses, and context resets without any clear visibility into why.
I built Memograph CLI as a debugging tool to analyze conversation transcripts and show:
- what the agent forgot
- where continuity broke
- contradictions and repeated context
- estimated token waste due to re-prompting
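To make the "repeated context" idea concrete, here's a minimal illustrative sketch (not Memograph's actual implementation) of one way to flag turns where a user re-states something they already said, using simple string similarity from the standard library:

```python
# Hypothetical sketch: flag user turns that closely repeat an earlier
# user turn -- a rough proxy for "the agent forgot, so the user
# re-explained". Real detection would be more semantic than this.
from difflib import SequenceMatcher

def find_reexplanations(turns, threshold=0.8):
    """turns: list of (role, text) tuples.
    Returns indices of user turns that closely repeat an earlier user turn."""
    user_turns = [(i, text.lower().strip())
                  for i, (role, text) in enumerate(turns) if role == "user"]
    repeats = set()
    for a in range(len(user_turns)):
        for b in range(a + 1, len(user_turns)):
            _, earlier = user_turns[a]
            j, later = user_turns[b]
            if SequenceMatcher(None, earlier, later).ratio() >= threshold:
                repeats.add(j)
    return sorted(repeats)

transcript = [
    ("user", "Please reply in French."),
    ("assistant", "Bien sur!"),
    ("user", "What's the weather?"),
    ("assistant", "It's sunny today."),   # agent dropped the preference
    ("user", "Please reply in French."),  # user re-explains
]
print(find_reexplanations(transcript))  # → [4]
```

String similarity misses paraphrases, of course; it's just the cheapest way to show the shape of the problem.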
It works locally and supports plain text or JSON transcripts.
Example:

    $ memograph

Output:

    Cognitive Drift Score: 41/100
    Forgotten preferences: 3
    Token waste: 29%
    Trust-breaking contradictions: 1
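For a sense of how a figure like "Token waste: 29%" could be derived (this is a hypothetical sketch, not the tool's actual metric), you can count the tokens a user spends repeating messages they've already sent, as a fraction of all user tokens:

```python
# Hypothetical sketch of a token-waste estimate: tokens spent on exact
# repeats of earlier user messages, as a percentage of all user tokens.
# Whitespace splitting stands in for a real tokenizer.
def token_waste(user_messages):
    seen = set()
    total = wasted = 0
    for msg in user_messages:
        tokens = msg.lower().split()
        total += len(tokens)
        key = " ".join(tokens)
        if key in seen:
            wasted += len(tokens)
        seen.add(key)
    return round(100 * wasted / total) if total else 0

msgs = ["use metric units", "plan my trip", "use metric units"]
print(f"Token waste: {token_waste(msgs)}%")  # → Token waste: 33%
```

A production version would use the model's real tokenizer and catch paraphrased repeats, not just exact ones.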
The goal isn’t to replace your agent framework, but to give developers visibility into memory failures.
Repo: https://github.com/memographAI/Memograph-CLI
Would love feedback, especially from people building agents in production.
Curious to hear:
- How are people currently debugging agent continuity issues?
- Are you seeing users re-explain things often?
Also happy to analyze any sample transcripts if people want to try it.