Quick recap of the idea: AI coding agents forget everything between sessions. Worse, if you switch machines or tools, even in-session memory is gone. Hmem fixes this with a 5-level hierarchy — agents load only L1 summaries on startup (~20 tokens), then drill deeper on demand. Like how you remember "I was in Paris once" before you recall the specific café.
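To make the drill-down concrete, here's a tiny sketch of the idea in TypeScript. All names here (MemNode, startupContext, expand) are illustrative, not hmem's actual API:

```typescript
// Illustrative sketch of load-on-demand memory: cheap L1 summaries
// at startup, deeper levels fetched only per-branch on request.
interface MemNode {
  id: string;        // compound address, e.g. "L0003.2.1"
  level: number;     // 1 = top summary ... 5 = full detail
  summary: string;
  children: MemNode[];
}

// Session start: surface only the L1 summary lines.
function startupContext(roots: MemNode[]): string[] {
  return roots.map(n => `${n.id} ${n.summary}`);
}

// Drill one level deeper into a single branch on demand.
function expand(node: MemNode): string[] {
  return node.children.map(c => `${c.id} ${c.summary}`);
}
```

The point is that startup cost stays flat (one line per root entry) no matter how much detail sits underneath.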
What's new in v2: The tree structure is now properly addressable. Every node gets a compound ID (L0003.2.1), so you can update or append to any branch without touching siblings. update_memory and append_memory work in-place — no delete-and-recreate.
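My reading of the addressing scheme, sketched below: the prefix names a root entry, and each dotted number is a 1-based child index down the tree. This is a guess at the mechanics, not the actual implementation:

```typescript
// Hypothetical resolver for compound IDs like "L0003.2.1".
interface Entry { text: string; children: Entry[]; }

function resolve(roots: Map<string, Entry>, id: string): Entry | undefined {
  const [rootId, ...path] = id.split(".");
  let node = roots.get(rootId);
  for (const seg of path) {
    node = node?.children[Number(seg) - 1]; // 1-based index into the branch
  }
  return node;
}

// In-place edits touch only the addressed node, never its siblings.
function updateMemory(roots: Map<string, Entry>, id: string, text: string): void {
  const node = resolve(roots, id);
  if (node) node.text = text;
}

function appendMemory(roots: Map<string, Entry>, id: string, text: string): void {
  resolve(roots, id)?.children.push({ text, children: [] });
}
```

Because edits address one node, concurrent updates to different branches can't clobber each other.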
Obsolete entries are never deleted, just archived. They stay searchable and teach future agents what not to do. A summary line shows what's hidden.
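A minimal sketch of that policy, assuming an archived flag per entry (field and function names are mine, not hmem's):

```typescript
// Archive-instead-of-delete: hidden from the default view,
// still reachable via search.
interface MemEntry { id: string; text: string; archived: boolean; }

function archive(e: MemEntry): void { e.archived = true; }

// Default view hides archived entries but says how many are hidden.
function activeView(entries: MemEntry[]): string[] {
  const live = entries.filter(e => !e.archived).map(e => e.text);
  const hidden = entries.length - live.length;
  if (hidden > 0) live.push(`(${hidden} archived, still searchable)`);
  return live;
}

// Search spans everything, so past dead ends stay discoverable.
function search(entries: MemEntry[], q: string): MemEntry[] {
  return entries.filter(e => e.text.includes(q));
}
```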
Access-count promotion with logarithmic age decay. Frequently-used entries surface automatically — but newer entries aren't buried just because older ones have more history.
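The post doesn't spell out the exact formula, but one plausible shape is dividing the access count by the logarithm of the entry's age, so an old entry needs many more hits to outrank a fresh one:

```typescript
// Illustrative promotion score (not the actual hmem formula):
// access counts discounted by log of age in days.
function promotionScore(accessCount: number, ageDays: number): number {
  return accessCount / Math.log2(ageDays + 2); // +2 keeps the divisor >= 1 at age 0
}
```

With this shape, a year-old entry with 100 accesses scores about 11.7, so a day-old entry only needs a dozen or so hits to surface above it.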
Session cache with Fibonacci decay. Bulk reads suppress already-seen entries so you don't get the same context dumped every call. Two modes: discover (newest-heavy, good for session start) and essentials (importance-heavy, kicks in after context compression).
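Here's my guess at the mechanics, as a sketch: each time an entry appears in a bulk read, its weight is divided by the next Fibonacci number, so repeats fade fast without vanishing, and the two modes just change the base weight. All of this is assumed, not taken from the implementation:

```typescript
// Hypothetical Fibonacci suppression for already-seen entries.
const fib = [1, 1, 2, 3, 5, 8, 13, 21];

function decayedWeight(base: number, timesSeen: number): number {
  return base / fib[Math.min(timesSeen, fib.length - 1)];
}

type CacheEntry = { text: string; importance: number; ageDays: number; seen: number };

// "discover" weighs recency; "essentials" weighs importance.
function rank(entries: CacheEntry[], mode: "discover" | "essentials"): CacheEntry[] {
  const base = (e: CacheEntry) =>
    mode === "discover" ? 1 / (e.ageDays + 1) : e.importance;
  return [...entries].sort(
    (a, b) => decayedWeight(base(b), b.seen) - decayedWeight(base(a), a.seen)
  );
}
```

The nice property of a Fibonacci schedule is that suppression accelerates: the second repeat costs little, the fifth costs a lot.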
TUI viewer for browsing .hmem files — mirrors exactly what the agent sees at session start, including all markers and scoring.
Curator role — a dedicated agent that runs periodically, audits all memory files, merges duplicates, marks stale entries, and prunes low-value content. Also accessible via the "hmem-self-curate" skill. Still MIT, still npx hmem-mcp init. GitHub: https://github.com/Bumblebiber/hmem
Bumblebiber•1h ago