I built an MCP server that solves this. It gives Claude access to all your past Claude Code sessions.
The UI also offers Claude-powered summarization of past conversations for more concise insights.
Now I can ask: "What authentication approach did I use in that API project?" and Claude searches my session history directly.
5 MCP tools: list_sessions, search_sessions, get_session, get_session_content, search_content.
Also has a desktop UI (Electron + React) to browse sessions visually.
Built with Go + SQLite. Open source (AGPL-3.0). Tested on Mac and Linux.
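For a sense of how the tools might work under the hood, here's a minimal Go sketch of a search_sessions-style lookup against a SQLite FTS5 index. The table, columns, and driver here are assumptions for illustration, not the project's actual schema:

    // Sketch only: a search_sessions-style query against an assumed SQLite
    // FTS5 index of Claude Code sessions. Schema and driver are illustrative.
    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "github.com/mattn/go-sqlite3"
    )

    type Session struct {
        ID      string
        Project string
        Summary string
    }

    // searchSessions ranks sessions whose indexed text matches the query.
    func searchSessions(db *sql.DB, query string) ([]Session, error) {
        rows, err := db.Query(
            `SELECT id, project, summary
               FROM sessions_fts
              WHERE sessions_fts MATCH ?
              ORDER BY rank
              LIMIT 20`, query)
        if err != nil {
            return nil, err
        }
        defer rows.Close()

        var out []Session
        for rows.Next() {
            var s Session
            if err := rows.Scan(&s.ID, &s.Project, &s.Summary); err != nil {
                return nil, err
            }
            out = append(out, s)
        }
        return out, rows.Err()
    }

    func main() {
        db, err := sql.Open("sqlite3", "sessions.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        results, err := searchSessions(db, "authentication")
        if err != nil {
            log.Fatal(err)
        }
        for _, s := range results {
            fmt.Printf("%s [%s]: %s\n", s.ID, s.Project, s.Summary)
        }
    }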
GitHub: https://github.com/tad-hq/universal-session-viewer
Looking for feedback from other Claude Code users.
Still a WIP, but I've been using it daily in my own workflows. Contributions much appreciated.
delaminator•1d ago
It saves all the input chat, all the output chat, which tools were used, and what they were used on.
https://github.com/lawless-m/Devolver
I use about five different computers. It all gets logged to one of them via devlog-receiver, which serves a web page where you can search through all of your sessions across all of your machines. I use DuckDB full-text search.
Sure, I don't have an MCP part. So that bit's different.
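For anyone curious what the DuckDB full-text piece could look like in practice, here's a rough Go sketch using the go-duckdb driver. The messages table, its columns, and the literal search term are assumptions for illustration, not Devolver's actual schema:

    // Sketch only: BM25 full-text search over centrally ingested session logs
    // using DuckDB's fts extension. Table/column names are assumptions.
    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "github.com/marcboeker/go-duckdb"
    )

    func main() {
        db, err := sql.Open("duckdb", "devlog.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // One-time setup: build a full-text index over message content.
        for _, stmt := range []string{
            `INSTALL fts`,
            `LOAD fts`,
            `PRAGMA create_fts_index('messages', 'id', 'content')`,
        } {
            if _, err := db.Exec(stmt); err != nil {
                log.Fatal(err)
            }
        }

        // Rank messages against a search term across every ingested machine.
        rows, err := db.Query(`
            SELECT machine, session_id, content
            FROM (
                SELECT *, fts_main_messages.match_bm25(id, 'authentication') AS score
                FROM messages
            ) AS ranked
            WHERE score IS NOT NULL
            ORDER BY score DESC
            LIMIT 20`)
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()

        for rows.Next() {
            var machine, sessionID, content string
            if err := rows.Scan(&machine, &sessionID, &content); err != nil {
                log.Fatal(err)
            }
            fmt.Printf("[%s] %s: %.80s\n", machine, sessionID, content)
        }
    }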
tad-hq•1d ago
The key difference is you're doing full-text search on raw conversations. With my MCP approach, Claude gets both the raw history and AI-generated summaries.
So when I ask "In project X, what security trade-off did we make on feature Y?", Claude reads the conversation summary, understands it, and answers immediately, rather than sifting through keyword matches.
The MCP piece unlocks agent reasoning over your entire history, not just text retrieval, and the Haiku analysis makes that understanding faster and more holistic.
Different tools for different use cases!
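To make the raw-history-plus-summary idea concrete, here's roughly the shape of record such a tool could hand back to the agent. The field names are illustrative, not the actual schema of either project:

    // Illustrative only: a session record that pairs the raw transcript with a
    // model-generated summary, so an agent can reason over the summary first
    // and drill into the raw messages only when it needs the detail.
    package main

    import (
        "fmt"
        "time"
    )

    type Message struct {
        Role      string    // "user", "assistant", or "tool"
        Content   string    // raw text of the turn
        ToolCalls []string  // names of any tools invoked in this turn
        Timestamp time.Time
    }

    type SessionRecord struct {
        ID       string
        Project  string
        Started  time.Time
        Summary  string    // e.g. a Haiku-generated digest of decisions and trade-offs
        Messages []Message // full raw history, available when the agent asks for it
    }

    func main() {
        rec := SessionRecord{
            ID:      "sess_0142",
            Project: "api-service",
            Started: time.Now(),
            Summary: "Chose JWT auth with short-lived access tokens; deferred refresh-token rotation.",
        }
        fmt.Printf("%s (%s): %s\n", rec.ID, rec.Project, rec.Summary)
    }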
delaminator•1d ago
Tbh I'm just seeing where it goes. I did the "dump the conversation" part as stage 1, added "ingest them centrally", and then the obvious next step was "ok, search them".
I haven't actually had to use it yet. But it is interesting to see which projects got the most prompts and which used the most tokens.
It was all prompted (no pun intended) because I wanted to show a non-programming colleague how the whole "build by prompting" thing works, beyond just typing a couple of demo prompts.
tad-hq•1d ago
Your token analysis feature sounds useful for tracking usage patterns and workflow efficiency; I've thought about adding something similar. A lot of where agentic coding is heading is optimizing tool call usage with proper context engineering, so I definitely see the value there.
delaminator•1d ago
Tool call counts were broken, which is why some are zero.