Try it: https://llm-dag-ui.vercel.app (screenshot in repo)
The idea: conversations with LLMs often hit dead ends or go in directions you want to backtrack from. What if you could branch off from any message and explore a different path, while keeping the original intact?
How it works:
- Drag from any message node to create a new branch.
- Each branch sees only the context of its direct ancestors – it knows nothing about sibling branches or other parts of the tree.
- Delete a node and all its descendants disappear with it.
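The ancestor-only context rule above can be sketched in a few lines. This is a hypothetical illustration, not the repo's actual data model – the node shape and the `contextFor` name are assumptions:

```javascript
// Walk from a node up to the root, then reverse, so a branch's prompt
// contains only its direct ancestors -- sibling branches never leak in.
function contextFor(nodes, leafId) {
  const path = [];
  for (let cur = nodes.get(leafId); cur; cur = nodes.get(cur.parentId)) {
    path.push(cur);
  }
  return path.reverse();
}

// Two branches forked from the same assistant reply (illustrative data):
const nodes = new Map([
  ["root", { id: "root", parentId: null, role: "user", content: "Plan a trip" }],
  ["a", { id: "a", parentId: "root", role: "assistant", content: "Sure -- where to?" }],
  ["b1", { id: "b1", parentId: "a", role: "user", content: "Tokyo" }],
  ["b2", { id: "b2", parentId: "a", role: "user", content: "Lisbon" }],
]);

contextFor(nodes, "b2").map((n) => n.id); // ["root", "a", "b2"] -- b1 is invisible
```

Deleting a node is then just removing its entry and everything whose ancestor chain passes through it.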
Useful when you want to try several different approaches to a problem from the same starting point, or to test how Claude responds to different phrasings. This is a concept demo, not a polished product.
It uses BYOK (bring your own Anthropic API key), stored only in your browser's localStorage. The Express proxy just controls which Claude model is used; your key passes through but is never logged or stored. Beyond the key in localStorage, nothing persists between sessions.
I think this is closer to how LLM conversations should work. The linear chat paradigm made sense for messaging, but exploration is rarely linear.
Code: https://github.com/dgrims3/LLM-DAG-UI
Would love feedback on the interaction model.