I use LLMs a lot for research and idea iteration where I want to remain in the loop to learn/absorb. Linear chats felt wrong for how I think: I want to build strong context, spin off side questions (“side quests”), then merge useful results back into the main line without polluting it.
Concretely: each workspace is a versioned reasoning tree with explicit branch/merge semantics.
Highlights
- Branch from an assistant reply (entire reply or a highlighted section; can run in the background)
- Edit an earlier user message and continue exploring on a new branch
- Merge useful results back into another branch
- Live graph view of the reasoning DAG to jump between nodes
- Per-branch model/provider settings (incl. “thinking” content where available)
- Quote replies for quick line-by-line markup
- Share workspaces with other users by email
- Postgres backend (Supabase or local adapter); an older local git backend is mostly deprecated now
- Electron desktop shell for local workflows
- Code: https://github.com/benjaminfh/researchtree
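To make the branch/merge model above concrete, here is a minimal sketch of a conversation DAG with git-style semantics: branching is "new child of any node", merging is "new node with two parents". This is purely illustrative — the node fields, class names, and merge strategy are my assumptions, not ResearchTree's actual schema.

```typescript
// Illustrative sketch only — not ResearchTree's actual data model.
type NodeId = string;

interface ChatNode {
  id: NodeId;
  parents: NodeId[];          // 0 for root, 1 for normal turns, 2+ for merges
  role: "user" | "assistant";
  content: string;
}

class ReasoningDag {
  private nodes = new Map<NodeId, ChatNode>();
  private next = 0;

  append(role: ChatNode["role"], content: string, parents: NodeId[] = []): ChatNode {
    const node: ChatNode = { id: `n${this.next++}`, parents, role, content };
    this.nodes.set(node.id, node);
    return node;
  }

  // Branch: start a side quest from any existing node.
  branchFrom(parent: NodeId, role: ChatNode["role"], content: string): ChatNode {
    return this.append(role, content, [parent]);
  }

  // Merge: a node with two parents, folding a side quest back into the main line.
  merge(mainTip: NodeId, sideTip: NodeId, summary: string): ChatNode {
    return this.append("user", summary, [mainTip, sideTip]);
  }

  // Context for the next prompt = ancestors of a tip, oldest first,
  // via depth-first topological order (each node after all its parents).
  contextFor(tip: NodeId): ChatNode[] {
    const seen = new Set<NodeId>();
    const out: ChatNode[] = [];
    const visit = (id: NodeId) => {
      if (seen.has(id)) return;
      seen.add(id);
      const n = this.nodes.get(id)!;
      n.parents.forEach(visit);
      out.push(n);
    };
    visit(tip);
    return out;
  }
}
```

The useful property of the two-parent merge node is that `contextFor` on the main tip after a merge includes the side quest's conclusion exactly once, without replaying every intermediate side-quest turn into the main line.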
Note: the hosted app currently requires signup (workspaces are persisted and shareable). If you'd rather not create an account, the repo supports running locally: build the Electron app with npm run desktop:package and point it at a local Postgres instance, which is quick to spin up with e.g. https://postgresapp.com/.
Feedback I’m hoping for: does branch/merge for context match how you actually work with LLMs? Where does the branching/merging UX feel awkward?
This started as an experiment in two parts:
- Are trees the right data structure for context, for humans and for agents? (See also: Pi, which is awesome!)
- How far could I get with a PRD (PM_DOCS/ARCHIVE/PRD.md if you're curious) plus Codex writing all of the code, with me only reviewing and thinking about architecture? Answer: very far.