frontpage.

Show HN: Git for LLMs – A context management interface

https://twigg.ai
58•jborland•10h ago•16 comments

Show HN: OpenSnowcat – A fork of Snowplow to keep open analytics alive

https://opensnowcat.io/
49•joaocorreia•6h ago•12 comments

Show HN: I built a tech news aggregator that works the way my brain does

https://deadstack.net/recent
134•dreadsword•7h ago•75 comments

Show HN: Deta Surf – An open source and local-first AI notebook

https://github.com/deta/surf
116•mxek•13h ago•38 comments

Show HN: Nostr Web – decentralized website hosting on Nostr

https://nweb.shugur.com
86•karihass•11h ago•20 comments

Show HN: Open-source TypeScript SDK for sending and operating iMessages

https://github.com/sg-hq/imessage-kit
2•RyanZhuuuu•2h ago•2 comments

Show HN: ScreenAsk – Free Screen Recording Links for Customer Support

https://screenask.com
13•ladybro•8h ago•0 comments

Show HN: I built Kumi – a typed, array-oriented dataflow compiler in Ruby

https://kumi-play-web.fly.dev/
4•goldenCeasar•3h ago•0 comments

Show HN: FlowLens – MCP server for debugging with Claude Code

https://magentic.ai/flowlens/
4•mzidan101•3h ago•0 comments

Show HN: Hacker News sans AI content

https://tokyo-synth-1243_4mn1lfqabzpz.vibesdiy.app/
4•neom•4h ago•1 comment

Show HN: Tommy – Turn ESP32 devices into through-wall motion sensors

https://www.tommysense.com
68•mike2872•8h ago•54 comments

Show HN: Cuq – Formal Verification of Rust GPU Kernels

https://github.com/neelsomani/cuq
91•nsomani•1d ago•60 comments

Show HN: Silly Morse code chat app using WebSockets

https://noamtamir.github.io/morwse/
73•noamikotamir•5d ago•30 comments

Show HN: Desponsorize – Gray out Amazon sponsored search results

https://github.com/candacelabs/desponsorize
2•kaashmonee•5h ago•0 comments

Show HN: Play abstract strategy board games online with friends or against bots

https://abstractboardgames.com/
171•abstractbg•1w ago•79 comments

Show HN: Coyote – Wildly Real-Time AI

https://getcoyote.app
6•michalwarda•8h ago•10 comments

Show HN: BesiegeField – LLM Agents Learn to Build Machines in a Physics Sandbox

https://besiegefield.github.io/
2•zepist•6h ago•0 comments

Show HN: Cadence – A guitar theory app

https://cadenceguitar.com/
191•apizon•1w ago•86 comments

Show HN: Story Keeper – AI agents with narrative continuity instead of memory

https://github.com/neurobloomai/pact-ax
2•neurobloom•6h ago•0 comments

Show HN: Emojiwhat – Unicode and TikTok emojis to copy and paste

https://emojiwhat.com/
2•kyrylo•6h ago•0 comments

Show HN: Pg_textsearch – BM25 Ranking for Postgres

https://docs.tigerdata.com/use-timescale/latest/extensions/pg-textsearch/
5•tjgreen•8h ago•0 comments

Show HN: Distil-NPC: a family of models for non-playable characters in games

https://github.com/distil-labs/Distil-NPCs
4•party-horse123•8h ago•0 comments

Show HN: Modshim – A new alternative to monkey-patching in Python

https://github.com/joouha/modshim
108•joouha•1w ago•31 comments

Show HN: hist: An overengineered solution to `sort|uniq -c` with 25x throughput

https://github.com/noamteyssier/hist-rs
3•noamteyssier•9h ago•3 comments

Show HN: 401K Traditional vs. Roth Calculator

https://401k.pages.dev/
2•vjain014•9h ago•1 comment

Show HN: Katakate – Dozens of VMs per node for safe code exec

https://github.com/Katakate/k7
120•gbxk•2d ago•51 comments

Show HN: How Software Fails – A book about complex system failures (sample chapter)

2•enginyoyen•9h ago•0 comments

Show HN: A browser for Mac that connects to private web apps over SSH

https://outerloop.sh
2•mrcslws•9h ago•0 comments

Show HN: I built an SVG-generation tool

https://scalablevector.graphics/
6•ninapanickssery•9h ago•1 comment

Show HN: Create interactive diagrams with pop-up content

https://vexlio.com/features/interactive-diagrams-with-popups/
43•ttd•1d ago•5 comments

Show HN: Git for LLMs – A context management interface

https://twigg.ai
58•jborland•10h ago
Hi HN, we’re Jamie and Matti, co-founders of Twigg.

During our master's we kept running into the same pain points when using LLMs. The linear nature of typical LLM interfaces - like ChatGPT and Claude - makes it easy to get lost, with no easy way to visualise or navigate your project.

Worst of all, none of them are well suited to long-term projects. We found ourselves spending days in the same chat, only for it to eventually break. Transferring context from one chat to another is also cumbersome. We decided to build something more intuitive to the way humans think.

We started with two simple ideas: chat branching for exploring tangents, and an interactive tree diagram for easy visualisation and navigation of your project.

Twigg has developed into an interface for context management - like "Git for LLMs". We believe the input to a model - its context - is fundamental to its performance. To extract the maximum potential of an LLM, users need complete control over exactly what context is provided to the model, which Twigg gives you through simple operations like cut, copy and delete on your tree.
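The branching model described here can be sketched roughly as follows (a hypothetical illustration, not Twigg's actual code): each node holds one prompt/response pair, and the context sent to the model is the path from the root to the active node.

```python
# Hypothetical sketch of a branching chat tree (not Twigg's actual code).
# Each node holds one prompt/response pair; the context sent to the model
# is the root-to-node path, so sibling branches never pollute each other.

from dataclasses import dataclass, field

@dataclass
class Node:
    prompt: str
    response: str
    parent: "Node | None" = None
    children: list["Node"] = field(default_factory=list)

    def branch(self, prompt: str, response: str) -> "Node":
        """Fork a new child node off this point in the conversation."""
        child = Node(prompt, response, parent=self)
        self.children.append(child)
        return child

def context_for(node: Node) -> list[dict]:
    """Walk root -> node, emitting chat messages in order."""
    path = []
    while node is not None:
        path.append(node)
        node = node.parent
    messages = []
    for n in reversed(path):
        messages.append({"role": "user", "content": n.prompt})
        messages.append({"role": "assistant", "content": n.response})
    return messages

root = Node("What is a monad?", "A monad is ...")
tangent = root.branch("Give a Haskell example", "...")  # explore a tangent
main = root.branch("Now in OCaml", "...")               # sibling branch

# The tangent's context is root + tangent only; the sibling is excluded.
assert len(context_for(tangent)) == 4
```

The point of the tree structure is visible in the last assertion: each leaf assembles its own linear history, so tangents stay isolated by construction.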

Through Twigg, you can access a variety of LLMs from all the major providers, like ChatGPT, Gemini, Claude, and Grok. Aside from a standard tiered subscription model (free, plus, pro), we also offer a Bring Your Own Key (BYOK) service, where you can plug and play with your own API keys.

Our target audience is technical users who regularly use LLMs for large projects. If this sounds like you, please try out Twigg - you can sign up for free at https://twigg.ai/. We would love to get your feedback!

Comments

djgrant•10h ago
This is an interesting idea. Have you considered allowing different models for different chat nodes? My current very primitive solution is to have AI studio on one side of my screen and ChatGPT on the other, and me in the middle playing them off each other.
jborland•10h ago
Yes, you can switch models at any time for different chat nodes, so you can have different LLMs review each other's work, for example. We currently support all the major models from ChatGPT, Gemini, Claude and Grok. Hope this helps!
confusus•5h ago
Really cool! I'd want something like this for Claude Code or other terminal-based tools. When working on code I already interrupt and resume the same session in multiple terminals so I can explore different pathways at the same time without the parallel sessions polluting one another. Currently this is really clunky in Claude Code.

Anyway, great project! Cheers.

jborland•5h ago
Thanks! I totally agree, we want to add CLI agent integration! I often use Gemini CLI (as it's free), and it's so frustrating not being able to easily explore different tangents.

Would you prefer a terminal, Claude Code-style integration, or would a browser-based CLI integration work too?

captainkrtek•4h ago
Imo I'd prefer terminal for this as well, i.e. if I could keep context specific to a branch, or even switch contexts within a branch.
jborland•4h ago
Thanks for the feedback. We will add in CLI integration soon!

Could you please explain what you mean by "within branch" context switches?

The way Twigg works is that you choose exactly which prompt/output pairs (we call them nodes) are sent to the model. You can move nodes from one branch to another. For example, if you fix a bug in one branch, you can add the corrected solution as context to another branch by moving that node, whilst leaving behind the irrelevant context spent trying to find the fix.

This way you can specify exactly what context is in each branch.
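The node-moving behaviour described above might look something like this (a minimal sketch assuming a flat list of nodes per branch; Twigg's real data model is not public):

```python
# Sketch of moving a prompt/output node between branches (hypothetical,
# not Twigg's implementation). A branch's context is the list of nodes
# attached to it, so moving a node changes exactly what the model sees.

class ChatNode:
    def __init__(self, prompt: str, output: str):
        self.prompt, self.output = prompt, output

class Branch:
    def __init__(self, name: str):
        self.name = name
        self.nodes: list[ChatNode] = []

def move_node(node: ChatNode, src: Branch, dst: Branch) -> None:
    """Detach node from src's context and append it to dst's."""
    src.nodes.remove(node)
    dst.nodes.append(node)

bugfix = Branch("bugfix")
feature = Branch("feature")
noise = ChatNode("Try adding prints", "Still crashes ...")
fix = ChatNode("Why does this segfault?", "Corrected code: ...")
bugfix.nodes += [noise, fix]

# Carry only the corrected solution across; the dead ends stay behind.
move_node(fix, bugfix, feature)
assert fix in feature.nodes and fix not in bugfix.nodes
```

The dead-end exploration (`noise`) never enters the feature branch's context, which is the "ignoring the irrelevant context" behaviour described above.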

boomskats•4h ago
Ha! This looks really nice, and I'm right there with you on the context development UX being clunky to navigate.

A couple of weeks ago I built something very very similar, only for Obsidian, using the Obsidian Canvas and OpenRouter as my baseline components. Works really nicely - handles image uploads, autolayout with dagre.js, system prompts, context export to flat files, etc. Think you've inspired me to actually publish the repo :)

jborland•3h ago
That's great to hear! Best of luck with it, let me know how it goes.

I definitely think there is a lot of work to do on context management UX. We use React Flow for our graph, and we manage the context and its tree structure ourselves, so it's completely model agnostic. The same goes for our RAG system, so we can plug and play with any model! Is that similar for you?

heliostatic•30m ago
Would love to see that - haven't found a great LLM interface for Obsidian yet.
Edmond•3h ago
We implemented a similar idea some time back and it has proven quite useful: https://blog.codesolvent.com/2025/01/applying-forkjoin-model...

In Solvent, the main utility is allowing forked-off use of the same session without context pollution.

For instance, a coding assistant session can be used to generate a checklist as a fork, followed by the core task of writing code. This lets the human user see the related flows (checklist gen, requirements gen, coding, etc.) in chronological order without context pollution.

jborland•3h ago
Great to hear others are thinking along similar lines!

Context pollution is a serious problem - I love that you use that term as well.

Have you had good feedback for your fork-off implementation?

Edmond•3h ago
Feel free to "borrow" the term "context pollution" :)

Yes, it has proven quite a useful feature, primarily for the reason stated above: users get a full log of what's going on in the same session where the core task is taking place.

We also use it extensively to facilitate back-and-forth conversation with the agents, for instance a lot of our human-in-loop capabilities rely on the forking functionality...the scope of its utility has been frankly surprising :)

cootsnuck•1h ago
Yea, this really needed to happen. Idk if this specific branching type of interface will stand the test of time, but I'm glad to see people finally braving beyond the basic chat interface (which I think many of us forget was only ever meant to be a demo...yet it remains default and dominant).
kanodiaayush•1h ago
I tried it - I've built for a very similar but still different use case. I wonder if you have thoughts on how much of this is our own context management vs. context management for the LLM. Ideally, I don't want to do any work for the LLM; it should be able to figure out from the chat which 'branch' of the tree I'm exploring, and then the artifact is purely for one's own use.
mdebeer•7m ago
Hi, Matti here.

Very interesting you bring this up. It was quite a big point of discussion whilst jamie and I were building.

One of the big issues we faced with LLMs is that their attention gets diluted when you have a long chat history. This means that for large amounts of context, they often can't pick out the details your prompt relates to. I'm sure you've noticed this once your chat gets very long.

Instead of trying to develop an automatic system to decide what context your prompt should use (i.e. which branch you're on), we opted to make organising your tree a very deliberate action. This gives you a lot more control over what the model sees, and ultimately how good the responses are. As a bonus, if a model is playing up, you can go in and change the context it has by moving a node or two around.

Really good point though, and thanks for asking about it. I'd love to hear if you have any thoughts on ways you could get around it automatically.

visarga•1m ago
I am using a graph-based format stored as a text file. It is as simple as possible: each node is a line, prefixed with a node id and containing inline node references.

[1] *Node Title* - node text with [2] references inlined.

This works well because the model can load and edit the map line by line, overwriting nodes as it needs to. You can also use grep to look up nodes. The graph format allows any kind of expansion; it is a superset of trees, lists, and any other structure. It is also easy for LLMs to generate, because they are accustomed to this citation format from papers. The trick is that links are generated at the same time as the text itself. On a small project I would have <50 nodes; for complex topics I would grow to 500 nodes.
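A parser for this line-per-node format might look like the sketch below (my reading of the format described above; the exact grammar is an assumption):

```python
# Sketch of parsing the line-per-node graph format described above.
# Assumed grammar (my reading, not the author's spec):
#   [id] *Title* - body text with [ref] links inlined

import re

LINE = re.compile(r"\[(\d+)\]\s+\*(.+?)\*\s+-\s+(.*)")
REF = re.compile(r"\[(\d+)\]")

def parse(text: str) -> dict[int, dict]:
    """Build {id: {title, body, refs}} from the flat text map."""
    nodes = {}
    for line in text.splitlines():
        m = LINE.match(line)
        if not m:
            continue  # skip lines that aren't nodes
        nid, title, body = int(m.group(1)), m.group(2), m.group(3)
        nodes[nid] = {
            "title": title,
            "body": body,
            "refs": [int(r) for r in REF.findall(body)],
        }
    return nodes

graph = parse(
    "[1] *Intro* - overview, see [2] for details.\n"
    "[2] *Details* - expands on [1]."
)
assert graph[1]["refs"] == [2]
```

Because every node is one line, the model (or grep) can address, rewrite, or append nodes without parsing the whole structure, which is exactly the property described above.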