frontpage.
How the U.S. military can trim its carbon footprint

https://attheu.utah.edu/research/heres-how-the-u-s-military-can-trim-its-massive-carbon-footprint/
1•geox•19m ago•0 comments

Building a Sleeper Computer from an SGI Indy

https://buu342.me/blog/projects/SGIIndySleeper.html
1•marcodiego•21m ago•0 comments

Metis Agent Starter Kit – Build production AI agents in minutes, not weeks

1•cjohnsonpr•26m ago•0 comments

Adding Nvidia GPU Boost to Proxmox K8s Using Pulumi and Kubespray

https://medium.com/@madhankumaravelu93/adding-nvidia-gpu-boost-to-proxmox-k8s-using-pulumi-and-kubespray-d5d9d3dace94
1•madhank93•27m ago•0 comments

Wake word for opening Claude and/or Cursor

https://github.com/Traves-Theberge/Wake-Word
1•Traves-Theberge•35m ago•1 comment

Massistant Chinese Mobile Forensic Tooling

https://www.lookout.com/threat-intelligence/article/massistant-chinese-mobile-forensics
1•libpcap•36m ago•0 comments

Intel to boost gross margins – new products must deliver 50% gross profit

https://www.tomshardware.com/tech-industry/semiconductors/intel-draws-a-line-in-the-sand-to-boost-gross-margins-new-products-must-deliver-50-percent-to-get-the-green-light
12•walterbell•41m ago•2 comments

The UI Framework for Perfectionists

https://www.chainlift.io/liftkit
3•noncoml•48m ago•0 comments

Parallelizing the Physics Solver

https://www.youtube.com/watch?v=Kvsvd67XUKw
2•todsacerdoti•52m ago•0 comments

Ask HN: What would convince you to take AI seriously?

6•atleastoptimal•1h ago•11 comments

Ask HN: How do you digest fact that you are not successful by 40

5•hubmusic•1h ago•16 comments

I Used Arch, BTW: macOS, Day 1

https://yberreby.com/posts/i-used-arch-btw-macos-day-1/
2•yberreby•1h ago•4 comments

Beyond Meat Fights for Survival

https://foodinstitute.com/focus/beyond-meat-fights-for-survival/
3•airstrike•1h ago•0 comments

Exploring Task Performance with Interpretable Models via Sparse Auto-Encoders

https://arxiv.org/abs/2507.06427
1•PaulHoule•1h ago•0 comments

Optimizations That Aren't

https://zeux.io/2010/11/29/optimizations-that-arent/
1•daniel_alp•1h ago•0 comments

Actual, Current, Real-World Cuts to NASA Planetary R&A

https://research.ssl.berkeley.edu/~mikewong/blog_14.php#roses25
2•mrexroad•1h ago•0 comments

Asymmetry of Verification and Verifier's Law

https://www.jasonwei.net/blog/asymmetry-of-verification-and-verifiers-law
1•polrjoy•1h ago•0 comments

The unspoken truth about the baby bust

https://www.ft.com/content/5c5e8a56-e557-4741-a94e-c6e06cc1108b
2•toomuchtodo•1h ago•1 comment

There is no "Three Mile Island" event coming for software

https://surfingcomplexity.blog/2022/10/08/there-is-no-three-mile-island-event-coming-for-software/
4•azhenley•1h ago•0 comments

Tech's Top Venture Firm Tried to Stay Above Politics. A Partner Created a Furor

https://www.nytimes.com/2025/07/19/technology/sequoia-capital-shaun-maguire-mamdani.html
4•aspenmayer•1h ago•2 comments

Assembling a Retro Chip Tester Pro

https://celso.io/posts/2025/07/19/retro-chip-tester/
2•celso•1h ago•0 comments

The CIA's 'Minerva' Secret

https://nsarchive.gwu.edu/briefing-book/chile-cyber-vault-intelligence-southern-cone/2020-02-11/cias-minerva-secret
1•exiguus•1h ago•1 comment

Most accurate clock requires a 2-mile laser beam

https://www.popsci.com/technology/most-accurate-optical-atomic-clock/
2•Bluestein•1h ago•0 comments

Zen 6 Magnus Leak: AMD's APU for PS6?

https://www.youtube.com/watch?v=GKdRXEgV82g
2•doener•1h ago•0 comments

Show HN: I made a scan app in the skeuomorphic style of iOS 6.0

https://apps.apple.com/us/app/scan-convert-to-pdf/id6727013863
3•JulienLacr0ix•2h ago•0 comments

Shining a Light on the World of Tiny Proteins

https://www.nytimes.com/2025/06/12/science/genes-dna-microproteins.html
1•bookofjoe•2h ago•1 comment

Tear It Down, They Said. He Just Kept Building

https://www.nytimes.com/2025/07/19/world/asia/china-demolition-house.html
1•mbjorkegren•2h ago•0 comments

Detour: A detour through the Linux dynamic linker

https://github.com/graphitemaster/detour
1•birdculture•2h ago•0 comments

When Everything Is Vibing

https://www.computerworld.com/article/4022711/when-everything-is-vibing.html
1•fariszr•2h ago•0 comments

Mount Thor: The mountain with Earth's longest vertical drop

https://www.livescience.com/planet-earth/geology/mount-thor-the-mountain-with-earths-longest-vertical-drop
2•bikenaga•2h ago•0 comments

Ask HN: Where is Git for my Claude Code conversations?

2•lil-lugger•3h ago
I’m a graphic designer who has dropped everything to build a SaaS app, first with a bunch of different AI coding tools and now exclusively Claude Code. I’ve been working on this project long enough that I have to re-explain business logic over and over, or I find the agent has done something I missed, like adding a new DB column; I don’t know why, so I revert it - then realise it was the right change. I keep hitting this same frustrating wall.

Then it hit me: we’re tracking version history wrong in the AI era. Or I guess, we’re not tracking it at all.

Let’s consider prompting Claude Code as its own programming language. When I write “modify our user authentication system to handle edge case X and constraint Y,” that’s not just a request - it’s source code. The JavaScript it outputs is the ‘compiled’ result.

But we’re then only tracking the git history of the codebase at the “assembly” level. It’s as if you could only commit your compiled binaries and always threw away the original C code. Every time you wanted to modify your program, you’d have to reverse-engineer what you originally meant to write from the assembly output.

Why don’t we track the original inputs? The conversation contains the real logic:

* Requirements and constraints
* Edge cases discovered through iteration
* Why we rejected certain approaches
* Business context that shaped decisions

Right now, all of that reasoning just… disappears. We’re both (me and Claude) left with only the code, guessing at the intent.

The scale problem is real - you can’t just dump entire conversation threads into version control. A single coding session might be 50k tokens of back-and-forth, and most of that is noise. The signal is in specific moments: the user prompt and the agent reasoning that led to each code modification.

What if we tracked it line by line? Claude Code already works line by line - when it edits code, it rewrites entire lines. We could tag every line of code with conversation IDs: store all Claude Code conversations as JSON, where each prompt and each piece of agent reasoning gets its own ID. When a prompt leads the agent to produce a tool call that edits lines, those lines carry their own metadata of conversation IDs recording why they were written.
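A rough sketch of what that could look like (the JSON shape, IDs, and filenames here are all invented for illustration): a conversation store keyed by prompt ID, plus a per-file index mapping line numbers to the IDs that produced them.

```python
import json

# Invented format: each prompt / reasoning step in a conversation gets an ID.
conversation = {
    "P_001": "I want to make this page only allow users who are logged in...",
    "P_047": "The user wants me to X, I should change the server file...",
}

# Per-file index: line number -> conversation IDs that produced that line.
line_index = {"server.js": {23: ["P_001", "P_047"]}}

def tags_for_line(path, line):
    """Look up the conversation entries linked to one line of code."""
    ids = line_index.get(path, {}).get(line, [])
    return [(i, conversation[i]) for i in ids]

# The whole store serialises to JSON, so it could be checked into the repo.
serialized = json.dumps({"conversation": conversation, "index": line_index})
print(tags_for_line("server.js", 23))
```

The lookup is the “click to expand” interaction from the post: given a file and a line, return every prompt and reasoning snippet tagged to it.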

Imagine browsing your codebase and seeing that line 23 has 4 conversation tags. Click to expand and see: “I want to make this page only allow users who… [P_001], The user wants me to X, I should change the server file… [P_047], make sure we also include… [P_089], All tests passed but we still haven’t solved x edge case… [P_156].”

You can trace the entire decision history behind every single line.

I’m sure there are implementation challenges I’m not seeing as a coding newcomer. We’ve figured out how to make AI write code that works - but we’re losing the most valuable part (the why) and only keeping the output.

Has anyone experimented with conversational version control? Are there technical reasons this wouldn’t work that I’m missing?

Comments

zahlman•3h ago
> Why don’t we track the original inputs?

My suggestion is that you try using actual, literal Git for this, and then evaluate for yourself. Git doesn't care about programming language syntax. It cares about being able to feed text files to a diff algorithm.

Aside from saving versioned conversations as text files, if you have e.g. an entire commit generated from a prompt, you can include the prompt in the commit message.
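A minimal sketch of that suggestion (repo path, commit message, and prompt text are all made up): the prompt goes into the commit message body, and `git log` recovers it later.

```python
import os
import pathlib
import subprocess
import tempfile

def run(*args):
    """Run a git command and return its stdout."""
    return subprocess.run(args, check=True, capture_output=True, text=True).stdout

# Throwaway demo repo.
os.chdir(tempfile.mkdtemp())
run("git", "init", "-q")
pathlib.Path("server.js").write_text('console.log("hi")\n')
run("git", "add", "server.js")

# Subject line describes the change; the body carries the original prompt.
run("git", "-c", "user.name=demo", "-c", "user.email=demo@example.com",
    "commit", "-q",
    "-m", "Handle auth edge case X",
    "-m", "Prompt: modify our user authentication system to handle edge case X")

# %b prints the commit body, i.e. the saved prompt.
print(run("git", "log", "-1", "--format=%b"))
```

No tooling beyond git itself is needed for this version; the trade-off, as the reply below notes, is that the context is attached to the commit rather than to individual lines.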

lil-lugger•3h ago
That’s fine, but you’d have to find the commit message to understand the context. What I’m describing is closer to an extended git blame per line with more metadata: if you store the conversation history locally, you can link each line to its conversation context, and that context lives with the code, not with the git commit.
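One way the “lives with the code” idea could work (a sketch; the sidecar filename and JSON shape are invented): keep a small mapping file next to each source file, so the line-to-conversation links travel with the file rather than with the git history.

```python
import json
import pathlib

# Invented convention: a sidecar file next to the code holds the mapping
# from line numbers to the conversation IDs that produced those lines.
sidecar = pathlib.Path("server.js.conv.json")
sidecar.write_text(json.dumps({"23": ["P_001", "P_047"]}))

def conversation_ids(path, line):
    """Extended-blame-style lookup: which conversations touched this line?"""
    side = pathlib.Path(str(path) + ".conv.json")
    if not side.exists():
        return []
    return json.loads(side.read_text()).get(str(line), [])

print(conversation_ids("server.js", 23))  # ['P_001', 'P_047']
```

A tool would still need to re-map line numbers as the file is edited, which is the hard part git blame already solves for commits.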

At the moment I just use extensive docs to track decisions and business logic, but they’re static and constantly going stale.