
U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
1•alephnerd•10s ago•0 comments

Bithumb mistakenly hands out $195M in Bitcoin to users in 'Random Box' giveaway

https://koreajoongangdaily.joins.com/news/2026-02-07/business/finance/Crypto-exchange-Bithumb-mis...
1•giuliomagnifico•11s ago•0 comments

Beyond Agentic Coding

https://haskellforall.com/2026/02/beyond-agentic-coding
1•todsacerdoti•1m ago•0 comments

OpenClaw ClawHub Broken Windows Theory – If basic sorting isn't working what is?

https://www.loom.com/embed/e26a750c0c754312b032e2290630853d
1•kaicianflone•3m ago•0 comments

OpenBSD Copyright Policy

https://www.openbsd.org/policy.html
1•Panino•4m ago•0 comments

OpenClaw Creator: Why 80% of Apps Will Disappear

https://www.youtube.com/watch?v=4uzGDAoNOZc
1•schwentkerr•8m ago•0 comments

What Happens When Technical Debt Vanishes?

https://ieeexplore.ieee.org/document/11316905
1•blenderob•9m ago•0 comments

AI Is Finally Eating Software's Total Market: Here's What's Next

https://vinvashishta.substack.com/p/ai-is-finally-eating-softwares-total
2•gmays•9m ago•0 comments

Computer Science from the Bottom Up

https://www.bottomupcs.com/
2•gurjeet•10m ago•0 comments

Show HN: I built a toy compiler as a young dev

https://vire-lang.web.app
1•xeouz•11m ago•0 comments

You don't need Mac mini to run OpenClaw

https://runclaw.sh
1•rutagandasalim•12m ago•0 comments

Learning to Reason in 13 Parameters

https://arxiv.org/abs/2602.04118
1•nicholascarolan•14m ago•0 comments

Convergent Discovery of Critical Phenomena Mathematics Across Disciplines

https://arxiv.org/abs/2601.22389
1•energyscholar•14m ago•1 comments

Ask HN: Will GPU and RAM prices ever go down?

1•alentred•15m ago•0 comments

From hunger to luxury: The story behind the most expensive rice (2025)

https://www.cnn.com/travel/japan-expensive-rice-kinmemai-premium-intl-hnk-dst
2•mooreds•16m ago•0 comments

Substack makes money from hosting Nazi newsletters

https://www.theguardian.com/media/2026/feb/07/revealed-how-substack-makes-money-from-hosting-nazi...
5•mindracer•17m ago•2 comments

A New Crypto Winter Is Here and Even the Biggest Bulls Aren't Certain Why

https://www.wsj.com/finance/currencies/a-new-crypto-winter-is-here-and-even-the-biggest-bulls-are...
1•thm•17m ago•0 comments

Moltbook was peak AI theater

https://www.technologyreview.com/2026/02/06/1132448/moltbook-was-peak-ai-theater/
1•Brajeshwar•17m ago•0 comments

Why Claude Cowork is a math problem Indian IT can't solve

https://restofworld.org/2026/indian-it-ai-stock-crash-claude-cowork/
1•Brajeshwar•18m ago•0 comments

Show HN: Built a space travel calculator with vanilla JavaScript v2

https://www.cosmicodometer.space/
2•captainnemo729•18m ago•0 comments

Why a 175-Year-Old Glassmaker Is Suddenly an AI Superstar

https://www.wsj.com/tech/corning-fiber-optics-ai-e045ba3b
1•Brajeshwar•18m ago•0 comments

Micro-Front Ends in 2026: Architecture Win or Enterprise Tax?

https://iocombats.com/blogs/micro-frontends-in-2026
2•ghazikhan205•20m ago•0 comments

These White-Collar Workers Actually Made the Switch to a Trade

https://www.wsj.com/lifestyle/careers/white-collar-mid-career-trades-caca4b5f
1•impish9208•20m ago•1 comments

The Wonder Drug That's Plaguing Sports

https://www.nytimes.com/2026/02/02/us/ostarine-olympics-doping.html
1•mooreds•21m ago•0 comments

Show HN: Which chef knife steels are good? Data from 540 Reddit threads

https://new.knife.day/blog/reddit-steel-sentiment-analysis
1•p-s-v•21m ago•0 comments

Federated Credential Management (FedCM)

https://ciamweekly.substack.com/p/federated-credential-management-fedcm
1•mooreds•21m ago•0 comments

Token-to-Credit Conversion: Avoiding Floating-Point Errors in AI Billing Systems

https://app.writtte.com/read/kZ8Kj6R
1•lasgawe•22m ago•1 comments

The Story of Heroku (2022)

https://leerob.com/heroku
1•tosh•22m ago•0 comments

Obey the Testing Goat

https://www.obeythetestinggoat.com/
1•mkl95•23m ago•0 comments

Claude Opus 4.6 extends LLM Pareto frontier

https://michaelshi.me/pareto/
1•mikeshi42•23m ago•0 comments

Ask HN: Are diffs still useful for AI-assisted code changes?

7•nuky•3w ago
I’m wondering whether traditional diffs are becoming less suitable for AI-assisted development.

Lately I’ve been feeling frustrated during reviews when an AI generates a large number of changes. Even if the diff is "small", it can be very hard to understand what actually changed in behavior or structure.

I started experimenting with a different approach: comparing two snapshots of the code (baseline and current) instead of raw line diffs. Each snapshot captures a rough API shape and a behavior signal derived from the AST. The goal isn’t deep semantic analysis, but something fast that can signal whether anything meaningful actually changed.

It’s intentionally shallow and non-judgmental — just signals, not verdicts.
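
To make that concrete, here's a minimal sketch of what I mean, using only Python's stdlib ast. All names are made up for illustration, and the real thing would need to be language-agnostic:

```python
import ast
import hashlib

def snapshot(source: str) -> dict:
    """Capture a rough public API shape and a coarse behavior signal per function."""
    api, behavior = {}, {}
    for node in ast.parse(source).body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)) and not node.name.startswith("_"):
            params = ", ".join(a.arg for a in node.args.args)
            api[node.name] = f"{node.name}({params})"
            # ast.dump omits line/column info by default, so pure reformatting
            # or moving a function around does not change the signal.
            body = ast.dump(ast.Module(body=node.body, type_ignores=[]))
            behavior[node.name] = hashlib.sha256(body.encode()).hexdigest()[:12]
    return {"api": api, "behavior": behavior}

def compare(baseline: str, current: str) -> list[str]:
    """Emit signals, not verdicts: what changed in API shape or behavior."""
    old, new = snapshot(baseline), snapshot(current)
    signals = []
    for name in sorted(old["api"].keys() | new["api"].keys()):
        if name not in new["api"]:
            signals.append(f"removed: {old['api'][name]}")
        elif name not in old["api"]:
            signals.append(f"added: {new['api'][name]}")
        elif old["api"][name] != new["api"][name]:
            signals.append(f"signature changed: {old['api'][name]} -> {new['api'][name]}")
        elif old["behavior"][name] != new["behavior"][name]:
            signals.append(f"body changed: {name}")
    return signals

print(compare("def fetch(url): ...", "def fetch(url, retries): ..."))
# ['signature changed: fetch(url) -> fetch(url, retries)']
```

Hashing the dumped AST is deliberately shallow: it can't tell a rename from a rewrite, but it's fast, deterministic, and cheap to run on every change.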

At the same time, I see more and more LLM-based tools helping with PR reviews. Probabilistic changes reviewed by probabilistic tools feels a bit dangerous to me.

Curious how others here think about this:

– Do diffs still work well for AI-generated changes?
– How do you review large AI-assisted refactors today?

Comments

nuky•3w ago
Just to clarify: this isn't about replacing diffs or selling a tool.

I ran into this problem while reviewing AI-gen refactors and started thinking about whether we’re still reviewing the right things. Mostly curious how others approach this.

DiabloD3•3w ago
You know there are other kinds of diffs, right?

It's common to swap git's diff for something like difftastic, so formatting slop doesn't trigger false diff lines.

You're probably better off, FWIW, just avoiding LLMs. LLMs cannot produce working code, and they're the wrong tool for this. They're just predicting tokens around other tokens; they don't ascribe meaning to them, just statistical likelihood.

LLM weights themselves would be far more useful if we used them to indicate the statistical likelihood (i.e., perplexity) of code that has already been written: strange-looking code is likely to be buggy. But nobody has written this tool yet.
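
(For illustration, a rough sketch of what such a perplexity-over-code check could look like, using Hugging Face transformers. The model name and threshold are arbitrary placeholders, not a real tool:)

```python
# pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Salesforce/codegen-350M-mono"  # any causal code LM; this pick is arbitrary
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def token_surprisal(code: str):
    """Per-token negative log-likelihood; high values = 'strange looking' code."""
    ids = tok(code, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Each position predicts the next token, so shift by one.
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    nll = -logprobs.gather(1, ids[0, 1:].unsqueeze(1)).squeeze(1)
    return list(zip(tok.convert_ids_to_tokens(ids[0, 1:]), nll.tolist()))

for token, s in token_surprisal("def add(a, b):\n    return a - b\n"):
    if s > 4.0:  # arbitrary threshold for "suspicious"
        print(f"suspicious token {token!r}: surprisal {s:.1f}")
```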

nuky•3w ago
Yeah, difftastic and similar tools really do help a lot with formatting noise.

My question is slightly orthogonal though: even with a cleaner diff, I still find it hard to quickly tell whether public API or behavior changed, or whether logic just moved around.

Not really about LLMs as reviewers — more about whether there are useful deterministic signals above line-level diff.

veunes•3w ago
The tools exist, they're just rarely used in web dev. Look into ApiDiff or tools using Tree-sitter to compare function signatures. In the Rust/Go ecosystem, there are tools that scream in CI if the public contract changes. We need to bring that rigor into everyday AI-assisted dev. A diff should say "Function X now accepts null", not "line 42 changed"
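
In Python terms, a toy version of that "now accepts null" signal can be derived from default values alone, with stdlib ast standing in for Tree-sitter (the function names below are invented):

```python
import ast

def none_defaults(source: str) -> dict:
    """Map function name -> set of parameters whose default is None."""
    out = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            # Defaults align with the last N positional parameters.
            args = node.args.args[-len(node.args.defaults):] if node.args.defaults else []
            out[node.name] = {
                a.arg
                for a, d in zip(args, node.args.defaults)
                if isinstance(d, ast.Constant) and d.value is None
            }
    return out

old = none_defaults("def fetch(url, timeout): ...")
new = none_defaults("def fetch(url, timeout=None): ...")
for fn in old.keys() & new.keys():
    for param in new[fn] - old[fn]:
        print(f"Function {fn} now accepts None for {param!r}")
# -> Function fetch now accepts None for 'timeout'
```
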
nuky•3w ago
That's exactly why I thought the consequences of the widespread adoption of LLM tools should be made visible: it's going too far. I'm not saying LLMs are completely bad; after all, not all tools, even non-LLM ones, are 100% deterministic. At the same time, reckless and uncontrolled use of LLMs is gaining ground not only in coding but even in code analysis and review.
uhfraid•3w ago
> How do you review large AI-assisted refactors today?

just like any other patch, by reading it

nuky•3w ago
fair — that’s what I do as well)
veunes•3w ago
Reading works when you generate 50 lines a day. When AI generates 5,000 lines of refactoring in 30 seconds, linear reading becomes a bottleneck. Human attention doesn't scale like GPUs. Trying to "just read" machine-generated code is a sure path to burnout and missed vulnerabilities. We need change summarization tools, not just syntax highlighting
nuky•3w ago
This is exactly the gap I'm worried about. Human review still matters, but linear reading breaks down once the diff is mostly machine-generated noise. Summarizing what actually changed before reading feels like the only way to keep reviews sustainable.
uhfraid•2w ago
Whether you or someone/something else wrote it is irrelevant

You’re expected to have self-reviewed and to understand the changes before requesting review. You must be able to answer questions reviewers have about them. Someone must read the code. If not, why require a human review at all?

Not meeting this expectation = user ban in both the kernel and Chromium projects.

ccoreilly•3w ago
There are many approaches being discussed, and the right one will depend on the size of the task. You could just review a plan and assume the output is correct, but you need at least behavioural tests to confirm that what was built fulfills the requirements. You can split the plan further and further until the changes are small enough to be reviewable. Where I don’t see the benefit is in asking an agent to generate tests, as it tends to produce many useless unit tests that make reviewing more cumbersome. Writing the tests yourself (or defining them and letting an agent write the code), and not letting implementation agents change the tests, is also worth trying.

The truth is we’re all still experimenting and shovels of all sizes and forms are being built.

nuky•3w ago
That matches my experience too - tests and plans are still the backbone.

What I keep running into is the step before reading tests or code: when a change is large or mechanical, I’m mostly trying to answer "did behavior or API actually change, or is this mostly reshaping?" so I know how deep to go etc.

Agree we’re all still experimenting here.

csomar•3w ago
I'm working on a similar tool (https://codeinput.com/products/merge-conflicts/online-diff), specifically focusing on how to use the diff results. For semantic parsing, I think the best option available right now is Tree-sitter (https://tree-sitter.github.io/tree-sitter), which has decent WASM support. If this interests you, feel free to shoot me an email. I'm always looking to connect with other devs who want to discuss this stuff.
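
For anyone who hasn't tried it, the Python bindings look roughly like this. The exact constructor API has shifted between py-tree-sitter versions, so treat this as a sketch:

```python
# pip install tree-sitter tree-sitter-python  (assumed package names)
import tree_sitter_python
from tree_sitter import Language, Parser

# Recent py-tree-sitter accepts the Language directly; older versions
# used parser.set_language(...) instead.
parser = Parser(Language(tree_sitter_python.language()))
tree = parser.parse(b"def f(x):\n    return x + 1\n")

def walk(node, depth=0):
    # Prints node kinds such as function_definition, parameters, return_statement
    print("  " * depth + node.type)
    for child in node.named_children:
        walk(child, depth + 1)

walk(tree.root_node)
```
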
nuky•3w ago
Oh yeah, Tree-sitter is a great foundation for semantic structure.

What I'm exploring is more about what we do with that structure once someone (or something) starts generating thousands of changed lines: how to compress the change into signals we can actually reason about.

Thank you for sharing. I'm actually trying your tool right now - it looks really interesting. Happy to exchange thoughts.

csomar•3w ago
Feel free to shoot me an email (your email is not visible on your profile).
veunes•3w ago
I totally get the fear regarding probabilistic changes being reviewed by probabilistic tools. It's a trap. If we trust AI to write the code and then another AI to review it, we end up with perfectly functioning software that does precisely the wrong thing.

Diffs are still necessary, but they should act as a filter. If a diff is too complex for a human to parse in 5 minutes, it’s bad code, even if it runs. We need to force AI to write "atomically" and clearly; otherwise we're building legacy code that's unmaintainable without that same AI
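
That five-minute filter could even be enforced mechanically. A toy gate over git diff --numstat, with thresholds entirely made up:

```python
import subprocess

# Arbitrary budgets; tune to whatever "5 minutes of review" means for your team.
MAX_FILES, MAX_CHURN = 10, 400

def diff_too_big(base: str = "main") -> bool:
    """Return True if the working diff against `base` exceeds the review budget."""
    out = subprocess.run(
        ["git", "diff", "--numstat", base],
        capture_output=True, text=True, check=True,
    ).stdout
    files = churn = 0
    for line in out.splitlines():
        added, deleted, _path = line.split("\t", 2)
        files += 1
        # Binary files report "-" for both counts; treat them as zero churn.
        churn += int(added) if added != "-" else 0
        churn += int(deleted) if deleted != "-" else 0
    return files > MAX_FILES or churn > MAX_CHURN

if diff_too_big():
    print("Diff exceeds review budget: ask the agent to split it.")
```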

nuky•3w ago
Agreed - that trap is very real. The open question for me is what we do when atomic, five-minute-readable diffs are the right goal but not always realistically achievable. My gut says we need better deterministic signals to reduce noise before human review, not to replace it.