
Toward automated verification of unreviewed AI-generated code

https://peterlavigne.com/writing/verifying-ai-generated-code
34•peterlavigne•1d ago

Comments

jghn•2h ago
I do think that GenAI will lead to a rise in mutation testing, property testing, and fuzzing. But it's worth people keeping in mind that there are reasons why these aren't already ubiquitous. Among other issues, they can be computationally expensive, especially mutation testing.
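For readers unfamiliar with the technique: a property-based test checks invariants over many randomly generated inputs rather than a few hand-picked cases. A minimal sketch in plain Python with no library (the function name `check_sort_properties` is invented for illustration; real tools like Hypothesis add input shrinking and smarter generation):

```python
import random
from collections import Counter

def check_sort_properties(sort_fn, trials=1000):
    """Property-based test: check that invariants hold for many
    randomly generated inputs, not just hand-picked examples."""
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 50))]
        ys = sort_fn(xs)
        # Property 1: the output is non-decreasing.
        assert all(a <= b for a, b in zip(ys, ys[1:])), f"not sorted: {ys}"
        # Property 2: the output is a permutation of the input.
        assert Counter(xs) == Counter(ys), f"elements changed: {xs} -> {ys}"
    return True

check_sort_properties(sorted)  # the built-in satisfies both properties
```

The cost point above shows up even in this toy: every property runs against a thousand inputs, and production tools multiply that further with shrinking and generation strategies.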
tedivm•1h ago
While I understand why people want to skip code reviews, I think it is an absolute mistake at this point in time. AI coding assistants are great, but I've seen them fail or go down the wrong path enough times (even with things like spec-driven development) that I don't think it's reasonable to skip reviewing the code. Everything from development code paths left in production, to improper implementations, to security risks: all of those are just as likely to happen with an AI as with a human, and any team that lets humans push to production without a review would absolutely be ridiculed for it.

Again, I'm not opposed to AI coding. I know a lot of people are. I have multiple open source projects that were 100% created with AI assistants, and I wrote a blog post about it that you can find in my post history. I'm not anti-AI, but I do think that developers have some responsibility for the code they create with those tools.

Lerc•1h ago
I agree that it would be a mistake to use something like this where people depend upon specific behaviour of the software. But the only way we will get to the point where we can do this is by building things that don't quite work and then fixing the problems. Like AI models themselves: where they fail now is on problems they couldn't even begin to attempt a short time ago. Criticism often loses track of the fact that we are still developing this technology. Caution about premature deployment will always be fighting against people seeking a first-mover advantage. People need to stay aware of that without criticising the field itself.

There is a subset of things where it would be OK to do this right now: instances where the cost of utter failure is relatively low. For visual results the benchmark is often 'does it look right?' rather than 'is it strictly accurate?'

Ancalagon•1h ago
Even with mutation testing doesn’t this still require review of the testing code?
jryio•1h ago
Correct. Where did the engineering go? First it was in code files. Then it went to prompts, followed by context, and then agent harnesses. I think the engineering has gone into architecture and testing now.

We are simply shuffling cognitive and entropic complexity around and calling it intelligence. As you said, at the end of the day the engineer - like the pilot - is ultimately the responsible party at all stages of the journey.

Animats•1h ago
Mutation testing is a test of the test suite. The question is whether a change to the program is detected by the tests; if it isn't, the test suite lacks coverage. That's a high standard for test suites, and it requires heavy testing of the obvious.

But if you actually can specify what the program is supposed to do, this can work. It's appropriate where the task is hard to do but easy to specify. A file system or a database can be specified in terms of large arrays. Most of the complexity of a file system is in performance and reliability. What it's supposed to do from the API perspective isn't that complicated. The same can be said for garbage collectors, databases, and other complex systems that do something that's conceptually simple but hard to do right.

Probably not going to help with a web page user interface. If you had a spec for what it was supposed to do, you'd have the design.
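A toy sketch of the standard described above (the discount function and its deliberately weak tests are invented here): mutate one detail of the program and ask whether any test notices. A surviving mutant points at a coverage gap.

```python
def price_with_discount(cents: int) -> int:
    """Orders of $100 (10000 cents) or more get 10% off."""
    return cents - cents // 10 if cents >= 10000 else cents

def run_tests(fn) -> bool:
    """A weak suite: it never probes the boundary at exactly 10000."""
    try:
        assert fn(5000) == 5000     # small order, no discount
        assert fn(20000) == 18000   # large order, 10% off
        return True                 # suite passed
    except AssertionError:
        return False                # suite failed: the mutant was "killed"

# Mutant: '>=' changed to '>' -- a classic boundary mutation.
def mutant(cents: int) -> int:
    return cents - cents // 10 if cents > 10000 else cents

print(run_tests(price_with_discount))  # True: the original passes
print(run_tests(mutant))               # True: the mutant SURVIVES, so the
                                       # suite is blind to the 10000 boundary
```

Real mutation tools generate thousands of such mutants and rerun the whole suite for each one, which is where the computational cost mentioned upthread comes from.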

phailhaus•1h ago
Using FizzBuzz as your proxy for "unreviewed code" is extremely misleading. It has practically no complexity, it's completely self-contained and easy to verify. In any codebase of even modest complexity, the challenge shifts from "does this produce the correct outputs" to "is this going to let me grow the way I need it to in the future" and thornier questions like "does this have the performance characteristics that I need".
loloquwowndueo•1h ago
> is this going to let me grow the way I need it to in the future

This doesn’t matter in the age of AI - when you get a new requirement just tell the AI to fulfill it and the old requirements (perhaps backed by a decent test suite?) and let it figure out the details, up to and including totally trashing the old implementation and creating an entirely new one from scratch that matches all the requirements.

For performance, give the AI a benchmark and let it figure it out as well. You can create teams of agents each coming up with an implementation and killing the ones that don’t make the cut.

Or so goes the gospel in the age of AI. I'm being totally sarcastic; I don't believe in AI coding.

baq•1h ago
it isn't gospel, it's perspective. if you care about the code, it's obviously bonkers. if you care about the product... code doesn't matter - it's just a means to an end. there's an intersection of both views in places where code actually is the product - the foundational building blocks of today's computing software infrastructure like kernels, low level libraries, cryptography, etc. - but your typical 'uber for cat pictures' saas business cares about none of this.
Swizec•1h ago
> including totally trashing the old implementation and creating an entirely new one from scratch that matches all the requirements

Let me guess, you've never worked in a real production environment?

When your software supports 8, 9, 10 or more zeroes of revenue, "trash the old and create new" are just about the scariest words you can say. There are people relying on this code that you've never even heard of.

Really good post about why AI is a poor fit in software environments where nobody even knows the full requirements: https://www.linkedin.com/pulse/production-telemetry-spec-sur...

person22•48m ago
I work on a product that meets your criteria. We can't fix a class of defects because once we ship, customers will depend upon that behavior and changing is very expensive and takes years to deprecate and age out. So we are stuck with what we ship and need to be very careful about what we release.
fhd2•33m ago
That's why I find any effort to create specifications... cute. In brownfield software, more often than not, the code _is_ the specification.
empath75•24m ago
> When your software supports 8, 9, 10 or more zeroes of revenue, "trash the old and create new" are just about the scariest words you can say. There's people relying on this code that you've never even heard of.

Well, now it'll take them 5 minutes to rewrite their code to work around your change.

procaryote•13m ago
That will be after it broke, which costs money

Also: no

Swizec•11m ago
> Well, now it'll take them 5 minutes to rewrite their code to work around your change

You misunderstand. It will take them 2 years to retrain 5000 people on the new process across hundreds of locations. In some fields, whole new college-level certification courses will have to be created.

In my specific experience it’s just a few dozen (maybe 100) people doing the manual process on top of our software and it takes weeks for everyone to get used to any significant change.

We still have people using pages that we deprecated a year ago. Nobody can figure out who they are or what they're missing on the new pages we built.

builtbyzac•4m ago
The revenue-from-cold-start problem is harder than most AI posts acknowledge. I built products and ran distribution for 72 hours with a Claude Code agent — zero sales. Not because the agent couldn't do the work, but because the work (audience, credibility, trust) takes longer than 72 hours. The AI capability problem is mostly solved; the distribution and trust problem isn't.
sharkjacobs•1h ago
I'm having a hard time wrapping my head around how this can scale beyond trivial programs like simplified FizzBuzz.
hrmtst93837•1h ago
People treating this as a scaling problem are skipping the part where verification runs into undecidability fast.

Proving a small pure function is one thing, but once the code touches syscalls, a stateful network protocol, time, randomness, or messy I/O semantics, the work shifts from 'verify the program' to 'model the world well enough that the proof means anything,' and that is where the wheels come off.

jryio•1h ago
This is a naïve approach, not just because it uses FizzBuzz, but because it ignores the fundamental complexity of software as a system of abstractions. Testing often involves understanding these abstractions and testing for/against them.

Those of us with decades of experience who use coding agents for hours per day have learned that even with extended context engineering, these models are not magically covering more than 50% of the testing space.

If you asked your coding agent to develop a memory allocator, it would not also 'automatically verify' the memory allocator against all failure modes. It is your responsibility as an engineer to have long-term learning and regular contact with the world to inform the testing approach.

andai•1h ago
...in FizzBuzz
ventana•58m ago
I might be missing the point of the article, but from what I understand, the TL;DR is, "cover your code with tests", be it unit tests, functional tests, or mutants.

Each of these approaches is just fine and widely used, and none of them can be called "automated verification", which, if my understanding of the term is correct, is more about mathematical proof that the program works as expected.

The article mostly talks about automatic test generation.
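To make that distinction concrete, here is a toy sketch in Lean 4 (the `fizzbuzz` definition is invented here to match the article's example) of what verification in the mathematical-proof sense looks like: a test checks single points, while a proof covers every input at once.

```lean
-- Toy spec-and-proof sketch in Lean 4.
def fizzbuzz (n : Nat) : String :=
  if n % 15 = 0 then "FizzBuzz"
  else if n % 3 = 0 then "Fizz"
  else if n % 5 = 0 then "Buzz"
  else toString n

-- Tests are single points, checked by computation:
example : fizzbuzz 15 = "FizzBuzz" := rfl
example : fizzbuzz 7  = "7"        := rfl

-- Verification is a universally quantified statement, proved
-- once for every k rather than sampled:
example (k : Nat) : fizzbuzz (15 * k) = "FizzBuzz" := by
  simp [fizzbuzz, Nat.mul_mod_right]
```

The gap the thread keeps circling is that the universally quantified statement still has to be written by someone, and for real systems the spec is far harder to state than `fizzbuzz`.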

duskdozer•56m ago
So are we finally past the stage where people pretend they're actually reading any of the code their LLMs are dumping out?
fhd2•26m ago
Who's "we"?

I'd consider shipping LLM generated code without review risky. Far riskier than shipping human-generated code without review.

But it's arguably faster in the short run. Also cheaper.

So we have a risk vs speed to market / near term cost situation. Or in other words, a risk vs gain situation.

If you want higher gains, you typically accept more risk. Technically it's a weird decision to ship something that might break, that you don't understand. But depending on the business making that decision, their situation and strategy, it can absolutely make sense.

How to balance revenue, costs and risks is pretty much what companies do. So that's how I think about this kind of stuff. Is it a stupid risk to take for questionable gains in most situations? I'd say so. But it's not my call, and I don't have all the information. I can imagine it making sense for some.

empath75•24m ago
In a year people will be complaining about human written code going into production without LLM review.
otabdeveloper4•50m ago
This one is pretty easy!

Just write your business requirements in a clear, unambiguous and exhaustive manner using a formal specification language.

Bam, no coding required.

pron•49m ago
> The code must pass property-based tests

Who writes the tests? It can be ok to trust code that passes tests if you can trust the tests.

There are, however, other problems. I frequently see agents write code that's functionally correct but that they won't be able to evolve for long. That's also what happened with Anthropic's attempt to have agents write a C compiler. They had thousands of human-written tests, but at some point the agents couldn't get the software to converge. Fixing a bug created another.

rigorclaw•37m ago
does the cost of writing good property tests scale better than the cost of code review as the codebase grows? seems like the bottleneck just moves from reviewing code to reviewing specs.
morpheos137•33m ago
I think we need to approach provable code.
boombapoom•27m ago
production ready "fizz buzz" code. lol. I can't even continue typing this response.
phillipclapham•5m ago
There's a layer above this that's harder to automate: verifying that the architectural decision was right, not just the implementation. You can lint for correctness, run the tests, catch the bug classes. But "this should've been a stateless function, not a microservice" or "this abstraction is wrong for the problem", well that's not in the artifact. An agent can happily produce code that passes every automated check and still represent a fundamentally wrong design choice.

The thread's hitting on this with "who writes the tests" but I think it undersells the scope. You're not just shifting responsibility, you're also hitting a ceiling: test specs can verify behavior, not decisions. Worth thinking about what it'd even mean to verify the decision trail that produced the code, not just the code itself.
