Nope, it's about AI code reviewing AI, and how that's a good thing.
It's like everyone suddenly forgot the old adage: "code is a liability".
"We write code twice as fast!" just means "we create liability twice as fast!" It's not a good thing at all.
(Submitted title was "Software needs an independent auditor")
Before running LLM-generated code through yet more LLMs, you can run it through traditional static analysis (linters, SAST, auto-formatters). They aren't flashy, but they are deterministic: the same input produces the same findings every time.
Consistency is critical if you want to pass/fail a build on the results. Nobody wants a flaky code-review robot, for the same reason nobody wants flaky tests.
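As a concrete example, here is a minimal sketch of that kind of deterministic gate as a Git pre-commit hook. It assumes a "qlty check" subcommand that exits non-zero when it reports findings; check your installed CLI before relying on that.

    #!/usr/bin/env python3
    # .git/hooks/pre-commit -- block the commit if static analysis reports findings.
    # Assumption: "qlty check" exits non-zero when it finds issues.
    import subprocess
    import sys

    result = subprocess.run(["qlty", "check"], capture_output=True, text=True)
    if result.returncode != 0:
        print(result.stdout)
        print("Static analysis failed; commit blocked.", file=sys.stderr)
        sys.exit(1)  # a non-zero exit aborts the commit
    sys.exit(0)

Because the tool is deterministic, a hard fail here never flakes: re-running the hook on the same tree gives the same verdict.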
I imagine code review will evolve into a three-tier pyramid:
1. Static analysis (instant, consistent) — e.g. using Qlty CLI (https://github.com/qltysh/qlty) as a Claude Code or Git hook
2. LLM review — slower and nondeterministic, but able to catch semantic issues that static analysis misses
3. Human review
Commits must pass each level in succession before moving on to the next; a sketch of that gating follows below.
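Here is a minimal sketch of that succession, under the same assumptions as above. The llm_review function is a hypothetical placeholder you would wire to your own LLM provider; nothing about it is a real API.

    #!/usr/bin/env python3
    # Tiered review gate: each tier must pass before the next one runs.
    import subprocess
    import sys

    def run_static_analysis() -> bool:
        # Tier 1: deterministic, so it is safe to hard-fail on.
        # Assumption: "qlty check" exits non-zero on findings.
        return subprocess.run(["qlty", "check"]).returncode == 0

    def llm_review(diff: str) -> bool:
        # Tier 2: hypothetical placeholder; replace with a real call to your
        # LLM provider. Nondeterministic, so treat its failures as advisory.
        return True

    def main() -> int:
        if not run_static_analysis():
            print("Tier 1 (static analysis) failed; stopping.", file=sys.stderr)
            return 1
        diff = subprocess.run(["git", "diff", "--cached"],
                              capture_output=True, text=True).stdout
        if not llm_review(diff):
            print("Tier 2 (LLM review) flagged issues; stopping.", file=sys.stderr)
            return 1
        print("Tiers 1 and 2 passed; hand off to tier 3 (human review).")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

The point of the ordering is cost and reliability: the cheap, deterministic tier filters out mechanical problems before the expensive, flaky tiers ever run.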
Tier 3 is interesting too. My suspicion is that as the models get better, ~70% of PRs will be too minor to need human review, but the remaining 30% will still need it, because there will be genuine differences of opinion about the right way to make a complex change.
mooreds•3h ago
So, consider this hearsay that it works.