With our incentive structures, it doesn't seem like there's a great way to prevent this decline in quality. It's been hard for me to quantify _why_ "slop" is bad, but my gut feelings are that:
1. The codebase becomes unreadable to human engineers.
2. Having more bad examples in the codebase creates a negative feedback loop for future LLM changes. And maybe this is the new norm, but ->
3. Once enough slop gets in, future incidents/SEVs become increasingly more difficult to resolve.
(3) feels like the only reason with tangible business impact, and even if it did occur, I don't know if it would be possible to tie the slow response or lost revenue back to AI slop.
I've seen other posts lamenting the ills of vibe coding, but is there a concrete way to justify code quality in the era of LLMs? My thought is that it might be useful to track some code quality metric like cyclomatic complexity and see whether it correlates with regressions over time, but that feels kind of thin (and retroactive).
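For what it's worth, one cheap way to start collecting that kind of data without committing to a tool is to walk the AST and count branch points per function, then log the scores over time and compare them against regressions. This is only a rough McCabe-style approximation sketched in Python (a real setup would more likely use something like radon or lizard), and the node set counted here is an assumption, not anything from the original post:

    import ast
    import sys

    # Rough McCabe-style approximation: 1 + number of branch points.
    # Which nodes to count is a judgment call; this set is one reasonable guess.
    BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                    ast.BoolOp, ast.IfExp)

    def cyclomatic_complexity(source: str) -> dict:
        """Map each function name to an approximate complexity score."""
        scores = {}
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                branches = sum(isinstance(child, BRANCH_NODES)
                               for child in ast.walk(node))
                scores[node.name] = 1 + branches
        return scores

    if __name__ == "__main__":
        # Usage: python complexity.py path/to/file.py ...
        for path in sys.argv[1:]:
            with open(path) as f:
                for name, score in cyclomatic_complexity(f.read()).items():
                    print(f"{path}:{name} complexity={score}")

Run that in CI and dump the numbers somewhere queryable, and you at least have a time series to line up against your SEVs later, even if the correlation turns out to be weak.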
dvrp•20h ago
You can tell it's AI for a surprisingly large share of LLM outputs. If it feels like it's regurgitating what you or your team already know, it's slop.
That's tricky to apply, of course, but it's a tricky question. I don't think objective metrics (cyclomatic complexity, in your case) work here, because information is relative by nature: what's slop to one person is high-quality code or new information to another.