The issues I keep noticing:

- More "almost correct" code that causes subtle bugs
- The codebase has less consistent architecture
- More copy-pasted boilerplate that should be refactored
I know, maybe we shouldn't care about overall quality at all if only AI will be reading the code from here on. But that's a fairly distant version of the future. For now, we have to manage the speed/quality balance ourselves, with AI agents helping.
So I'm curious: what's your approach for teams trying to make AI tools work without sacrificing quality? Is there anything new you're doing, like special review processes, new metrics, training, or team guidelines?
mentalgear•17h ago
Yet it could be as simple as having a specialised model that acts as a code-quality checker, refactorer, or QA tester.
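To make that concrete, here's a minimal sketch of the reviewer-model idea wired into a pre-merge step, assuming an OpenAI-compatible API via the `openai` Python package; the model name and prompt are placeholders, not a specific product:

    # Hedged sketch: feed the current diff to a "reviewer" model.
    # Assumes the `openai` package and an OPENAI_API_KEY in the environment;
    # model name and system prompt are illustrative placeholders.
    import subprocess
    from openai import OpenAI

    client = OpenAI()

    def review_diff(base: str = "origin/main") -> str:
        """Ask a model to flag subtle bugs and inconsistencies in the current diff."""
        diff = subprocess.run(
            ["git", "diff", base, "--unified=3"],
            capture_output=True, text=True, check=True,
        ).stdout
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[
                {"role": "system",
                 "content": "You are a strict code reviewer. Flag likely bugs, "
                            "architectural inconsistencies, and copy-pasted code."},
                {"role": "user", "content": diff},
            ],
        )
        return resp.choices[0].message.content

    if __name__ == "__main__":
        print(review_diff())

Running something like this in CI (or as a pre-push hook) gives you a second pass focused purely on the quality issues the top post lists, separate from the model that wrote the code.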
Also, Claimify (Microsoft Research) could be interesting for isolating claims about what the code should do, and then following up with granular unit-test coverage for each claim.
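A rough sketch of that claim-to-test idea, assuming the claims have already been extracted; the normalize_email helper and the claims themselves are hypothetical examples, not actual Claimify output:

    # Hedged sketch of "one atomic claim -> one granular test" with pytest.
    # normalize_email() and the CLAIMS list are made up for illustration.
    import pytest

    def normalize_email(addr: str) -> str:
        """Example function under test (hypothetical)."""
        return addr.strip().lower()

    # Each atomic claim about the intended behaviour gets its own test case.
    CLAIMS = [
        ("lowercases the address", "USER@Example.COM", "user@example.com"),
        ("strips surrounding whitespace", "  a@b.co  ", "a@b.co"),
        ("leaves already-normalized input unchanged", "a@b.co", "a@b.co"),
    ]

    @pytest.mark.parametrize("claim,raw,expected", CLAIMS, ids=[c[0] for c in CLAIMS])
    def test_claim_holds(claim, raw, expected):
        assert normalize_email(raw) == expected

The point is that each extracted claim maps to exactly one named, failing-or-passing test, so coverage gaps show up as claims with no test rather than as vague "needs more tests" review comments.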
raydenvm•17h ago