Usually, when someone has done a lot of work, which we used to be able to measure roughly in lines of code, it would seem unfair to criticize them afterwards. A good development process with ticket discussions would ensure that nobody does a lot of work before there is agreement on the general approach. But now, with AI, this safeguard no longer works, partly because it's "too easy" to generate a large amount of code before any approach has been decided on.
So I'm asking myself, and now HN: is it OK to point out that an entire PR is garbage and should simply be discarded? How can I tell how much "brain juice" a co-worker has spent on it, and how attached they might be to it by now, if I don't even know whether they understand the code they submitted?
I have to admit that I hate reviewing huge PRs, and the problem with AI-generated code is that it often would have been much better to find and use an existing open-source library than to (re)generate a lot of code for the task. But how will I know this until I've actually taken the time to review and understand the big new proposed contribution? And even if I do spend the time to understand the code and the approach it implies, how will I know which parts reflect my co-worker's genuine opinion and intellect (which I'd be hesitant to criticize) and which are AI fluff I can rip apart without stepping on their toes? If the answer is "let's have a meeting", then I'd say the process has failed.
Not sure there is a right answer here, but I would love to hear people's takes on this.