If you are in a culture like this, you may as well just ship slop.
If management wants to break stuff, that's on them.
It really shouldn't be possible to have arguments in a PR over formatting.
There is very little accountability in the upper echelons these days (if there ever was) and less each day in our current "leadership" climate.
This implies that managers will do both of the following in response to the aforementioned breakage:
1. Understand that their own managerial policies are the root cause.
2. Not use you as a scapegoat.
And yet, if you had managers that were mentally and emotionally capable enough to do both of the above, you wouldn't be in this position to begin with.
Just because management asks doesn't mean they're siding with Mike.
1. I tried to ship crap and complained to my manager about being blocked. I was young, dumb, in a bad place, and generally an asshole.
2. I was the manager being told that some unreasonable idiot from X had blocked their progress. I was the unreasonable manager demanding that my people be unblocked. I had no context, a very bad prior relationship with the other party, and I was the asshole, because no prior bad-faith acts were actually behind the block: it was shitty code.
3. I was the manager being asked to help with unblocking. I asked to understand the issue and to actually try, based on the facts, to find a way towards a solution. My report had to refactor.
4. I was the one being asked. Luckily I had prior experience and did, this time, manage to not become the asshole.
I am glad I had the environments to learn.
Edit: Format.
Otherwise, agree-ish. There should be business practices in place for responsible AI use to avoid coworkers having to suffer from bad usage.
The suffering is self-inflicted for this particular person. The organization doesn’t value code review.
> I don’t blame Mike, I blame the system that forced him to do this.
Bending over backwards not to be the meanie is pointless. You're trying to stop him because the system doesn't really reward this kind of behavior, and you'll do Mike a favor if you help him understand that.
In the end, I spent a lot of time sitting down with Mike to explain these kinds of things, but I wasn't effective.
Also, LLMs now empower Mike to produce a 1600-line PR daily, leaving me to distinguish between "lazyslopped" PRs and actual PRs.
> Mike comes up with 1600 lines of code in a day instead of in a sprint
It seems like you do have an idea of at least one thing that AI changes.
So now, instead of reviewing 1600 lines of bad code every 2 weeks, you must review 1600 lines of bad code every day (while being told that 1600 lines of bad code every day is an improvement, because of just how much more bad code he's "efficiently" producing!). Scale and volume are the change.
Which is extremely relevant, as it dramatically increases the probability that other people will have to care about it.
This thinking that we must avoid blaming individuals for their own actions and instead divert all blame to an abstract system is getting a little out of control. The blameless post-mortem culture was a welcome change from toxic companies who were scapegoating hapless engineers for every little event, but it's starting to seem like the pendulum has swung too far the other way. Now I keep running into situations where one person's personal, intentional choices are clearly at the root of a situation but everyone is doing logical backflips to try to blame a "system" instead of acknowledging the obvious.
This can get really toxic when teams start doing the whole blameless dance during every conversation, but managers are silently moving to PIP or lay off the person who everyone knows is to blame for repeated problems. In my opinion, it's better to come out and be honest about what's happening than to do the blameless performance in public for feel-good points.
Why? Does it matter? Do you ask the same questions of people who don't use AI? I don't like using AI for code because I don't like the code it generates and having to go over it again and again until I like it, but I don't care how other people write code. I review the code that's in the PR, and if there's something I don't understand or agree with, I comment on the PR.
Other than the 1600-line PR that's hard to review, it feels like the author just wants to be in the way and control everything other people are doing.
Using AI adds a non-deterministic layer in between, and a lot of code ends up there that you probably didn't need.
The prompt is helpful to figure out what is needed and what isn't.
Also, we should not be submitting huge PRs in general. It is difficult to be thorough in such cases. Changes will be less well understood and more bugs will sneak their way into the code base.
It makes a lot more sense to review and workshop that into a better prompt than to refactor the derived code when there are foundational problems with the prompt.
Also, we do do this for human-generated code. It's just a far more tedious process of detective work since you often have to go the opposite direction and derive someone's understanding from the code. Especially for low effort PRs.
Ideally every PR would come with an intro that sells the PR and explains the high level approach. That way you can review the code with someone's objectives in mind, and you know when deviations from the objective are incidental bugs rather than misunderstandings.
…yes? If someone dumps a PR on me without any rationale I definitely want to understand their thought process about how they landed on this solution!
Also love the points during review! Transparency is key to understanding critical thinking when integrating LLM-assisted coding tools.
1. Companies that push and value slop velocity do not have all these bureaucratic merge policies. They change them or relax them, and a manager would just accept it without needing to ping the author.
2. If the author were truly on the high paladin horse of valuing the craft, he would not be working in such a place. Or he would be half-assing slop too while concentrating on writing proper code for his own projects, like most of us do when we end up in bs jobs.
It doesn't take a company policy for an AI-enabled engineer to start absolutely spewing slop. But it's instantly felt by whatever process exists downstream.
I think there's still a significant number of engineers who value the output of AI but at the same time put in the effort to avoid situations like the one the author is describing: reviewing the code, writing/generating appropriate tests (and reviewing those too). The thing is, those are the good ones. These are the ones you SHOULD promote, laud, and set as examples. The rest should be made examples of and held accountable.
I'd hope my usage of AI is along these lines. I'm sure I'm failing at some of the points, and I'm always trying to improve.
> I don't blame Mike
You should blame Mike.
The worst part is, this isn’t me speculatively catastrophizing. I’m just observing how my own organization’s culture has changed over the past couple of years.
It’s hitting the less senior team members hardest, too. They are generally less skilled at reading code and therefore less able to keep up with the rapid growth in code volume. They are also more likely to get assigned the (ever growing volume of) defect tickets so the more senior members can keep on vibecoding their way to glory.
This is outrageous regardless of AI. Clearly, process and technical barriers had to fail for this to even be possible. How does one commit a huge chunk of new code to an already-approved PR without triggering a re-review?
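For what it's worth, the mechanical fix for the re-review question is branch protection that dismisses stale approvals whenever new commits are pushed. A minimal sketch, assuming the team is on GitHub and has a token with admin rights; the owner, repo, branch, and token values are placeholders:

```python
# Sketch only: dismiss stale approvals so an already-approved PR can't quietly
# grow new code without another review. Assumes GitHub's branch-protection REST
# endpoint; OWNER/REPO/BRANCH/TOKEN are placeholders, not real values.
import requests

OWNER, REPO, BRANCH, TOKEN = "your-org", "your-repo", "main", "<token>"

resp = requests.put(
    f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {TOKEN}",
    },
    json={
        "required_status_checks": None,
        "enforce_admins": True,
        "required_pull_request_reviews": {
            "dismiss_stale_reviews": True,  # new commits invalidate earlier approvals
            "required_approving_review_count": 1,
        },
        "restrictions": None,
    },
)
resp.raise_for_status()
```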
But more importantly, in what world does a human think it is okay to be sneaky like this? Being able to communicate and trust one another is essential to the large scale collaboration we participate in as professional engineers. Violating that trust erodes all ability to maintain an effective team.
As an example, there was a case where some buttons needed a special highlight based on some flag, something that could be done in 4-5 lines of code or so (this was in Unreal Engine, where the UI is drawn each frame). But the PR instead added a value, set when the button was created, indicating whether the button would need to be highlighted, and this value was passed around all over the place, all the way up to the point where the data used to create the UI with the buttons was built. And since the UI could change that flag, the code also recreated the UI whenever the flag changed. And because the UI had state such as scrolling, selected items, etc., whenever the UI was recreated it saved the current state and restored it afterwards (together with adding storage for that state). Ultimately it worked, but it was far too much code for what it needed to do.
The kicker was that the modifications to pass around the value for the flag's state weren't even necessary (even ignoring the fact that the flag could have been checked directly during drawing), because a struct with configuration settings was already passed through the same paths and the value could have been added to that struct. Not that it would have saved the need to save/restore the UI state, though.
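For contrast, here is roughly what the 4-5 line alternative mentioned above looks like: read the flag where it is used, every frame. This is a deliberately generic sketch in Python with made-up names, not actual Unreal/Slate code:

```python
# Hypothetical immediate-mode draw pass; names are illustrative, not an Unreal API.
def draw_buttons(ui, buttons, config):
    highlight_on = config.special_highlight_enabled  # check the flag directly at draw time
    for button in buttons:
        ui.draw_button(button, highlighted=highlight_on and button.wants_highlight)
    # No value threaded through constructors, no UI recreation, no state save/restore.
```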
That’s not just a process error. At some point you just have to feed back to the right person that someone isn’t up to the task.
Oh, you should definitely blame Mike for this. It’s like blaming the system when someone in the kitchen spits in a customer’s food. Working with people like this is horrible because you know they don’t mind lying, cheating, and deceiving.
If it has your name on it, you're accountable for it to your peers. If it has our name on it as a company, we're accountable for it to our users. AI doesn't change that.
If you use AI as a Heads-up Display you can't make a giant scroll of every text change you accepted.
Both Mike and the manager are cargo-culting the PR process too. Code review is what you do when you believe it's worth losing velocity in order for code to pass through the bottleneck of two human brains instead of one.
LLMs are about gaining velocity by passing less code through human brains.
Now creating a 1600-LOC PR takes about ten minutes; reviewing it takes at the very least an hour. Mike submits a bunch of PRs, and the rest of the team tries to review them to prevent the slop from causing an outage at night or blowing up the app. Mike is a hero: he really embraced AI, he leveraged it to get 100x productivity.
This works for a while, until everyone realizes that Mike gets the praise while they get reprimanded for not shipping their features fast enough. After a couple of these sour experiences, other developers follow suit and embrace the slop. Now there is nobody to stop the train wreck. The ones who really cared left; the ones who cared at least a little gave up and churn out slop.
What really bugs me is that today, it is easier than ever to do this (even the LLM can do this!) and people still don't do it.
I automatically block PRs with LLM-generated summaries, commit messages, documentation, etc.
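I'm not aware of a reliable way to detect LLM-generated text, so in practice "automatically" can only mean a crude gate. A deliberately naive sketch of such a CI check, with a phrase list that is purely a guess rather than a proven heuristic:

```python
# Naive sketch of a CI gate that rejects PR descriptions containing common
# LLM boilerplate. The phrase list is a guess; expect false positives/negatives.
import os
import sys

BOILERPLATE_PHRASES = [
    "as an ai language model",
    "this pull request introduces",
    "here's a summary of the changes",
]

def looks_generated(text: str) -> bool:
    lowered = text.lower()
    return any(phrase in lowered for phrase in BOILERPLATE_PHRASES)

if __name__ == "__main__":
    pr_body = os.environ.get("PR_BODY", "")  # assumed to be exported by the CI job
    if looks_generated(pr_body):
        print("PR description looks auto-generated; please write it yourself.")
        sys.exit(1)
```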