It's hard to respond to that; I'm genuinely stumped. As I explain in the post, this is me trying to write something reusable to share with people who do that to a team lead.
I mean just stop there. It isn't a good MR.
If I give a fun extreme example, it doesn't mean there's no subtle problem. Where do you draw the line? Maybe I guessed 40 but it should really be 80. Maybe they send in 400 lines. 200?
Also, 1000 is maybe fine if it's a one-shot script that only has to work once on non-prod.
That must be frustrating for OSS maintainers, especially when contributions can meaningfully move the needle on getting jobs, clients, etc.
Definitely makes sense to have rules in place to help dissuade it, but this brave new world isn't going away.
I'm old enough to remember the global meltdown in 2000ish and 2008. Oh, and 1991 in the UK - lol, that's when I graduated. Take your money out of AI and stuff it under the mattress (gold if you're a magpie, or blue-chip stocks).
I have actually just splashed out on a fairly handy GPU to stuff into one of our backup boxes at work. The box has gobs of RAM and a fairly useful pair of CPUs and sits idle during the day.
AI via LLMs is a thing, but it isn't worth silly money, and I think a wind of change is on the way.
This is a hardware-driven bubble that will mutate into the next big computing hype. Server farms have to be doing something because they cost money, so whatever the next big hype is, big tech will jump on the bandwagon.
Hype it 'til you make it.
Which makes me wonder what the point of even taking PRs is: the reviewer could just run the AI themselves and do the same review, without having to go through the process of leaving comments and waiting for the submitter to resolve them.
I'm imagining a funny possible outcome of this: code linters/formatters get abandoned so personal style quirks can shine through, making code look visibly "not AI". If the quirks are consistent, that could also hint against it being faked.
OSS maintainers may need some kind of response like the one I've written here that can be strategically dropped on the worst "bad AI" contributions. I certainly wrote this for myself to make my job easier, anyway.
For a couple of bucks I can drown my own repos in low-quality slop, so I don't need some well-wishers to do it for me.
As a matter of fact, I have yet to see an OSS maintainer who accepts AI-generated slop MRs.
https://news.itsfoss.com/curl-ai-slop/
https://www.theregister.com/2024/12/10/ai_slop_bug_reports/
https://biggo.com/news/202508220113_Ghostty_Requires_AI_Disc...
If you can't code, don't code, and don't try to contribute to OSS projects. Claude or whatever trash one uses to produce this outburst of garbage is not a substitute for skills. Don't waste maintainers' time and attention.
It's not 100% automated. The worst I've seen so far is 98% AI-generated code from a real person. They write and submit the MR comment themselves.
Just give feedback or decline the PR
Most projects have a CONTRIBUTING.md file or something similar with their specific guidelines; one should really start from there (see the example below).
Also, a fire-and-forget merge request is probably the least welcome way to contribute to something.
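For illustration, an AI-disclosure clause in the spirit of what Ghostty adopted (linked elsewhere in this thread) might read something like this; the wording here is mine, not any real project's:

    ## AI-assisted contributions
    If any part of this MR was generated with an AI tool, say so in
    the description and name the tool. Confirm that you have read,
    tested, and can explain every line you are submitting.
    Undisclosed AI-generated changes will be closed without review.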
This is a repo of good beginner OSS projects to contribute to
Good to note: contributing to code projects also happens in closed-source, big orgs, with the same tools. My cases, for example, are from government, and our projects usually aren't open source.
Just say that you don't want my code, better yet just silently reject it.
I don't want a moral referendum about how my code shall be the manna by which all future reviewers and practitioners of the art shall sup and become enlightened. Group education isn't my job as someone submitting a PR to fix some trivial shit. Sometimes it doesn't need to be smart; sometimes it doesn't need to be a learning experience by which we all grow.
Throw out the garbage, keep the good stuff, and appreciate the attention to the project. Be happy that someone wants to help.
The garbage, in the case of an AI-generated PR, is all of it. I will happily reject all of your slop, and every future contribution from you, if you can't follow the project's contribution guidelines.
If you don't like that, that's what forking is for.
> Just say that you don't want my code, better yet just silently reject it.
Not only do I not want it, but I have some ideas about what you can do with it and where this code can go. Also, the code is not yours. I can generate the same amount of garbage myself using the same tools, and it will also not be mine, yet I stop myself from doing it, because more garbage is the last thing this world needs.
> Be happy that someone wants to help
How full of yourself must you be to consider pointing an LLM at a repo as helping.
It must have been very difficult to point Claude at a repo and let the trash code go brrr, something every person with a pulse can do.
And I shudder at the entitlement of thinking that OSS maintainers have to thank you for your godly prompt and zero effort.
I promise you that you have merged PRs with AI generated code and/or comments. You just couldn't tell because the contributor wasn't a lazy idiot and actually thought about how to use the tools at their disposal to do good work like a professional.
I swear if we left things to you people, we'd all still be programming in assembler. Copilot generates most of my commit message drafts now. I end up accepting about half of them without needing any modifications. Sometimes they're shit. That's why I'm the developer and author. I always make sure whatever PR I submit in my name is something I'm proud to stand behind. But sorry that you don't want me on your project for that sin of the tools I chose to use for my work.
It's bold to call yourself an author when you merely edited something that somebody else wrote for you. It's like me going to work and having all my PRs done by three North Koreans in a trench coat. I am not the author then, and neither are you when using an LLM.
> sin of the tools I chose to use for my work.
Your editor is a tool, your terminal is a tool, agentic LLMs are somebody else doing your work instead of you (badly, very very badly).
Maybe not everyone is so enthusiastic to reduce their skill level while simultaneously lining the pockets of our corporate overlords? Have you considered that you are in the wrong, and that's why you are irrationally triggered by people questioning your stance?
The science is clear: you start cognitively declining the more you use LLMs:
https://publichealthpolicyjournal.com/mit-study-finds-artifi...
The current research on experienced developers (be mindful this study is on experienced developers, so if your mileage varies, take a second look at the word "experienced") shows that they also complete tasks roughly 19% more slowly while using LLMs.
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
So, you are deskilling yourself while at the same time producing less than before, wasting money, and burning resources. For what?
I can already see the business opportunity for those of us who didn't get our brains rotted by overuse of LLMs: fixing the messes created by "engineers" using them.
And citing upvotes? Without controlling for the fact that votes are also a function of time since posting and depth in the reply thread?
It doesn't sound like I'd like your dev culture. This is also explicitly part of the work objectives for me and my team.
> Sometimes it doesn't need to be smart
I prefer code that is dirt simple and stupid actually.
> Just say that you don't want my code, better yet just silently reject it.
These decision points are orthogonal: the author identifies a social contract wherein a contributor must understand the change-set they submit in order for it to be a viable candidate. Determining whether the submitted change-set is applicable/appropriate/correct, and how to provide feedback to the contributor, is a subsequent activity.
I hope to get automated CR bots in my org working soon. But with 2025 capabilities it should definitely only be brief feedback that people can choose to ignore - like an average style checker.
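A minimal sketch of the shape I have in mind, assuming GitHub's REST API; the owner/repo values are placeholders, and the feedback string would come from whatever model you wire up:

    # Hypothetical sketch: post AI review output as a non-blocking
    # "COMMENT" review rather than "REQUEST_CHANGES", so people can
    # ignore it like an average style checker.
    import os

    import requests

    def post_advisory_review(owner: str, repo: str,
                             pr_number: int, feedback: str) -> None:
        url = (f"https://api.github.com/repos/"
               f"{owner}/{repo}/pulls/{pr_number}/reviews")
        resp = requests.post(
            url,
            headers={
                "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
                "Accept": "application/vnd.github+json",
            },
            json={
                # "COMMENT" leaves feedback without approving or
                # blocking, so the bot can never hold a merge hostage.
                "event": "COMMENT",
                "body": "Automated review (advisory, feel free to "
                        "ignore):\n\n" + feedback,
            },
            timeout=30,
        )
        resp.raise_for_status()

The point of "event": "COMMENT" is that the bot's opinion never gates the merge, which keeps it at style-checker authority.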
The author isn't even condemning all AI-generated MRs, only the ones that meet a few conditions.
I'm curious to hear what rationale you partly disagree with.
Ok, maybe I’m in a bubble, and my job is only coding-adjacent, but I’ve literally never heard a PR called an MR until today. Is this a new thing?
I'm more familiar with that term because I use GitLab more than GitHub.
Not only do I use GitLab more often in my org, but I genuinely think the term itself is more precise. I can be a bit of a stickler for vocabulary.
1. I wonder if it would be more effective or land better if it didn’t mention AI at all. You’re not rejecting because of the tools they used; you’re rejecting because it’s a poor request.
2. I’d suggest an addition along the lines of “by the way, since I’m not seeing anything here that would make me think that there might just be some misunderstandings, this request makes me trust you, individually, a little bit less than I did before, and that will be reflected in how I address future requests from you. Happy to chat about it though. Please remember that trust is what makes all of this work.”
Look at this - https://github.com/n4si/kubernetes/pull/1 - devin-ai-integration wants to merge 10,000 commits with 5,000+ files changed. This is not some random user of Devin AI; this is part of a paid promotional video demonstrating "a nice PR" (quote from https://youtu.be/OIomeLQmf-4?t=219) that solves a "real life problem" (sic). Now suppose users paid $$$ according to a formula based on the cognitive load added (both for humans and for CI). A single PR like this would cost as much as "oops, Netlify just sent me a $100k bill", but this time without a refund.
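A back-of-the-envelope sketch of that billing idea in Python; the load formula and weights are entirely made up for illustration, and only the commit/file counts come from the Devin PR above:

    # Hypothetical "pay per cognitive load" billing. The weights are
    # invented; tune to taste.
    def review_cost_usd(commits: int, files_changed: int,
                        dollars_per_load_unit: float = 1.0) -> float:
        # Crude stand-in for reviewer + CI burden: every file touched
        # and every commit to wade through adds load.
        load = files_changed * 2.0 + commits * 0.5
        return load * dollars_per_load_unit

    # The Devin PR above: 10,000 commits, 5,000+ files changed.
    print(review_cost_usd(commits=10_000, files_changed=5_000))  # 15000.0

Even with these gentle made-up weights, a single PR like that bills five figures before anyone has read a line of it.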