Should just be an instant perma-ban (along with closure, obviously).
A lot of the time open source PRs are very strategic pieces of code that do not introduce regressions; an LLM does not necessarily know or care, and someone vibe coding might not know the project's expectations. I guess instead of / aside from a Code of Conduct, we need a sort of "Expectation of Code" type of document that covers the project's expectations.
Are you talking about some agent that is specific for writing FOSS code or something? Otherwise I don't see why we'd want all agents to act like this.
As always, it's the responsibility of the contributor to understand both the code base and the contributing process before they attempt to contribute. If they don't, they might receive push-back, or have their contribution deleted, and that's pretty much expected: you're essentially spamming if you don't understand the thing you're trying to "help" with.
Understanding this before contributing is part of understanding how FOSS collaboration works. Some projects have very strict guidelines, others very lax ones, and it's up to you to figure out what exactly they expect from contributors.
I know there will probably be a whole host of people from non-English-speaking countries who will complain that they only use AI to translate because English is not their first (or maybe even second) language. To those I will just say: I would much rather read your non-native English, knowing you put thought and care into what you wrote, than read an AI's (poor) interpretation of what you hoped to convey.
(But also, for the majority of people, old-fashioned Google Translate works great.)
(Edit: it's actually an explicit carveout)
"LLM Code Contributions to Official Projects" would read exactly the same if it just said "Code Contributions to Official Projects": Write concise PRs, test your code, explain your changes and handle review feedback. None of this is different whether the code is written manually or with an LLM. Just looks like a long virtue signaling post.
The point, and the problem, is volume. Doing it manually has always imposed a de facto volume limit which LLMs have effectively removed. Which I understand to be the problem these types of posts and policies are designed to address.
love the "AI" in quotes
1) we accept good quality LLM code
2) we DO NOT accept LLM-generated human interaction, including PR explanations
3) your PR description must explain the change well enough
Which, summed together, amount to far more than "no shitty code". It's rather: no shitty code, that YOU understand.
There is no such thing as LLM code. Code is code; the same standards have always applied no matter who or what wrote it. If you paid an Indian guy to type out the PR for you 10 years ago, but it was submitted under your name, it's still your responsibility.
The quality of "does the submitter understand the code" is not reflected in the text of the diff itself, yet is extremely important for good contributions.
I can see how frustrating it is to wade through those; they are distracting and take time away from actually getting things fixed.
1. Fully human-written explanation of the issue with all the info I can add
2. As an attachment to the bug (not a PR), explicitly noted as such, an AI slop fix and a note that it makes my symptom go away.
I've been on the receiving end of one bug report in this format and I thought it was pretty helpful. Even though the AI fix was garbage, the fact that the patch made the bug go away was useful signal.
One more reason to support the project!!
Sort of related: Plex doesn't have a desktop music app, and the PlexAmp iOS app is good but meh. So I spent the weekend vibe coding my own Plex music apps (macOS and iOS), and I have been absolutely blown away by what I was able to make. I'm sure the code quality is terrible, and I'm not sure a human would be able to jump in there and do anything, but they are already the apps I use day-to-day for music.
That said I understand calling it out specifically. I like how they wrote this.
Related:
> https://news.ycombinator.com/item?id=46313297
> https://simonwillison.net/2025/Dec/18/code-proven-to-work/
> Your job is to deliver code you have proven to work
>I'm of the opinion if people can tell you are using an LLM you are using it wrong.
They continued:
>It's still expected that you fully understand any patch you submit. I think if you use an LLM to help you nobody would complain or really notice, but if you blindly submit an LLM authored patch without understanding how it works people will get frustrated with you very quickly.
<https://lists.wikimedia.org/hyperkitty/list/wikitech-l@lists...>
hamdingers•1h ago
I would like to see this more. As a heavy user of LLMs I still write 100% of my own communication. Do not send me something an LLM wrote, if I wanted to read LLM outputs, I would ask an LLM.
gonzalohm•1h ago
I only use LLMs to write text/communication, because that's the part of my work I don't like.
adastra22•1h ago
But that is translation, not “please generate a pull request message for these changes.”
embedding-shape•50m ago
Using Google Translate probably means you're using a language model behind the scenes anyway. The Transformer was initially researched and published as an improvement for machine translation, which eventually led to LLMs. Using them for translation is pretty much exactly what they excel at :)
habinero•44m ago
I've done this kind of thing even when I think it's likely they speak English. (I speak zero Japanese, for example.) It's just polite, and you never know who's going to be reading it first.
> Google翻訳を使用しました。問題が発生した場合はお詫び申し上げます。貴社のウェブサイトにコンピュータセキュリティ上の問題が見つかりました。詳細は下記をご覧ください。ありがとうございます。
> (I used Google Translate; my apologies for any errors.) I have found a computer security issue on your website. Here are details. Thank you.
mort96•1h ago
Same with grammar fixes. If you don't know the language, why are you submitting grammar changes??
mort96•55m ago
I have read text where people who aren't very good at the language try to "fix it up" by feeding it through a chat bot. It's horrible. It's incredibly obvious that they didn't write the text, the tone is totally off, it's full of obnoxious ChatGPT-isms, etc.
Just do your best. It's fine. Don't subject your collaborators to shitty chat bot output.
pessimizer•44m ago
If you think that every language level is always sufficient for every task (a fluency truther?), then you should agree that somebody who writes an email in a language they are not confident in, puts it through an LLM, and decides the result explains the idea they were trying to convey better than they had managed to, is always correct in that assessment. Why are you second-guessing them and indirectly criticizing their language skills?
mort96•32m ago
I have no idea what you're talking about with regard to being a "fluency truther", I think you're putting words into my mouth.
habinero•17m ago
The times I've had to communicate IRL in a language I don't speak well, I do my best to speak slowly and enunciate and trust they'll try their best to figure it out. It's usually pretty obvious what you're asking lol. (Also a lot of people just reply with "Can I help you?" in English lol)
I've occasionally had to email sites in languages I don't speak (to tell them about malware or whatever), and I write up a message in the simplest, most basic English I can. I run that through machine translation, prefix the result with "This was generated by Google Translate", and include both in the email.
Just do your best to communicate intent and meaning, and don't worry about sounding like an idiot.
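The workflow described above can be sketched as a tiny helper: write the report in plain English, obtain a machine translation however you like, then assemble an email that flags the translated portion as machine-generated and keeps the English original alongside it. The function name, layout, and example strings here are illustrative, not taken from any actual tooling.

```python
def compose_bilingual_report(english_body: str, translated_body: str) -> str:
    """Build an email body that leads with the machine translation,
    clearly flagged as such, and appends the original English below it."""
    return "\n\n".join([
        "(This was generated by Google Translate)",
        translated_body,
        "--- Original English ---",
        english_body,
    ])

# Hypothetical usage: the Japanese string stands in for whatever
# your machine-translation step produced.
report = compose_bilingual_report(
    "I have found a computer security issue on your website. "
    "Here are details. Thank you.",
    "貴社のウェブサイトにコンピュータセキュリティ上の問題が見つかりました。"
    "詳細は下記をご覧ください。ありがとうございます。",
)
print(report)
```

Leading with the translation (plus the disclaimer) means the recipient sees their own language first, while the English original lets them double-check anything the translation mangled.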
gllmariuty•33m ago
Like in that joke with the mechanic who demands $100 for hitting the car once with his wrench.