*"Automated" as in bots and "AI submissions" as in ai-generated code
If the problem is too many submissions, that would suggest there needs to be structures in place to manage that.
Perhaps projects receiving lage quanties of updates need triage teams. I suspect most of the submissions are done in good faith.
I can see some people choosing to avoid AI due to the possibility of legal issues. I'm doubtful of the likelihood of such problems, but some people favour eliminating all possibly over minimizing likelihood. The philosopher in me feels like people who think they have eliminated the possibility of something just haven't thought about it enough.
This ignores the fact that many open source projects do not have the resources to dedicate to a large number of contributions. A side effect of LLM generated code is probably going to be a lot of code. I think this is going to be an issue that is not dependent on the overall quality of the code.
With AI you're going to get job hunters automating PRs for big name projects so they can stick the contributions in their resume.
I use the term algorithmic because I think it is stronger than "AI lol". I note they use terms like AI code generator in the policy, which might be just as strong but looks to me unlikely to become a useful legal term (it's hardly "a man on the Clapham omnibus").
They finish with this rather reasonable flourish:
"The policy we set now must be for today, and be open to revision. It's best to start strict and safe, then relax."
No doubt they do get a load of slop but they seem to want to close the legal angles down first, and attribution seems a fair place to start off. This playbook looks way better than curl's.
1. My C# code compiled just fine and even ran, but it was convinced that I was missing a closing brace on a lambda near where the exception was occurring. The diff was ... putting the existing brace on a new line. Confidently stated that was the problem and declared it fixed.
2. It did figure out that an unexpected type was being seen, and implemented a pathway that allowed it to get to the next error, but didn't look into why that type had gotten there; that was the actual bug, not the unhandled type. So it "fixed" it, but just kicked the can down the road.
3. When figuring out the issue, it just looked at the stack trace. That was it. It was running the compiler itself; it could've just embedded some debug code (like I did) and worked out what the actual issue was, but it didn't even try. The exception was just a NotSupportedException with no extra details to work off of, so adding just a crumb of context would have let you solve the issue.
Now, is this the simplest emulator you could throw AI at? No, not at all. But neither is qemu. I'm thoroughly unconvinced that current tools could provide real value on codebases like these. I'm bullish on them for the future, and I use GenAI constantly, but this ain't a viable use case today.
Why is it archaic if it works? I get there might be other ways to do patch sharing and discussion but what exactly is your problem with email as a transport?
You might as well describe voice and ears as archaic!
In addition to a policy to reject contributions from AI, I think it may make sense to point out places where AI generated content can be used. For example - how much of QEMU project's (copious) CI setup is really stuff that is critical content to protect? What about ever-more interesting test cases or environments that could be enabled? Something like "contribute those things here instead, and make judicious use of AI there, with these kinds of guard rails..."
I think that particular brand of risk makes sense for this particular project, and the authors don't seem particularly negative toward GenAI as a concept, just going through a "one way door" with it.
Better code and "AI-assisted coding" are not mutually exclusive.
My comment didn't say anything about the output of AI being fair use or not, rather that fair use (no matter where you are getting material from) doesn't ipso facto mean that copy-paste is considered okay.
Every employer I ever had discouraged copy and paste from anywhere as a blanket rule.
At least, that had been the norm, before the LLM takeover. Obviously, organizations that use AI now for writing code are plagiarizing left and right.
In addition to the Structure, Sequence and Organization claims, the original filing included a claim for copyright violation on 9 identical lines of code in rangeCheck(). This claim was dropped after the judge asked Oracle to reduce the number of claims, which forced Oracle to pare down to their strongest claims.
I think the way many developers use StackOverflow suggests otherwise.
They redistribute the material under the CC BY-SA 4.0 license. https://creativecommons.org/licenses/by-sa/4.0/
This allows visitors to use the material, with attribution. One can, of course, use the ideas in a SO answer to develop one's own solution.
QEMU is (mostly) GPL 2.0 licensed, meaning (most) code contributions need to be GPL 2.0 compatible [0]. Let's say, hypothetically, there's a code contribution added by some patch involving gen AI code which is derived/memorised/copied from non-GPL compatible code [1]. Then, hypothetically, a legal case sets precedent that gen AI FOSS code must re-apply the license of the original derived/memorised/copied code. QEMU maintainers would probably need to roll back all those incompatible code contributions. After some time, those code contributions could have ended up with downstream callers which also need to be rewritten (even in CI code).
It might be possible to first say "only CI code which is clearly labelled as 'DO NOT RE-USE: AI' or some such". But the maintainers would still need to go through and rewrite those parts of the CI code if this hypothetical plays out. Plus it adds extra work to reviews and merge processes etc.
It's just less work and less drama for everyone involved to say "no thank you (for now)".
----
caveat: IANAL, and licensing is not my specific expertise (but i would quite like it to be one day)
[0]: https://github.com/qemu/qemu/blob/master/LICENSE
[1]: e.g. No license / MPL / Apache / Artistic / Creative Commons https://www.gnu.org/licenses/license-list.html#NonFreeSoftwa...
We do that in corporate environments too. "I don't like this" -> "let me see what lawyers say" -> "a-ha, you can't do it because legal says it's a risk".
I'm very much an old man shouting at clouds about this stuff. I don't want to review code the author doesn't understand and I don't want to merge code neither of us understands.
This really bothers me. I've had people ask me to do some task except they get AI to provide instructions on how to do the task and send me the instructions, rather than saying "Hey can you please do X". It's insulting.
These are the same people who think that "learning to code" is a translation issue they don't have time for, as opposed to experience they don't have.
This is very, very germane and a very quotable line. And these people have been around from long before LLMs appeared. These are the people who dash off an incomplete idea on Friday afternoon and expect to see a finished product in production by next Tuesday, latest. They have no self-awareness of how much context and disambiguation is needed to go from "idea in my head" to working, deterministic software that drives something like a process change in a business.
I know several people like this, and it seems they feel like they have god powers now - and that they alone can communicate with "the AI" in this way that is simply unreachable by the rest of the peasants.
The code example was AI generated. I couldn't find a single line of code anywhere in any codebase. 0 examples on GitHub.
And of course it didn't work.
But it sent me on a wild goose chase because I trusted this person to give me a valuable insight. It pisses me off so much.
A far too common trap people fall into is the fallacy of "your job is easy as all you have to do is <insert trivialization here>, but my job is hard because ..."
Statistically generated text (token) responses constructed by LLMs to simplistic queries are an accelerant to the self-aggrandizing problem.
In free software, though, these kinds of nonsense suggestions have always happened, way before AI. Just look at any project mailing list.
It is expected that any new suggestion will encounter some resistance, and the new contributor themselves should be aware of that. For serious projects specifically, the levels of skepticism are usually way higher than in corporations, and that's healthy and desirable.
AI further encourages the problem in DevOps/Systems Engineering/SRE where someone comes to you and says "hey can you do this for me", having already come up with the solution, instead of giving you the problem: "hey can you help me accomplish this"... AI gives them solutions, which are more steps removed from what really needs to be done and harder to untangle.
AI has knowledge, but it doesn't have taste. Especially when it doesn't have all of the context a person with experience has, it either has bad taste in solutions or simply no taste at all, with the additional problem that it makes it much easier for people to do things.
Permissions on what people have access to read and permission to change are now going to have to be more restricted, because not only are we dealing with folks who have limited experience with permissions, we now have them empowered by AI to do more things which are less advisable.
You can’t dismiss it out of hand (especially with it coming from up the chain) but it takes no time at all to generate by someone who knows nothing about the problem space (or worse, just enough to be dangerous) and it could take hours or more to debunk/disprove the suggestion.
I don’t know what to call this? Cognitive DDOS? Amplified Plausibility Attack? There should be a name for it and it should be ridiculed.
I would find it very insulting if someone did this to me, for sure, as well as a huge waste of my time.
On the other hand I've also worked with some very intransigent developers who've actively fought against things they simply didn't want to do on flimsy technical grounds, knowing it couldn't be properly challenged by the requester.
On yet another hand, I've also been subordinate to people with a small amount of technical knowledge -- or a small amount of knowledge about a specific problem -- who'll do the exact same thing without ChatGPT: fire a bunch of mid-wit ideas downstream that you have already thought about, but you then need to spend a bunch of time explaining why their hot-takes aren't good. Or the CEO of a small digital agency I worked at circa 2004 asking us if we'd ever considered using CSS for our projects (which were of course CSS heavy).
Sometimes it's fun reverse-engineering the directions back into the various forum, Stack Overflow, and documentation fragments, and pointing out how the AI assembled similar things into something incorrect.
The author is me and my silicon buddy. We understand this stuff.
---
LLM-Generated Contribution Policy
Color is a library full of complex math and subtle decisions (some of them possibly even wrong). It is extremely important that any issues or pull requests be well understood by the submitter and that, especially for pull requests, the developer can attest to the Developer Certificate of Origin for each pull request (see LICENCE).
If LLM assistance is used in writing pull requests, this must be documented in the commit message and pull request. If there is evidence of LLM assistance without such declaration, the pull request will be declined.
Any contribution (bug, feature request, or pull request) that uses unreviewed LLM output will be rejected.
---
I am also adding this to my `SECURITY.md` entries:
---
LLM-Generated Security Report Policy
Absolutely no security reports will be accepted that have been generated by LLM agents.
---
As it's mostly just me, I'm trying to strike a balance, but my preference is against LLM generated contributions.
My high-level work is absolutely impossible to delegate to AI, but AI really helps with tedious or low-stakes incidental tasks. The other day I asked Claude Code to wire up some graphs and outlier analysis for some database benchmark result CSVs. Something conceptually easy, but it takes a fair bit of time to figure out libraries and get everything hooked up unless you're already an expert at CSV processing.
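The sort of glue it ends up writing is roughly this shape (a minimal sketch, not the actual output; pandas/matplotlib and a made-up results.csv with run/latency_ms columns are assumed):

    # Sketch of the CSV -> graphs + outlier pass described above.
    # Assumes a hypothetical results.csv with "run" and "latency_ms" columns.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("results.csv")

    # Flag outliers with a simple z-score cutoff.
    mean, std = df["latency_ms"].mean(), df["latency_ms"].std()
    df["outlier"] = (df["latency_ms"] - mean).abs() > 3 * std
    print(df[df["outlier"]])  # the runs worth a closer look

    # Plot the series and highlight the outliers.
    plt.plot(df["run"], df["latency_ms"], label="latency (ms)")
    plt.scatter(df.loc[df["outlier"], "run"], df.loc[df["outlier"], "latency_ms"],
                color="red", label="outliers")
    plt.legend()
    plt.savefig("latency.png")

Nothing hard, just the sort of boilerplate that eats an afternoon if you don't do CSV plotting every day.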
But I refuse to use it as anything more than a fancy autocomplete. If it suggests code that's pretty close to what I was about to type anyway, I accept it.
This ensures that I still understand my code, that there shouldn't be any hallucination derived bugs, [1] and there really shouldn't be any questions about copyright if I was about to type it.
I find using copilot this way speeds me up. Not really because my typing is slow, it's more that I have a habit of getting bored and distracted while typing. Copilot helps me get to the next thinking/debugging part sooner.
My brain really can't comprehend the idea that anyone would not want to understand their code. Especially if they are going to submit it as a PR.
And I'm a little annoyed that the existence of such people is resulting in policies that will stop me from using LLMs as autocomplete when submitting to open source projects.
I have tried using copilot in other ways. I'd love for it to be able to do menial refactoring tasks for me. But every time I experiment, it seems to fall off the rails so fast. Or it just ends up slower than what I could do manually because it has to re-generate all my code instead of just editing it.
[1] Though I find it really interesting that if I'm in the middle of typing a bug, copilot is very happy to autocomplete it in its buggy form. Even when the bug is obvious from local context, like I've typoed a variable name.
I suspect their concern is not so much whether users own the copyright to AI output but rather the risk that AI will spit out code from its training set that belongs to another project.
Most hypervisors are closed source and some are developed by litigious companies.
I would wager good money that in a few years the most security-focused companies will be relying heavily on AI somewhere in their software supply chain.
So I don't think this policy is about security posture. No doubt human experts are reviewing the security-relevant patches anyway.
this is everything that it spits out
Pull your pants up.
I'm not sure that's the dunk you think it is. Good for Netflix for making money, but we're drowning in their empty slop content now and worse off for it.
In the former case, disentangling AI-edits from human edits could tie a project up in legal proceedings for years and projects don't have any funding to fight a copyright suit. Specifically, code that is AI-generated and subsequently modified or incorporated in the rest of the code would raise the question of whether subsequent human edits were non-fair-use derivative works.
In the latter case the license restrictions no longer apply to portions of the codebase raising similar issues from derived code; a project that is only 98% OSS/FS licensed suddenly has much less leverage in takedowns to companies abusing the license terms; having to prove that infringers are definitely using the human-generated and licensed code.
Proprietary software is only mildly harmed in either case; it would require speculative copyright owners to disassemble their binaries and try to make the case that AI-generated code infringed without being able to see the codebase itself. And plenty of proprietary software has public domain code in it already.
https://news.artnet.com/art-world/ai-art-us-copyright-office...
https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...
I'm pretty sure that this ship has sailed.
Remember, anyone can attempt to sue anyone for anything at any time in a functional system. How far the suit makes it is a different matter.
#1 There will be no verifiable way to prove something was AI generated beyond early models.
#2 Software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects. The only room for debate on that is an apocalypse level scenario where humans fail to continue producing semiconductors or electricity.
#3 If a project successfully excludes AI contributions (not clear how other than controlling contributions to a tight group of anti-AI fanatics), it's just going to be cloned, and the clones will leave it in the dust. If the license permits forking then it could be forked too, but cloning and purging any potential legal issues might be preferred.
There still is a path for open source projects. It will be different. There's going to be much, much more software in the future and it's not going to be all junk (although 99% might.)
Still waiting to see evidence of AI-driven projects eating the lunch of "traditional" projects.
It's only going to get more pervasive from now on.
I don't think anyone is claiming that. If you submit changes to a FOSS project and an LLM assisted you in writing them how would anyone know? Assuming at least that you are an otherwise competent developer and that you carefully review all code before you commit it.
The (admittedly still controversial) claim being made is that developers with LLM assistance are more productive than those without. Further, that there is little incentive for such developers to advertise this assistance. Less trouble for all involved to represent it as 100% your own unassisted work.
The others are just too specific for me to be useful for anyone else: an android app for automatic processing of some text messages and a work scheduling/prioritising thing. The time to make them generic enough to share would be much longer than creating my specific version in the first place.
I'm getting towards the end of a vibe-coded ZFS storage backend for ganeti that includes the ability to live migrate VMs to another host by: taking a snapshot and replicating it to the target, pausing the VM, taking another incremental snapshot and replicating it, and then unpausing the VM on the new destination machine. https://github.com/linsomniac/ganeti/tree/newzfs
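The core of that live-migration flow is roughly the following (a rough Python sketch, not the code in that branch; the dataset, VM names, and hypervisor helpers are hypothetical placeholders):

    # Sketch of the snapshot/replicate/pause/replicate/unpause sequence above.
    import subprocess

    def zfs_send(dataset, snap, target_host, incremental_from=None):
        """Replicate dataset@snap to target_host via zfs send | ssh zfs recv."""
        send = ["zfs", "send"]
        if incremental_from:
            send += ["-i", f"@{incremental_from}"]
        send.append(f"{dataset}@{snap}")
        recv = ["ssh", target_host, "zfs", "recv", "-F", dataset]
        sender = subprocess.Popen(send, stdout=subprocess.PIPE)
        subprocess.run(recv, stdin=sender.stdout, check=True)
        sender.wait()

    def pause_vm(name):
        print(f"pause {name}")             # placeholder for the hypervisor call

    def resume_vm_on(name, host):
        print(f"resume {name} on {host}")  # placeholder for the hypervisor call

    def live_migrate(dataset, vm, target_host):
        # 1. Bulk copy while the VM keeps running.
        subprocess.run(["zfs", "snapshot", f"{dataset}@migrate-1"], check=True)
        zfs_send(dataset, "migrate-1", target_host)
        # 2. Pause the VM so no further writes land on the source.
        pause_vm(vm)
        # 3. Small incremental snapshot to catch up, then cut over.
        subprocess.run(["zfs", "snapshot", f"{dataset}@migrate-2"], check=True)
        zfs_send(dataset, "migrate-2", target_host, incremental_from="migrate-1")
        # 4. Resume the VM on the destination.
        resume_vm_on(vm, target_host)

The bulk send happens while the VM is still running, so it only has to stay paused for the small incremental catch-up send.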
Other LLM tools I've built this week:
This afternoon I built a web-based SQL query editor/runner with results display, for dev/ops people to run read-only queries against our production database. To replace an existing super simple one, and add query syntax highlighting, snippet library, and other modern features. I can probably release this though I'd need to verify that it won't leak anything. Targets SQL Server.
A couple of CLI Jira tools to pull a list of tickets I'm working on (with a cache so I can get an immediate response, then get updates after the Jira response comes back), and tickets with tags that indicate I have to handle them specially.
An icinga CLI that downtimes hosts, for when we do sweeping machine maintenances like rebooting a VM host with dozens of monitored children.
An Ansible module that is a "swiss army knife" for filesystem manipulation, merging the functions of copy, template, and file, so you can loop over a list and: create a directory, template a couple of files into it (doing a notify on one and a when on another), and ensure a file exists if it doesn't already, to reduce duplication of boilerplate when doing a bunch of file deploys. This I will release as an Ansible Galaxy module once I have it tested a little more.
"competitive", meaning: "most features/lines of code emitted" might matter to a PHB or Microsoft
but has never mattered to open source
> The policy we set now must be for today, and be open to revision. It's best to start strict and safe, then relax.
So, no need for the drama.
I am personally somewhere in the middle, just good enough to know I am really bad at this so I make sure that I don't contribute to anything that is actually important ( like QEMU ).
But how many people recognize their own strengths and weaknesses? That is part of the problem and now we are proposing that even that modicum of self-regulation ( as flawed as it is ) be removed.
FWIW, I hear you. I also don't have an answer. Just thinking out loud.
Yeah I don’t think so. But if it does then who cares? AI can just make a better QEMU at that point I guess.
They aren’t hurting anyone with this stance (except the AI hype lords), which I’m pretty sure isn’t actually an anti-AI stance, but a pragmatic response to AI slop in its current state.
For what it's worth, I think AI for code will arrive at a place like how other coding tools sit – hinting, intellisense, linting, maybe even static or dynamic analysis, but I doubt NOT using AI will be a critical asset to productivity.
Someone else in the thread already mentioned it's a bit of an amplifier. If you're good, it can make you better, but if you're bad it just spreads your poor skills like a robot vacuum spreads animal waste.
Which is entirely reasonable. The trend of people, say on HN, saying "I asked an LLM and this is what it said..." is infuriating.
It's just an upfront declaration that if your answer to something is "it's what Claude thinks" then it's not getting merged.
For example, if using Copilot, Microsoft also has every commit ever made if the project is on GitHub.
They could, theoretically, determine what did or didn't come out of their models and was integrated into source trees.
Regarding #2 and #3, with relatively novel software like QEMU that models platforms that other open source software doesn't, LLMs might not be a good fit for contributions. Especially where emulation and hardware accuracy, timing, quirks, errata etc matter.
For example, modeling a new architecture or emulating new hardware might have LLMs generating convincing looking nonsense. Similarly, integrating them with newly added and changing APIs like in kvm might be a poor choice for LLM use.
I see those glasses as becoming just a part of me, just like my current dumb glasses are a part of me that enables me to see better, the smart glasses will help me to see AND think better.
My brain was trained on a lot of proprietary code as well, the copyright issues around AI models are pointless western NIMBY thinking and will lead to the downfall of western civilization if they keep pursuing legal what-ifs as an excuse to reject awesome technology.
Basically I think open source has traditionally HEAVILY relied on hidden competency markers to judge the quality of incoming contributions. LLMs turn that entire concept on its head by presenting code that has competency markers but none of the backing experience. It is a very, very jarring experience for experienced individuals.
I suspect that virtual or in person meetings and other forms of social proof independent of the actual PR will become far more crucial for making inroads in large projects in the future.
So it goes back for changes. It returns the next day with complete rewrites of large chunks. More "lgtm" from others. More incredibly obvious flaws, race conditions, the works.
And then round three repeats mistakes that came up in round one, because LLMs don't learn.
This is not a future style of work that I look forward to participating in.
It looks like LLMs are not good for cooperation, because the nature of an LLM is randomness.
It's like complaining that I may have no legal right to submit my stick figure because I potentially copied it from the drawing of another stick figure.
I'm firmly convinced that these policies are only written to have plausible deniability when stuff with generated code gets inevitably submitted anyway. There's no way the people that write these things aren't aware they're completely unenforceable.
Of course it is. And nobody said otherwise, because that is explicitly stated in the commit message:
[...] More broadly there is,
as yet, no broad consensus on the licensing implications of code
generators trained on inputs under a wide variety of licenses
And in the patch itself: [...] With AI
content generators, the copyright and license status of the output is
ill-defined with no generally accepted, settled legal foundation.
What other commenters pointed out is that, beyond the legal issue, other problems also arise from the use of AI-generated code. The thinking here is probably similar: if AI-generated code becomes poisonous and is detected in a project, the DCO could allow shedding liability onto the contributor who said it wasn't AI-generated.
Don’t be ridiculous. The majority of people are in fact honest, and won’t submit such code; the major effect of the policy is to prevent those contributions.
Then you get plausible deniability for code submitted by villains, sure, but I’d like to hope that’s rare.
Because for projects like QEMU, current AI models can actually do mind-boggling stuff. You can give it a PDF describing an instruction set, and it will generate you wrapper classes for emulating particular instructions. Then you can give it one class like this and a few paragraphs from the datasheet, and it will spit out unit tests checking that your class works as the CPU vendor describes.
Like, you can get from 0% to 100% test coverage several orders of magnitude faster than doing it by hand. Or refactoring, where you want to add support for a particular memory virtualization trick and need to update 100 instruction classes based on a straightforward, but not 100% formal, rule. A human developer would be pulling their hair out, while an LLM will do it faster than you can get a coffee.
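To make that concrete, the pattern is roughly this (a hypothetical Python illustration; the CPU model and the ADD semantics are made up, not taken from any real datasheet or from QEMU):

    # Hypothetical "wrapper class per instruction plus a datasheet-derived
    # unit test" pattern, as described above. Not any real ISA.
    import unittest

    class Cpu:
        def __init__(self):
            self.regs = [0] * 16              # general purpose registers
            self.flags = {"Z": 0, "C": 0}     # zero and carry flags

    class AddInstruction:
        """ADD rd, rn, rm: rd = rn + rm (32-bit), updating Z and C."""
        def __init__(self, rd, rn, rm):
            self.rd, self.rn, self.rm = rd, rn, rm

        def execute(self, cpu):
            result = cpu.regs[self.rn] + cpu.regs[self.rm]
            cpu.flags["C"] = 1 if result > 0xFFFFFFFF else 0
            cpu.regs[self.rd] = result & 0xFFFFFFFF
            cpu.flags["Z"] = 1 if cpu.regs[self.rd] == 0 else 0

    class TestAddInstruction(unittest.TestCase):
        def test_add_sets_carry_and_zero_on_overflow(self):
            cpu = Cpu()
            cpu.regs[1], cpu.regs[2] = 0xFFFFFFFF, 1
            AddInstruction(0, 1, 2).execute(cpu)
            self.assertEqual(cpu.regs[0], 0)
            self.assertEqual(cpu.flags["C"], 1)
            self.assertEqual(cpu.flags["Z"], 1)

    if __name__ == "__main__":
        unittest.main()

The tedious part is doing that for a few hundred instructions and flag corner cases, which is exactly the kind of mechanical expansion an LLM is good at churning through.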
It might actually be prudent for some (perhaps many foundational) OSS projects to reject AI until the full legal case law precedent has been established. If they begin taking contributions and we find out later that courts find this is in violation of some third party's copyright (as shocking as that outcome may seem), that puts these projects in jeopardy. And they certainly do not have the funding or bandwidth to avoid litigation. Or to handle a complete rollback to pre-AI background states.
There are simple algorithms that everyone will implement the same way down to the variable names, but aside from those fairly rare exceptions, there's no "maximum number of lines" metric to describe how much code is "fair use" regardless of the licence of the code "fair use"d in your scenario.
Depending on the context, even in the US that 5-second clip would not pass fair use doctrine muster. If I made a new film cut entirely from five second clips of different movies and tried a fair use doctrine defence, I would likely never see the outside of a courtroom for the rest of my life. If I tried to do so with licensing, I would probably pay more than it cost to make all those movies.
Look up the decisions over the last two decades over sampling (there are albums from the late 80s and 90s — when sampling was relatively new — which will never see another pressing or release because of these decisions). The musicians and producers who chose the samples thought they would be covered by fair use.
Seeing this new phenomenon must be difficult for those people who have spent a long time perfecting their craft. Essentially, they might feel that their skillsets are being undermined. It would be especially hard for people who associate a lot of their self-identity with their job.
Being a purist is noble, but I think that this stance is foolish. Essentially, people who chose not to use AI code tools will be overtaken by the people who do. That's the unfortunate reality.
Makes total sense.
I am just wondering how we differentiate between AI-generated code and human-written code that is influenced by or copied from some unknown source. The same licensing problem may happen with human code as well, especially for OSS where anyone can contribute.
Given the current usage, I am not sure AI-generated code has an identity of its own. It's really a tool in the hand of a human.
ai moves faster than group consensus. this ban won't slow down the tech, it'll just make paradigms like qemu harder to enter, harder to scale, harder to test thru properly
so if we maintain code like this we gotta know the trade we're making: we're preserving trust but limiting throughput. maybe fine idk, but don't confuse it with future proofing
i kinda feel it exposes that trust in oss is social not epistemic. we accept complex things if we know who dropped it and we reject clean things if it smells synthetic
so the real qn isn't > did we use ai? it's > can we even maintain this in 6mo? and if the answer's yes it doesn't really matter who produced the code fr
Universities have this issue too, despite many offering students and staff Grammarly (Gen AI) while also trying to ban Gen AI.
I'm sure that if a contributor working on a feature used Cursor to initially generate the code but then went over it to ensure it's working as expected, that would be allowed; this is more for those folks who just want to jam in a quick vibe-coded PR so they can add "contributed to the QEMU project" to their resumes.
Use AI if you want to, but if the person on the other side can tell, and you can't defend the submission as your own, that's a problem.
The actual policy is "don't use AI code generators"; don't try to weasel that into "use it if you want to, but if the person on the other side can tell". That's effectively "it's only cheating if you get caught".
By way of analogy, Open Source projects also typically have policies (whether written or unwritten) that you only submit code you are legally allowed to submit. In theory, you could take a pile of proprietary reverse-engineered code that you have no license to, or a pile of code from another project that you aren't respecting the license of, and submit it anyway, and slap a `Signed-off-by` on it. Nothing will physically stop you, and people might not be able to tell. That doesn't make it OK.
Getting AI to remind you of the libraries API is a fair bit different to having it generate 1000 lines of code you have hardly read before submitting.
The rules regarding the origin of code contributions are rather strict; that is, you can't contribute other people's code unless you can make sure that the licence is appropriate. An LLM may output a copy of someone else's code, sometimes verbatim, without giving you its origin, so you can't contribute code written by an LLM.