It's just a side cost of doing business, because asking for forgiveness is cheaper and faster than asking for permission.
That case was important, but it's not about the virality. There have been no concluded court cases involving the virality portion causing the rest of the code to also be GPL'd, but there are plenty involving enforcement of the GPL on the GPL code itself.
The distinction is important because the article is about the virality causing the whole LLM model to be GPL'd, not just about the GPL'd code itself.
I'd like to think it wouldn't be a problem to enforce, but I've also never seen a court ruling truly about the virality portion to back that up either - which is all GP is saying.
It's sad to see Microsoft's FUD still festering 20 years later.
You're basically saying "the GPL doesn't go back in time and relicense unrelated code." But nobody was ever claiming it does, and describing it as "viral" doesn't imply that it does. It's "viral" because code that you attach to it has to conform to its rules. It's good that the GPL is viral. I want it to be viral, I don't want people to be able to hide GPL'd code in a proprietary structure.
What you're calling the "virality portion" says that one of the ways you *are* allowed to use the code is as part of other GPLed software. If you're going to look for court cases that explicitly "involve" that, it would have to be someone either:
* using it as a defense, i.e. saying "we're covered by the GPL because the software we embedded this code in is GPL" (That will probably never happen because people don't sue GPLed projects for containing GPLed code), or
* coming into line with the GPL by open sourcing their own code as part of resolving a case (The BusyBox case [2] was an example of that).
If you just want cases where companies that were distributing GPL code in closed source software were prevented from doing so, the Cisco [1] and BusyBox [2] cases were both notable examples. That they were settled doesn't somehow make them a weaker "test of the GPL" - rather the companies involved didn't even attempt to argue that what they were doing was permitted. They came into line and coughed up. If you really must insist on one where the defendant dug in and the court ended up awarding damages, I don't think there have been any in the US but there has been one in France [3].
As for "nobody was ever claiming it does", the "viral" wording has been used for as long as the GPL has been around as a scare tactic for introducing exactly that erroneous idea. Even in cases where people understand what the license says, it leads to subtle misunderstandings of the law, which is why the Free Software Foundation discourages its use. (Also, you literally said, in these exact words, "the virality causing the whole LLM model to be GPL'd".)
[1] https://en.wikipedia.org/wiki/Free_Software_Foundation,_Inc.....
[2] https://en.wikipedia.org/wiki/BusyBox#GPL_lawsuits
[3] https://www.dlapiper.com/en/insights/publications/2024/03/wa...
The Cisco case was about distributing GPL binaries, not about linking GPL code with the rest of a code base and that code base then needing to be GPL'd. It's standard license enforcement, unrelated to the unique requirements of the GPL.
The BusyBox case is probably the closest in the list, but as you already point out we didn't get a ruling to set precedent and instead got a settlement. It seems obvious what the ruling would be (to me at least), but settlement was explicitly not what is being talked about.
Bringing in the French courts, they issued fines - they didn't issue the type of order this article talks about, which is releasing the entire derived work under the GPL.
This isn't related to fear, uncertainty, or doubt about GPL. It's related to what has/hasn't already been ruled in the court systems handling this license before, which the article skips past a bit. Even if we assume the courts will rule the way that seems obvious (to me at least), it makes a tangible difference in how these cases will be run, the assumptions they will carry, and how long they will last.
It has never been the case that including GPL code in your software automatically makes your software GPL or even requires you to make it GPL. If you do get sued because you are distributing GPL code in a way that colloquially "violates the GPL" (technically, rather, in a way that is not covered by the GPL or by fair use or any other licence, so it violates copyright) you might choose to GPL your code as a way of coming into compliance, but doing so is neither the only way to achieve compliance (you can instead remove the GPL code, and companies with significant investments in their proprietary code typically do that), nor a remedy for the harm done by your copyright violation to date, which you will typically have to remedy financially, via damages or a settlement.
As for legally testing, you seem to want a court to explicitly adjudicate against something so obviously wrong that in well over 20 years of FSF enforcement (edit: actually around 40 years) no company has been daft enough to try and argue it in court.
It might help if you try and delineate exactly what sort of case you'd accept as proof of "enforceability" of "virality". I think it would have to be something like a company embedding GPL code in proprietary code and then trying to argue in court that doing so is explicitly permitted by the GPL, and sticking to their guns all the way to a verdict against them. I'm not sure whether that argument would be considered frivolous enough to get the lawyers involved censured, but I certainly doubt a judge would be impressed.
If it helps make it any clearer, if in defending against a case like this your lawyer were to try and argue that the GPL is invalid and somehow just void, you should fire them immediately because they're trying to do the legal equivalent of shooting their own feet off. The GPL is what allows distribution of code, and allowing things is all it can do, because it is a license (not a contract). It can't forbid anything, and removing it from the equation can only decrease the set of things you are allowed to do with the copyrighted code.
Including GPL code in your app requires/results in different things depending on how you do it. E.g. the way Cisco did it with binaries is different from doing it with static linking, which is different from dynamic linking/syscalls/APIs, which is different from including the code directly. It's not possible to talk about it as generically as above, especially in the context of a discussion around an article adding a new method of interaction.
Yes, the point is precisely that the article explicitly asks about this point being tested in court rulings, and the comment is that it has never needed to go beyond settlement (usually not even that far). I also don't really agree with how the article assumes things around that in a few places, but that's neither here nor there to this point.
It's not that I want proof; it's that the article you admit to not reading sets out to look at court cases to "consider the path through which the theory of license propagation to AI models might be recognized in the future". In that regard it's pretty relevant to note that no past court case, nor really the two ongoing ones in the article, involves propagation of the license to the whole entity yet.
The difference between a license and a contract may be too subtle for the denizens of HN to grasp in 2025 but I assure you it's not lost on the legal system. It's not lost on those of us who followed groklaw back in the day, either. Sad we have to live with an internet devoid of such joys now.
I do miss groklaw; it's been far too long without something like that appearing again.
The "enforceability" of the GPL was never in any doubt because it's not a contract and doesn't need to be "enforced". The license grants you freedoms you otherwise may not have under copyright. It doesn't deny you any freedoms you would otherwise have, and it cannot do so because it is not a contract. If the terms of the GPL don't apply to your use then all you have is the normal freedoms under copyright law, which may prohibit it. If so, any "enforcement" isn't enforcement of the GPL. It's enforcement of copyright, and there's certainly no doubt on the enforceability of that.
For the GPL to "fail" in court it would have to be found to effectively grant greater freedoms than it was designed to do (or less, resulting in some use not being allowed when it should be, but that's not the sort of case being considered here). It doesn't, and it has repeatedly stood up in court as not granting additional freedoms than were intended.
You are then restricted by copyright just like with any other creation.
If I include the source code of Windows in my product, I can't simply choose to re-license it to, say, public domain and give it to someone else; the license that I have from Microsoft to allow me to use their code won't let me - it provides restrictions. It's just as "viral" as the GPL.
Also, "don't use my code" is not viral. If you break the MSFT license, you pay them, which is a very well-tested path in courts. The idea of forced public disclosure does not seem to be.
If the GPL license didn't exist, and instead you were just relying on copyright, then that's an injunction. You have to stop using the code you "stole" and pay reparations.
In UK law, if you distribute copyright material in the course of a business you can be facing 10 years in prison and an unlimited fine.
Sure you can't get them to agree to the GPL, they could simply stop distributing and then turn up to their stint in prison and massive fine. In reality I suspect they would take the easy way out and comply with the license.
Corporations can't go to prison.
A major problem with the western world, total lack of accountability
For GPL a court could fine them the value of their company, or settle by releasing the code. A court won't, but that's because the system is built to protect moneyed entities over individuals.
You could give 15% of the ownership to the aggrieved party.
Conversely, to my knowledge there has been no court decision that indicates that the GPL is _not_ enforceable. I think you might want to be more familiar with the area before you decide if it's legally questionable or not.
I also have the feeling it will be much like Google LLC v. Oracle America, Inc.: much of this won't really be clearly resolved until the end of the decade. I'd also not be surprised if seemingly very different answers ended up bubbling up in the different cases, driven by the specifics of the domain.
Not a lawyer, just excited to see the outcomes :).
Democracy is the worst system we’ve tried, except for all the others.
(Also: The GPL can only be enforced because of laws passed by Congress in the late ‘70’s and early ‘80’s. And believe you me, people said all the same kinds of things about those clowns in Congress. Plus ça change…)
This solution to me amounts to an "everybody wins" situation, where producers of material are compensated, model trainers and companies can get clean, reliable data sets without having to waste time and energy scraping and digitizing it themselves, and model users can have access to a number of known "safe" models. At the same time, people not interested in "allowing" their works to be used to train AIs and people not interested in only using the public data sets can each choose to not participate in this system, and then individually resolve their copyright disputes as normal.
> You can get one doubling of that by submitting your work to an official "library of congress" data set
Needs to be done at day 0 and made available at day 0 for usage. Maybe with standardised availability for usage, e.g. licensing X, Y, or Z, or non-standard call-us-for-pricing.
The world moves fast and time really matters e.g. look at how the wait for patents to expire affects outcomes
{ "includeCoAuthoredBy": false }
They could start selling a version of Word tomorrow that gives them the right to train from everything you type on your entire computer into any program. Or that requires you to relinquish your rights to your writing and to license it back from Microsoft, and to only be able to dispute this through arbitration. They could add a morals clause.
they could, but would anyone agree to this new eula? If they did, then what's the problem?
You can do whatever you want with the software, BUT you must do a few things. For GPL it's keeping the license, distributing the source, etc. Why can't we have a different license with the same kind of restrictions, but also "Models trained on this licensed work must be open source".
Edit: Plus the license would not be "GPL+restriction" but a new license altogether, which includes the requirements for models to be open.
I suggest a careful reading of the GNU GPL, or the definition of Free Software, where this is carefully explained.
"A work based on the program" can be defined to include AI models (just define it, it's your contract). "All of these conditions" can include conveying the AI model in an open source license.
I'm not restricting your ability to use the program/code to train an AI. I'm imposing conditions (the same as the GPL does for code) onto the AI model that is derivative of the licensed code.
Edit: I know it may not be the best section (the one after regarding non-source forms could be better) but in spirit, it's exactly the same imo as GPL forcing you to keep the GPL license on the work
Using AGPL as the base instead of GPL (where network access is distribution), any user of the software will have the rights to the source code of the AI model and weights.
My goal is not to impose more restrictions to the AI maker, but to guarantee rights to the user of software that was trained on my open source code.
"The freedom to run the program as you wish, for any purpose (freedom 0)."
You are still free to train on the licensed work, BUT you must meet the requirements (just like the GPL), which would include making the model open source/weight.
If I print a Harry Potter book in red ink, then I won't have any copyright issues?
I don't think changing how the information is stored removes copyright.
I can see how it pushes the boundary, but I can't lay out the logic that it's not. The code has been published for the public to see. I'm always allowed to read it, remember it, tell my friends about it. Certainly, this is what the author hoped I would do. Otherwise, wouldn't they have kept it to themselves?
These agents are just doing a more sophisticated, faster version of that same act.
I think this is the part where we disagree. Have you used LLMs, or is this based on something you read?
I don't remember the exact case now, but someone was cloning a program (Lotus123 -> Quatro or Excel???). They printed every single screen and made a team write a full specification in English. Later a separate team looked at the screenshots and text and reimplemented it. Apparently meatballs can get tainted, but the plain-English-text loophole was safe enough.
[1] From https://gitlab.winehq.org/wine/wine/-/wikis/Developer-FAQ#wh...
> Who can't contribute to Wine?
> Some people cannot contribute to Wine because of potential copyright violation. This would be anyone who has seen Microsoft Windows source code (stolen, under an NDA, disassembled, or otherwise). There are some exceptions for the source code of add-on components (ATL, MFC, msvcrt); see the next question.
This is close to how I would actually recommend reimplementing a legacy system (owned by the re-implementer) with AI SWE. Not to avoid copyright, but to get the AI to build up everything it needs to maintain the system over a long period of time. The separate team is just a new AI instance whose context doesn't contain the legacy code (because that would pollute the new result). The analogy isn't too apt though, since there is a difference between having something in your context (which you can control and is very targeted) and the code that the model was trained on (which all AI instances will share unless you use different models, and anyway, it isn't supposed to be targeted).
If the training is established as fair use, the underlying license doesn't really matter. The term you added would likely be void or deemed unenforceable if someone ever brought it to a court.
But this is all grey area… https://www.authorsalliance.org/2023/02/23/fair-use-week-202...
This principle is also explicitly declared in US law:
> In no case does copyright protection for an original work of authorship extend to any idea, procedure, process, system, method of operation, concept, principle, or discovery, regardless of the form in which it is described, explained, illustrated, or embodied in such work. (Section 102 of the U.S. Copyright Act)
https://www.copyrightlaws.com/are-ideas-protected-by-copyrig...
The problem is that openai has too much money. But if I did what they are doing I'd get into massive legal troubles.
Like if I copy-paste GPL-licenced code, the way you realise that I copy-pasted it is because 1) you can see it and 2) the GPL-licenced code exists. But when code is LLM generated, it is "new". If I claim I wrote it, how would you oppose that?
[0] https://factually.co/fact-checks/justice/evidence-investigat...
If your close-sourced project uses some GPL code, it doesn't automatically put your whole project in public domain or under GPL. It just means you're infringing the right of the code author and they can sue you (for money and stopping using their code, not for making your whole project GPL).
In the simplest terms, GPL is:
    if codebase.is_gpl_compatible:
        gpl_code.give_permission(codebase)
    elif codebase.is_using(gpl_code):
        raise CopyrightInfringement  # the copyright owner and the court deal with this under the usual copyright laws
GPL can't do much more than that. A license over a piece of code cannot automatically change the copyright status of another piece of code. There simply isn't a legal framework for that. Similarly, AI code's copyleft status can't affect the rest of the codebase, unless we make new laws specifically saying that.
Also similarly, even if GitHub lost the class action, that would NOT automatically release the model to the public under the GPL. It would open the possibility for all the GPL repo authors to ask Microsoft for compensation for stealing their code.
they can sue you and settle for whatever you will accept and that makes them happy.
if you lose then the alternative to not making your code GPL is to make your code disappear, that is you are no longer allowed to sell your product.
consequently, if AI code is subject to the GPL then the rest of the codebase is too, or the alternative would be that it could not be distributed.
Secondly, GPL can't "make your (proprietary) code disappear." Violating GPL is essentially just stealing code. One cannot distribute the version that includes stolen code. But they can remove the stolen part and replace it with their own code. Of course they still need to settle/pay for the previous infringement.
GPL simply can't affect the copyright status of rest of the codebase, because it's a license, not a contract. It cannot restrict the user's right further than the copyright laws.
Again, it's very common misunderstanding of GPL's "virality." It has been a several-decade long debate about whether GPL should be treated like a contract instead of a mere license, but there is no ruling giving it this special legal state (yet), at least in the US.
[0]: https://lwn.net/Articles/61292/ [1]: https://en.wikipedia.org/wiki/GNU_General_Public_License#Leg...
if AI generates something that is equal to existing code, then the license of that code applies. the AI generated product as a whole can't be copyrighted, but the portions that reproduce copyrighted code retain the original copyright.
> they can remove the stolen part and replace it with their own code
sure, if they can do that, then they can distribute their code again. but until then they can't.
No, it doesn't, if the generation is independent of the existing code. If a person using AI uses existing code and makes a literal copy of it, then, yes, the copyright (and any license offer applicable in the circumstances) of the existing code may apply (it may also not, the same as with copies of portions of code made by other means), and it's less than clear (especially for small portions of code) whether legally such a copy has been made when a work is in the training set.
Copyright protects against copying. It doesn't protect against someone creating the same content by means other than copying.
well, that's the big question, isn't it? if the code is used for training AI and the AI reproduces the same code, is that really independent?
i don't think so.
> Copyright protects against copying. It doesn't protect against someone creating the same content by means other than copying.
if the code is the same, how do you prove it's not a copy?
it's the same problem as with plagiarism, isn't it?
if AI has seen that code in training, then this defense is no longer possible.
https://en.wikipedia.org/wiki/Cleanroom_software_engineering
Also, humans do not need to read millions of pirated books to learn to talk. And a human artist doesn't need to steal millions of pictures to learn to draw.
They... do? Not just pictures, but also real life data, which is a lot more data than an average modern ML system has. An average artist has probably seen (read: stolen) millions of pictures from their social media feeds over their lifetime.
Also, claiming to be anti-capitalist while defending one of the most offensive types of private property there is. The whole point of anti-capitalism is being anti private property. And copyright is private property because it gives you power over others. You must be against copyright and be against the concept of "stealing pictures" if you are to be an anti-capitalist.
Owning a song, a book or a picture doesn't give you much power by itself.
Generally speaking licenses give rights (they literally grant license). They can’t take rights away, only the legislature can do that.
"Why forbid selling drugs when you can just put a warning label on them? And you could clarify that an overdose is lethal."
It doesn't solve any problems and just pushes enforcement actions into a hopelessly diffuse space. Meanwhile the cartel continues to profit and small time users are temporarily incarcerated.
It doesn't follow. The reverse is more likely: If you end prohibition, you end the mafia.
My view is that copyright in general is a pretty abstract and artificial concept; thus the corresponding regulation needs to justify itself by being useful, i.e. encouraging and rewarding content creation.
/sidenote: Copyright as-is barely holds up there; I would argue that nobody (not even old established companies) is significantly encouraged or incentivised by potential revenue more than 20 years in the future (much less current copyright durations). The system also leads to bad resource allocation, with almost all the rewards ending up at a small handful of the most successful producers-- this effectively externalizes large portions of the cost of "raising" artists.
I view AI overlap under the same lens-- if current copyright rules would lead to undesirable outcomes (by making all AI training or use illegal/infeasible) then law/interpretation simply has to be changed.
It's all about whose outcomes are optimized.
Of course, the law generally favors consideration of the outcomes for the massive corporations donating hundreds of millions of dollars to legislature campaigns.
I think the redistribution effect (towards training material providers) from such a scenario would be marginal at best, especially long-term, and even that might be over-optimistic.
I also dislike that stance because it seems obviously inconsistent to me-- if humans are allowed to train on copyrighted material without their output being generally affected, why not machines?
Specifically what "material differences" are there? The only arguments I've heard are around human exceptionalism (eg. "brains are different, because... they just are ok?"), or giving humans a pass because they're not evil corporations.
LLMs just predict the statistically-most-likely token.
LLMs are what we have today. They can't generalize. Humans can.
That might change someday, but it certainly hasn't yet.
So, I'll provide an example: humans can learn to do mathematics. LLMs cannot. This example is particularly galling because there are computer programs that can do (some, limited) mathematics: those operate largely by brute-force, yet can solve more mathematics problems using fewer resources than LLMs.
Is the existence of my brain copyright infringement?
The main difference I see (apart from that I bullshit way less than LLMs), is that I can't learn nearly as much as an LLM and I can't talk to 100k people at once 24/7.
I think the real answer here is that AI is a totally new kind of copying, and it's useful enough that laws are going to have to change to accommodate that. What country is going to shoot itself in the foot so much by essentially banning AI, just so it can feel smug about keeping its 20th century copyright laws?
Maybe that will change when you can just type "generate a feature length Pixar blockbuster hit", but I don't see that happening for quite a long time.
Not sure about undesirable, I so wish we could just ban all generative AI.
I feel profound sadness at having lost the world we had before generative AI became widespread. I really loved programming, and seeing my trade devalued with vibe coding is just heartbreaking. We will see mass unemployment, deep fakes, more AI-induced psychosis, a devaluing of human art. I hate this new world.
It would be the morally correct thing to ban generative AI, as it only benefits corporations and doesn't improve people's lives but makes them worse.
The training of the big LLMs has been criminal. Whether we talk about GPL-licensed code or the millions of artists who never released their work under a specific license and would never have consented to it being used for training.
I still think states will allow it and legalize the crime because they believe that AI offers competitive advantages and they will fear "falling behind". Plus military use.
I think it's like going from pre industrial revolution manual labor, to modern tools and machines.
You don't have any rights to assert when you have AI write the code for you.
Corporations have always talked about the virality of the GPL, sometimes but not always to the point of exaggeration. You'd think that after getting the proof of concept done, the AI companies would be running away at full speed from setting a bomb like that in their goldmine.
Putting in tons of commonly read books and scientific papers is safer, they can just eventually cross-license with the massive conglomerates that own everything. But the GPL is by nature hostile, and has been openly and specifically hostile from the beginning. MIT and Apache, etc. you can just include a fistful of licenses to download, or even come up with architectures that track names to add for attribution-ware. But the GPL will obviously (and legitimately) claim to have relicensed the entire model and maybe all its output (unless they restricted it to LGPL.)
Wouldn't you just pull it out?
I submit the evidence suggests the genAI companies have none of those attributes.
But I'm not certain that the relevant players have the same consequence-fearing mindset that you do, and to be honest they're probably right. The theft is too great to calculate the consequences, and by the time it's settled, what are you gonna do - turn off Forster's machine?
I hope you're right in at least some cases!
Why would the GPL settle? Even more, who is authorized to settle for every author who used the GPL? If the courts decided in favor of the GPL, which I think would be likely just because of the age and pervasiveness of the GPL, the AI companies would actually have to lobby Congress to write an exception to copyright rules for AI.
A large part of the infrastructure of the world is built on the GPL, and the people who wrote it were clearly motivated by the protection that they thought that the GPL would give to what was often a charitable act, or even an act that would allow companies to share code without having to compete with themselves. I can't imagine too many judges just going "nope."
If ultimately copyright holds up against the models*, the GPL will be a permanent holdout against any intellectual property-wide cross-licensing scheme. There's nobody to negotiate with other than the license itself, and it's not going to say anything it hasn't said before.
* It hasn't done well so far, but Obama didn't appoint any SCOTUS judges so maybe the public has a chance against the corporations there.
Haha no.
https://windsurf.com/blog/copilot-trains-on-gpl-codeium-does...
And just in the last two days, AI generating LGPL headers (which it could not do if identifying LGPL code was pulled from the codebase) and misattributing authors:
https://devclass.com/2025/11/27/ocaml-maintainers-reject-mas...
That first link shows people actively pulling out GPL code in 2023 and marketing around that fact, though. That's not great evidence that they're not doing it now, especially if testing for if GPL code is still in there seems to be as easy as prompting with an incomplete piece of it.
I'd think that companies could amass a collection of all known GPL code and test for it regularly in order to refine their methods for keeping it out.
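A minimal sketch of what such a regular test might look like, assuming you have a local corpus of known GPL files and some way to query the model under audit; the complete() hook here is hypothetical, not any vendor's actual API:

    # Prompt the model with the first half of each known GPL file and measure
    # how closely its completion matches the real second half.
    import difflib
    from pathlib import Path

    def complete(prompt: str) -> str:
        """Hypothetical hook: call whatever model you are auditing here."""
        raise NotImplementedError

    def leak_score(source: str) -> float:
        half = len(source) // 2
        prompt, expected = source[:half], source[half:]
        generated = complete(prompt)[:len(expected)]
        # A ratio near 1.0 means the model reproduces the file nearly verbatim.
        return difflib.SequenceMatcher(None, expected, generated).ratio()

    def audit(corpus_dir: str, threshold: float = 0.8):
        flagged = []
        for path in Path(corpus_dir).rglob("*.c"):
            score = leak_score(path.read_text(errors="ignore"))
            if score >= threshold:
                flagged.append((str(path), score))
        return flagged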
> (which it could not do if identifying LGPL code was pulled from the codebase)
Are you sure about this? Linking to LGPL code is fine afaik. And why not train on code that linked to universally available libraries that are legal to use? Seems like one might even prefer it.
Seems like this was rejected for size and slop reasons, not licensing. If the submitter of the PR isn't even fixing possibly hallucinated author's names, it's obvious that they didn't really read it. Debugging vibe coded stuff is like finding an indeterminate number of needles in a haystack.
Like if there is no way to trace it back to the original material, does it make sense to regulate it? Not that I like the idea, just wondering.
I have been thinking for a while that LLMs are copyright-laundering machines, and I am not sure if there is anything we can do about it other than accepting that it fundamentally changes what copyright is. Should I keep open sourcing my code now that the licence doesn't matter anymore? Is it worth writing blog posts now that it will just feed the LLMs that people use? etc.
Your LICENSE matters in similar ways to how it mattered before LLMs. LICENSE adherence is part of intellectual property law and practice. A popular engine may be popular, but not in all cases at all times. Do not despair!
On the other side, I deeply believe in the values of free software. My general stance is that all applications I open source are GPL or AGPL, and any libraries I open source are MIT. For the libraries, obviously anyone is free to use them, and if they want to rewrite them with an LLM more power to them. For the applications though, I see that as a violation of the license.
At the end of the day, I have competing values and needs and have to make a choice. The choice I've made for now is that for the vast majority of things, I'm still open sourcing them. The gift to humanity and the guarantee to the users freedom is more important to me than a theoretical threat. The one exception is anything that is truly a risk of getting lifted and used directly by competitors. I have not figured out an answer to this one yet, so for now I'm keeping it AGPL but not publicly distributing the code. I obviously still make the full code available to customers, and at least for now I've decided to trust my customers.
I think this is an issue we have to take week by week. I don't want to let fear of things cause us to make suboptimal decisions now. When there's an actual event that causes a reevaluation, I'll go from there.
An inverse of this question is arguably even more relevant: how do you prove that the output of your model is not copyrighted (or otherwise encumbered) material?
In other words, even if your model was trained strictly on copyleft material, but properly prompted outputs a copyrighted work is it copyright infringement and if so by whom?
Do not limit your thoughts to text only. "Draw me a cartoon picture of an anthropomorphic mouse with round black ears, red shorts and yellow boots". Does it matter if the training set was all copyleft if the final output is indistinguishable from a copyrighted character?
That's not legal use of the material according to most copyleft licenses. Regardless if you end up trying to reproduce it. It's also quite immoral if technically-strictly-speaking-maybe-not-unlawful.
That probably doesn't matter given the current rulings that training an AI model on otherwise legally acquired material is "fair use", because the copyleft license inherently only has power because of copyright.
I'm sure at some point we'll see litigation over a case where someone attempts to make "not using the material to train AI" a term of the sales contract for something, but my guess would be that if that went anywhere it would be on the back of contract law, not copyright law.
edit: wording.
Anything you produce will be consumed and regurgitated by the machine. It's a personal question for everyone whether you choose to keep providing grist for their mills.
It's much easier to do that for the data that was repeated many times across the dataset. Many pieces of GPL software are likely to fall under that.
Now, would that be enough to put the entire AI under GPL? I doubt it.
discovery via lawyers
https://github.com/ocaml/ocaml/pull/14369/files#diff-062dbbe...
Famously, the output from monkey "artists" was found to be non-copyrightable even though a monkey's brain is much more similar to ours than an LLM.
[1] https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...
What is missing in the "if I can remember and recite a program then they must be allowed to remember and recite programs" argument is that you choose to do it (and you have basic human rights and freedoms), and they do not.
Your brain is part of you. Some might say it is your very essence. You are human. Humans have inalienable rights that sometimes trump those enshrined by copyright. One such right is the right to remember things you've read. LLMs are not human, and thus don't enjoy such rights.
Moreover, your brain is not distributed to other people. It's more like a storage medium than a distribution. There is a lot less furore about LLMs that are just storage mediums, and where they themselves or their outputs are not distributed. They're obviously not very useful.
So your analogy is poor.
It would be if they could get away with it. The likes of Disney would delete your memories of their films if they could get away with it. If you want to enjoy the film, you should have to pay them for the privilege, not recall the last time you watched it.
I was writing code in an unrelated programming language at the time, and the bizarre inclusion of that particular file in the output was presumably because the name of the library was very similar to a keyword I was using in my existing code, but this experience did not fill me with confidence about the abilities of contemporary AI. ;-)
However, it did clearly demonstrate that LLMs with billions or even trillions of parameters certainly can embed enough information to reproduce some of the material they were trained on verbatim or very close to it.
The burden is on you to prove that you didn't.
It may produce it when asked
https://chatgpt.com/share/678e3306-c188-8002-a26c-ac1f32fee4...
that's not proof - it may also be intelligent enough to have produced similar expressions without the original training data.
Not to mention that having knowledge of copyrighted material is not in violation of any known copyright law - after all, human brains also have the knowledge after learning it. The model, therefore, cannot be in violation regardless of what data was used to train it (as long as that data was not obtained illegally).
If someone _chooses_ to use the LLM to reproduce Harry Potter, or some GPL'ed code, then that person would be in violation of the relevant copyright laws. The copyright owner needs to pursue that person, rather than the owner of the LLM. In the exact same way that if someone used Microsoft Word to reproduce Harry Potter, Microsoft would not have any liability.
Trump is trying to fire the head of the U.S. Copyright Office, but they work for the Library of Congress, not the executive branch, so that didn't work.[2]
[1] https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...
[2] https://apnews.com/article/trump-supreme-court-copyright-off...
Discovery.
Training data extraction has seen some success; tracing should be possible for at least some of it.
> The spirit of the GPL is to promote the free sharing and development of software [...] the reality is that they are proceeding in a different vector from the direction of code sharing idealized by GPL. If only the theory of GPL propagation to models walks alone, in reality, only data exclusion and closing off to avoid litigation risks will progress, and there is a fear that it will not lead to the expansion of free software culture.
The spirit of the GPL is the freedom of the user, not the code being freely shared. The virality is a byproduct to ensure the software is not stolen from its users. If you just want your code to be shared and used without restrictions, use MIT or some other license.
> What is important is how to realize the “freedom of software,” which is the philosophy of open source
Freedom of software means nothing. Freedoms are for humans, not immaterial code. Users get the freedom to enjoy the software how they like. Washing the code through an AI to purge it of its license goes against the open source philosophy. (I know this may be a mistranslation, but it goes in the same direction as the rest of the article).
I also don't agree with the arguments that since a lot of things are included in the model, the GPL code is only a small part of the whole, and that means it's okay. Well, if I take 1 GPL function and include it in my project, no matter its size, I would have to license it as GPL. Where is the line? Why would my software, which only contains a single GPL function, not be fair use?
who do you mean by "user"?
the spirit is that the person who actually uses the software also has the freedom to modify it, and that the users recovering these modifications have the same rights.
is that what you meant?
and while technically that's the spirit of the GPL, the license is not only about users, but about a _relationship_, that of the user and the software and what the user is allowed to do with the software.
it thus makes sense to talk about "software freedom".
last but not least, about a single GPL function --- many GPL _libraries_ are licensed less restrictively, under the LGPL.
> "the user is allowed to do with the software"
The GPL does not restrict what the user does with the software.
It can be USED for anything.
But it does restrict how you redistribute it. You have responsibilities if you redistribute it. You must provide the source code, and pass on the same freedoms you received to the users you redistribute it to.
If the LLM can reproduce the entire GPL'd code, with licence and attribution intact, then that would satisfy the GPL, correct?
If the LLM can invent new code, inspired by but not copied from the GPL'd code, that new code does not require a GPL licence.
This is essentially the same as we humans do: I read some GPL code and go "huh, neat architecture!" and then a year later solve a similar problem using an architecture inspired by that code. This is not copying, and does not require me to GPL the code I'm producing. But if I copy-paste a function from the GPL code into my code base, I need to respect the licence conditions and GPL at least part of my code base.
I think the argument that the author is talking about is if the model itself should be GPL'd because it contains copies of GPL'd code that can be reproduced. I don't buy this because that GPL code is not being run as part of the model's functioning. To use an analogy: if I create a code storage system, and then use it to store some GPL code, I don't have to GPL the code storage system itself. As long as it can reproduce the GPL code together with its licence and attribution, then the GPL is not being infringed at any point. The system is not using or running the GPL code itself, it is just storing the GPL code. This is what the LLM is doing.
If you ask a model to output a task scheduler in C, and the training data contained a GPL-licensed implementation of the Fibonacci function in Haskell, the output isn't likely to bear a lot of resemblance to that input. It might even be unrelated enough that adding that function to the training data doesn't affect what the model outputs for that prompt at all.
The nasty thing in terms using code generated by these things is that if you ask the model to output a task scheduler in C and the training data contained a GPL-licensed implementation of a task scheduler in C, the output plausibly could bear a strong resemblance to that input. Without you knowing that. And then if you go incorporate that into something you're redistributing, what happens?
:-)
I agree, I didn't make any statement about what you can do with the software as long as you are licensed to use it
you are allowed to build atomic bombs, nuclear power plants, tanks, whatever.
but only as long as you comply i.e. give your downstream the freedom you've received.
if you fail at that, you're no longer allowed to use the software for anything.
see section 8 Termination for details
... I doubt that would clarify the clarity in clearness.
In a world where he could have just said "Please create a PDP-whatever driver for an IBM-whatever printer," there never would have been a GPL. In that sense AI represents the fulfillment of his vision, not a refutation or violation.
I'd be surprised if he saw it that way, of course.
We're a few years away from that, but it will happen unless someone powerful blocks it.
Code will only ever go in one direction here.
What we should fight is Rules For Thee but Not for Me.
Yeah, well, we'll see what our friends in China have to say about all that.
But also, is the inverse even wrong? If some store has a local CCTV that keeps recordings for a month in case someone robs them, there is no central feed/database and no one else can get them without a warrant, that's not really that objectionable. If Amazon pipes the feed from every Ring camera to the government, that's very different.
By "everywhere" I obviously don't mean "on your private property", I mean "everywhere" as in "on every street corner and so on".
If people are OK with their government putting CCTVs on every lamp post on the promise that they are "secure" and "not used to aggregate data and track people" and "only with warrant" then it's kind of "I told you so" when (not if) all of those things turn out to be false.
> using AI to thwart proprietary lock-in is good and so shouldn't be banned.
It's shortsighted because whoever runs LLMs isn't doing it to help you thwart lock-in. It might for now, but they don't care about anything beyond the short term: they steal content as fast as they can and they lose billions yearly to make sure they are too big to fail. Once they are too big, they will tighten the screws, and they will literally have the freedom to do whatever they want as long as it's legal.
And surprise, helping people thwart lock-in is relatively much less legal (in addition to much less profitable) than preventing people from thwarting lock-in.
It's kind of bizarre to see people thinking these LLM operators will be somehow on the side of freedom and copyleft considering what they are doing.
If they're on each person's private property then they're on every street corner and so on. The distinction you're really after is between decentralized and centralized control/access, which is rather the point.
> It's kind of bizarre to see people thinking these LLM operators will be somehow on the side of freedom and copyleft considering what they are doing.
You're conflating the operators with the thing itself.
LLMs exist and nobody can un-exist them now because they're really just code and data. The only question is, are they a thing that does what you want because there are good published models that anybody can run on their own hardware, or are the only up-to-date ones corporate and censored and politically compromised by every clodpoll who can stir up a mob?
> LLMs exist and nobody can un-exist them now because they're really just code and data
"Malware exists and nobody can unexist it now because it's just code and data"
But that's the thing you were implying couldn't be distinguished. Every small shop having its own CCTV is different than one company having cameras everywhere, even if they both result in cameras all over the place.
> "Malware exists and nobody can unexist it now because it's just code and data"
Which is accurate. Even if you tried to ban malware, or LLMs, they would still be produced by China et al. And malware is by definition bad, so you're also omitting the thing that matters again, which is that we should not ban the LLMs that aren't bad.
> the LLMs that aren't bad
which LLM is not made by stealing copyleft code?
If the incumbent copyright interests insist on picking an unnecessary fight with LLMs or AI in general, they will and must lose decisively. That applies to all of the incumbents, from FSF to Disney. Things are different now.
I still don't understand how copyright maximalism has suddenly become so popular on a site called "Hacker News." But it's early here, and I'm sure I'm not done learning exciting new things today.
Malware isn't bad for Russian crime syndicates, but we're generally content to regard them as the adversary and not care about their satisfaction. That isn't the case for someone who wants to use an LLM to fix a bug in their printer. They're doing the good work and people trying to stop them are the adversary.
> which LLM is not made by stealing copyleft code?
Let's drive a stake through this one by going completely the other way. Suppose you train an LLM only on GPL code, and all the people distributing and using it are only distributing its output under the GPL. Regardless of whether that's required, it's allowed, right? How would you accuse any of those people of a GPL violation?
> That isn't the case for someone who wants to use an LLM to fix a bug in their printer. They're doing the good work
they take advantage of a temporary situation for a good outcome, but longer term they benefit those people doing shady stuff and concentrate power in them.
> Suppose you train an LLM only on GPL code, and all the people distributing and using it are only distributing its output under the GPL
That seems fair? but that's not what happens except by accident.
So getting your own LLM rewrite to an equivalent point (or, rather, less buggy as that's the whole point!) would be rather expensive; at the absolute very least, certainly more expensive than if you still had the original source code to reference or modify (even if an LLM is the thing doing those). Having the original source code is still just strictly unconditionally better.
Never mind the question of how you even get your LLM to reverse-engineer & interact with & observe the physical hardware of your printer, and whatever wasted ink during debugging of the reinvention of what the original driver already did correctly.
You could probably even train one to do that in particular. Take existing open source code and its assembly representations as training data and then treat it like a language translation task. Use the context to guess what the variable names were before the original compiler discarded them etc.
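As a rough sketch of how those (assembly, source) pairs might be assembled, assuming gcc and objdump are available on the PATH; the file layout and the fine-tuning step itself are left out:

    # Compile each C file, disassemble it, and emit (asm, source) pairs that a
    # model could be fine-tuned on as a "translation" task.
    import json
    import subprocess
    import tempfile
    from pathlib import Path

    def asm_for(c_file: Path) -> str:
        with tempfile.TemporaryDirectory() as tmp:
            obj = Path(tmp) / "out.o"
            subprocess.run(["gcc", "-O2", "-c", str(c_file), "-o", str(obj)], check=True)
            dump = subprocess.run(["objdump", "-d", str(obj)],
                                  capture_output=True, text=True, check=True)
            return dump.stdout

    def build_pairs(src_dir: str, out_path: str) -> None:
        with open(out_path, "w") as out:
            for c_file in Path(src_dir).rglob("*.c"):
                try:
                    pair = {"input": asm_for(c_file), "target": c_file.read_text()}
                except (subprocess.CalledProcessError, UnicodeDecodeError):
                    continue  # skip files that don't compile or decode standalone
                out.write(json.dumps(pair) + "\n")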
I can imagine that a process like what you describe, where a model is trained specifically on .asm / .c file pairs, would be pretty effective.
That said, chatgpt currently seems to fail even basic things - completely missed the `thrM` path being possible here: https://chatgpt.com/share/69296a8e-d620-800b-8c25-15f4260c78... https://dzaima.github.io/paste/#0jZJNTsMwEIX3OcWoSFWCqrhN0wb... and that's only basic bog-standard branching, no in-memory structures or stack usage (such trivial problems could be handled by using an actual proper disassembler before throwing an LLM at that wall, but of course that only solves the easy part)
https://chatgpt.com/s/t_6929f00ff5508191b75f31e219609a35 (5.1 Pro Thinking)
https://claude.ai/share/7d9caa25-14f7-4233-b15c-d32b86e20e09 (Opus 4.5)
https://docs.google.com/document/d/1C0lSKbLSZOyMWnGgR0QhZh3Q... (Gemini 3 Pro Thinking)
All of them recognized the thrM exception path, although I didn't review them for correctness.
That being said, I imagine the major showstopper in real-world disassembly tasks would simply be the limited context size. As you suggest, a standard LLM isn't really the best tool for the job, at least not without assistance to split up the task logically.
I have paid pro accounts on all three, but for some reason Gemini is no longer allowing links to be shared on some queries including this one. All it would let me do is export it to Docs, which I thought would be publicly visible but evidently isn't.
Like, here's a ~2.7x larger function: https://dzaima.github.io/paste/#0jVdNjxs3DL3nVwzQo30gRY00ChY... (is https://github.com/dzaima/CBQN/blob/90c1dc09e88c5324373281f6... with a bunch of inlining)
(I'm keeping the other symbol names there even though they'd likely not be there for real closed-source things, under the assumption that for a full thing you'd have something doing a quick naming pass beforehand)
This is still very much on the trivial end, but it's already dealing with in-memory structures, three inlined memory allocation calls (two half-deduplicated into one by the compiler, and the compiler initializing a bunch of the objects' fields in one store), and a bunch of inlined tagged object manipulations; should definitely be possible to get some disassembly from that, but figuring out the useful abstractions that make it readable without pain would probably take aggregating over multiple functions.
(unrelated notes of your previous results - claude indeed guessed correctly that it's BQN! though CBQN is presumably wholesale in its training data anyway; it did miss that the function has an unused 0th arg (a "this" pointer), which'd cause problems as the function is stored & used as a generic function pointer (this'd probably be easily resolved when attempting to integrate it in a wider disassembly though); neither claude nor cgpt unified the `x>>48==0xfff7` and `(x&0xffff000000000000)==0xfff7000000000000` which do the exact same thing but clang is stupid [https://github.com/llvm/llvm-project/issues/62145] and generates different things; and of course a big question is how many such intricacies could be automatically reduced down with a full codebases worth of context, cause understandably the single-function disassemblies are way way more verbose than the original)
An AI could never do a clean room implementation of anything, since it was not trained on clean room materials alone. And it never can be, for obvious reasons. I don't think there's an easy way out here.
So, it doesn't matter if an AI can or cannot do a clean room implementation. Unless it is a patent or trade secret violation, clean room implementation doesn't matter.
If Microsoft misappropriates GPL code how exactly is that "stealing" from me, the user, of that code? I'm not deprived in any way, the author is, so I can't make sense of your premise here.
> Freedom of software means nothing.
Software is information. Does "freedom of information" mean nothing? I think you're narrowing concepts here into something not particularly useful or reflective of reality.
> Users get the freedom to enjoy the software how they like.
The freedom is to modify the code for my own purposes. This is not at all required to plainly "enjoy" the software. I instead "enjoy a particular benefit."
> Why would my software which only contains a single function not be fair use?
Because fair use implies educational, informational, or transformational outputs. Your software is none of those things.
Not necessarily a “user of an app” but a user of this “suite of source code”.
It turns out that most people who say that value free market capitalism never really did.
No one benefits from locking up 99.999% of all source code, including most of Microsoft's proprietary code and all GPL code.
No one.
When it comes to AI, the only foreseeable outcome to copyright maximalism is that humans will have to waste their time writing the same old shit, over and over, forever less one day [1], because muh copyright!!!1!
1: https://en.wikipedia.org/wiki/Copyright_Term_Extension_Act
Nahh, AI companies had plenty of money to pay for access they simply chose not to.
YouTube on the other hand has permission from everyone uploading videos to make derivative works barring some specific deal with a movie studio etc.
Now there’s a few exceptions like large GPL works but again diminishing returns here, you don’t need to train on literally everything.
Yes you are. You are just deprived of something you apparently don't recognize or value, but that doesn't make it ok.
The original author was also stolen from and that doesn't rely on your understanding or perception.
The original author set some terms. The terms were not money, but they are terms exactly like money. They said "you can have this, and the only price is you have to make the source, and the further right to redistribute, available to any user you hand a binary to."
Well MS handed you a binary and did not also hand you the source or the right to redistribute.
That stole from both you and the original author and me who might otherwise have benefited from your own child work. The fact that you personally apparently were never going to make use of something they owe you doesn't change the fact that they owe you, and the original author and me.
We are rarely capable of valuing the freedoms we have never been deprived of.
To be privileged is to live at the quiet centre of a never-ending cycle: between taking a freedom for granted (only to eventually lose it), and fighting for that freedom, which we by then so desperately need.
And as Thomas Paine put it: "Those who expect to reap the blessings of freedom, must, like men, undergo the fatigues of supporting it."
The user in this example is deprived of freedoms 1, 2, and 3 (and probably freedom 0 as well if there are terms on what machines you can run the derivative binary on).
Read more here: https://www.gnu.org/philosophy/free-sw.html
Whether or not the user values these freedoms is another thing entirely. As the software author, licensing your code under the GPL is making a conscious effort to ensure that your software is and always will be free (not just as in beer) software.
Below are the four freedoms for those who are interested. Straight from the horse's mouth: https://www.gnu.org/philosophy/free-sw.html
The freedom to run the program as you wish, for any purpose (freedom 0).
The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
The freedom to redistribute copies so you can help others (freedom 2).
The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.

The other factor of copyright, which is relevant, is how material is obtained. If the material is publicly accessible without protection, you have no reasonable expectation of exclusive control over its use. If you don't want AI training to be done on your work, you need to put access to it behind explicit authentication with a legally-binding user agreement prohibiting that use-case. Do note that this would lose your project's status as open-source.
We also treat public goods found over the internet however we want, as if the World Intellectual Property Organization Copyright Treaty and the Berne Convention for the Protection of Literary and Artistic Works aren't real, or as if we can because we are operating in international waters, selling products to other sailors living exclusively in international waters /s
The fact that a slippery slope is slippery doesn't make it not a slope.
The argument that GPL code is a tiny minority of what's in the model makes no sense to me. (To be clear, you're not making this argument.) One book is a tiny minority of an entire library, but that doesn't mean it's fine to copy that book word for word simply because you can point to a Large Library Model that contains it.
LLMs definitely store pretty high-fidelity representations of specific facts and procedures, so for me it makes more sense to start from the gzip end of the slope and slide the other way. If you took some GPL code and renamed all the variables, is that suddenly ok? What if you mapped the code to an AST and then stored a representation of that AST? What if it was a "fuzzy" or "probabilistic" AST that enabled the regeneration of a functionally equivalent program but the specific control flow and variable names and comments are different? It would be the analogue of (lossy) perceptual coding for audio compression, only instead of "perceptual" it's "functional".
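To make the first of those thought experiments concrete, here is a minimal sketch using Python's standard-library ast module (ast.unparse needs Python 3.9+); the gcd snippet and the v0/v1 placeholder names are purely illustrative, not taken from any real model or codebase:

```python
# A minimal sketch of the "rename all the variables" thought experiment,
# using Python's standard-library ast module. The gcd snippet and the
# v0/v1 placeholder names are purely illustrative.
import ast

class RenameVariables(ast.NodeTransformer):
    """Replace every variable and parameter name with a generic placeholder."""
    def __init__(self):
        self.mapping = {}

    def _rename(self, name):
        return self.mapping.setdefault(name, f"v{len(self.mapping)}")

    def visit_Name(self, node):   # uses of variables in the body
        node.id = self._rename(node.id)
        return node

    def visit_arg(self, node):    # function parameters
        node.arg = self._rename(node.arg)
        return node

original = """
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a
"""

tree = RenameVariables().visit(ast.parse(original))
print(ast.unparse(tree))  # same algorithm and control flow, new identifiers
```

The output has the same algorithm and the same control flow with none of the original identifiers; each further step down that slope (a stored AST, then a fuzzy or probabilistic AST) keeps less of the literal text while preserving roughly the same function.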
This is starting to look more and more like what LLMs store, though they're actually dumber and closer to the literal text than something that maintains function.
It also feels a lot closer to 'gzip' than 'wc', imho.
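And to make the gzip/wc contrast concrete, a tiny sketch (the byte string is just a stand-in): a gzip'd blob still contains the work verbatim, while wc-style counts are a genuine summary from which nothing can be recovered.

```python
import gzip

text = b"int main(void) { return 0; }\n"   # stand-in for some source file

blob = gzip.compress(text)
assert gzip.decompress(blob) == text       # verbatim recovery: the work is "in there"

stats = {"lines": text.count(b"\n"),
         "words": len(text.split()),
         "bytes": len(text)}
print(stats)                               # counts only: the work cannot be recovered
```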
Specific facts and procedures are explicitly NOT protected by copyright. That's what made cloning the IBM BIOS legal. It's what makes emulators legal. It's what makes the retro-clone RPG industry legal. It's what made Google cloning the Java API legal.
> If you took some GPL code and renamed all the variables, is that suddenly ok?
Generally no, not sufficiently transformative.
> What if you mapped the code to an AST and then stored a representation of that AST?
Generally no; distributing software in another representation (a compiled binary, say) without a license is still considered a copyright violation.
> What if it was a "fuzzy" or "probabilistic" AST that enabled the regeneration of a functionally equivalent program but the specific control flow and variable names and comments are different?
This starts to get a lot fuzzier. De-compilation is legal. Creating programs that are functionally identical to other programs is (generally) legal. Creating an emulator for a system is legal. Copyright protects a specific fixed expression of a creative idea, not the idea itself. We don't want to live in the world where Wine is a copyright violation.
> This is starting to look more and more like what LLMs store, though they're actually dumber and closer to the literal text than something that maintains function.
And yet, so far no one has brought a legal case against the AI companies for being able to extract their copyright protected material from the models. The few early examples of that happening are things that model makers explicitly attempt to train out of their models. It's unwanted behavior that is considered a bug, not a feature. Further the fact that a machine is able to violate copyright does not in and of itself make the machine itself a violation of copyright. See also Xerox machines, DeCSS, Handbrake, Plex/Jellyfin, CD-Rs, DVRs, VHS Recorders etc.
No argument there, and I'm grateful for the limits of copyright. That part was only for describing what LLM weights store -- just because the literal text is not explicitly encoded doesn't mean that facts and procedures aren't.
> Copyright protects a specific fixed expression of a creative idea, not the idea itself.
Right. Which is why it's weird to talk about the weights being derivative works. Weird but perhaps not wrong: if you look at the most clear-cut situation where the LLM is able to reproduce a big chunk of input bit-for-bit, then the fact that its basis of representation is completely different doesn't feel like it matters much. An image that is lossily compressed, converted to a bitstream, and encoded in DNA is very very different than the input, but if an image can be recovered that is indistinguishable or barely distinguishable from the original, I'd still call that copying and each intermediate step a significant but irrelevant transformation.
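As a rough sketch of that point (this assumes the Pillow library; the in-memory gradient image is just a stand-in): the lossy bitstream shares essentially nothing with the exact one, yet the picture recovered from it is barely distinguishable from the original.

```python
from io import BytesIO
from math import sqrt
from PIL import Image  # assumes Pillow is installed

# Build a stand-in "original" image in memory (a simple gradient).
original = Image.new("RGB", (256, 256))
original.putdata([(x, y, (x + y) % 256) for y in range(256) for x in range(256)])

png_buf, jpeg_buf = BytesIO(), BytesIO()
original.save(png_buf, format="PNG")                # exact representation
original.save(jpeg_buf, format="JPEG", quality=30)  # lossy representation

recovered = Image.open(BytesIO(jpeg_buf.getvalue()))

# The two byte streams have essentially nothing in common...
print(len(png_buf.getvalue()), len(jpeg_buf.getvalue()))

# ...but the decoded image differs from the original by only a small
# per-channel RMS error, i.e. it is visually near-identical.
rms = sqrt(sum((a - b) ** 2
               for pa, pb in zip(original.getdata(), recovered.getdata())
               for a, b in zip(pa, pb)) / (256 * 256 * 3))
print(rms)
```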
> This starts to get a lot fuzzier. De-compilation is legal.
I'm less interested in what the legal system is currently capable of concluding. I personally don't think the laws have caught up to the present reality, so present-day legality isn't the crucial determinant in figuring out how things "ought" to work.
If an LLM is completely incapable of reproducing input text verbatim, yet could become so through targeted ablation (that does not itself incorporate the text in question!), then does it store that text or not?
I'm not sure why I'm even debating this, other than for intellectual curiosity. My opinion isn't actually relevant to anyone. Namely: I think the general shape of how this ought to work is pretty straightforward and obvious, but (1) it does not match current legal reality, and more importantly, (2) it is highly inconvenient for many stakeholders (very much including LLM users). Not to mention that (3) although the general shape is pretty clear in my head, it involves many many judgement calls such as the ones we've been discussing here, and the general shape of how it ought to work isn't going to help make those calls.
Sure, as a broad rule of thumb that works. But the ability of a machine to produce a copyright violation doesn't mean the machine itself, or distributing the machine, is a copyright violation. To take an extreme example, if we take a room full of infinite monkeys, put them on infinite typewriters, and they generate a Harry Potter book, that doesn't mean Harry Potter is stored in the monkey room. If we have a random sound generator that produces random tones from the standard Western musical note palette and it generates the bass line from "Under Pressure", that doesn't mean our random sound generator contains or is a copy of "Under Pressure", even if we encoded all the same information and procedures for generating those individual notes at those durations in the data and procedures we gave the machine.
> If an LLM is completely incapable of reproducing input text verbatim, yet could become so through targeted ablation (that does not itself incorporate the text in question!), then does it store that text or not?
I would argue not. Just like a Xerox machine doesn't contain the books you make copies of with it, and Handbrake doesn't contain the DVDs you use when you make a copy there.
I would further argue that copyright infringement is inherently a "human" act. It's sort of encoded in the language we use to talk about it (e.g. "fair use") but it's also something of a "if a tree falls in the middle of the woods" situation. If an LLM runs in an isolated room in an isolated bunker with no one around and generates verbatim copies of the Linux kernel, that frankly doesn't matter. On the other hand, if a Microsoft employee induces an LLM to produce verbatim copies of the Linux kernel, that does, especially if they did so with the intent to incorporate Linux kernel code into Windows. Not because of the LLM, but because a person made the choice to produce a copy of something they didn't have the right to make a copy of. The method by which they accomplished that copy is less relevant than making the copy at all, and that in turn is less relevant than the intent of making that copy for a purpose which is not allowed by copyright law.
> I'm not sure why I'm even debating this, other than for intellectual curiosity.
Frankly, that's the only reason to debate anything. 99% of the time, you as an individual will never have the power to influence the actual legal decisions made. But an intellectually curious conversation is infinitely more useful, not just to you and me but to other readers, than another retread of the "AI is slop" / "you're just jealous you can't code your way out of a paper bag" arguments that pervade so much discussion around AI. Or, worse, yet another "I used an LLM for a clearly stupid thing and it was stupid" or "I used an LLM to replace all my employees and I'm sure it's going to go great" blog post. For whatever acrimony there might have been in our interchange here, I'm sorry, because this sort of discussion is the only good way to exercise our thoughts on an issue and really test them out ourselves. It's easy to have a knee-jerk opinion. It's harder to support that opinion with a philosophy and reasoning.
For what it's worth, I view the LLM/AI world as the best opportunity we've had in decades to really rethink and scale back or change how we deal with intellectual property: the ever-expanding copyright terms, the sometimes bizarre protections of what seem to be blindingly obvious ideas. The technological age has demonstrated a number of weaknesses in the traditional systems and views. And frankly I think it's also demonstrated that many prior predictions of certain doom if copyright wasn't strictly enforced have been overwrought, and even where they haven't, the actual result has been better for more people. Famously, IBM would have very much preferred to win the BIOS copyright issue. But I think so many people in the modern computer and tech industry owe their very careers to the effects of that decision. It might have been better for IBM if IBM had won, but it's not clear at all that it would have been better for "[promoting] the Progress of Science and useful Arts".
We could live in a world where we recognize that LLMs and AIs are going to fundamentally change how we approach creative works. We could recognize that the intent of "[promoting] the Progress of Science and useful Arts" is still a relevant goal and something we can work to make compatible with the existence of LLMs and AI. To pitch my crazy idea again, we could:
1) Cut the terms of copyright substantially, back down to 10 or 15 years by default.
2) Offer a single extension that doubles that term, but only on the condition that the work is submitted to a central "library of congress" data set.
3) This could be used to produce known-good and clean data sets for AI companies and organizations to train models from, with the protection that any model trained from this data set is protected from copyright infringement claims for works in the data set. Heck, we could even produce common models. This would save massive amounts of power and resources by cutting the need for everyone who wants to be in the AI space to go out and acquire, digitize, and build their own library. The MNIST digit dataset is effectively the "hello world" set for anyone learning computer-vision AI. Let's do that for all sorts of AI.
4) The data sets and models would be provided for a nominal fee; this fee would be used to pay royalties to people whose works are still under copyright and are in the data sets, proportional to the recency and quantity of work submitted. A cap would need to be put in place to prevent flooding the data set to game the royalties. These royalties would be part of recognizing the value the original works contributed to the data set, and would act as a further incentive to contribute works to the system, and to contribute them sooner.
We could build a system like this, or tweak it, or even build something else entirely. But only if we stop trying to cram how we treat AI and LLMs and the consequences of this new technology into a binary "allowed / not allowed" outcome as determined by an aging system that has long needed an overhaul.
So please, continue to debate for intellectual curiosity. I'd rather spend hours reading a truly curious exploration of this than another manifesto about "AI slop".
Well, the difference is that copyright law applies to work fixed in a tangible medium of expression. This covers, e.g., model weights on a hard drive, but not the human brain. If the model is able to reproduce others' work verbatim (like the example the article brings up of the song lyrics), then under copyright law that's unauthorized reproduction. It doesn't matter that the data is expressed via probabilistic weights because, due to past lobbying/lawsuits by the software industry to get compiled binary code covered by copyright, reproduction can include copies that aren't directly human readable.
> If the material is publicly accessible without protection, you have no reasonable expectation to exclusive control over its use.
There’s over 20 years of successful GPL infringement lawsuits over unlicensed use of publicly available GPL code that disagrees with this point.
Public trading of most trade secrets along with their owner corporations is also GPLish.
A lot of it boils down to whether training an LLM is a breach of copyright of the training materials which is not specific to GPL or open source.
Lobbying is for people trying to stop them; externalities are for the little people.
This is a big difference that has already bitten them.
What "lobbied"? Copyright law hasn't materially changed since AI got popular, so I'm not sure where these lobbying efforts are supposed to be showing up. If anything, the companies that have lobbied hard in the past (e.g. media companies) are opposed to the current status quo, which seems to favor AI companies.
Once training is established as fair use, it doesn't really matter if the license is MIT, GPL, or a proprietary one.
https://en.wikipedia.org/wiki/Fair_use#/media/File:Fair_use_...
and it is certainly not part of the Berne Convention
in almost every country in the world, even timeshifting using your VCR and ripping your own CDs is copyright infringement
> The private copying exception allows a person to reproduce a work of the mind for their private use, which implies personal use, but also use within their private circle, including the family setting.
seems to be only for personal use?
Fair dealing in the UK and other countries is broader, and US fair use broader still.
(which is the linch-pin of the sloppers)
Is this legally settled?
With proprietary or, more importantly, single-owner code, it's far easier for this to end up in a settlement rather than being dragged out into an actual ruling, enforcement action, and establishment of precedent.
That's the key detail. It's not specific to GPL or open source, but if you want to see these orgs held to account and some precedent established, focusing on GPL and FOSS-licensed code is the clearest path to that.
> A GPL license is a contract in most other countries. Just not US probably.
Not just the US. It may vary with the version of the GPL too. Wikipedia claims it's a civil law vs common law country difference - not sure the citation shows that, though.