Interesting excerpt:
> “We will have a trial on the pirated copies used to create Anthropic’s central library and the resulting damages,” Judge Alsup wrote in the decision. “That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for theft but it may affect the extent of statutory damages.”
The language of “pirated” and “theft” is from the article. If they realized their mistake and purchased copies after the fact, why should that be insufficient?
Just as a matter of societal norms, I don't think you want people, say, stealing a car and then coming back a month later with the money.
Regardless, I don't think the car is an apt metaphor here. Cars are an important utility and gatekeeping cars arguably holds society back; art is creative expression, and no one goes hungry because they didn't have $10 for the newest book.
We also have libraries already for this reason, so why not expand on that instead of relinquishing sharing of knowledge to a private corporation?
Copyright infringement does not deprive the copyright owner of its property and is not criminal. So in this case only the lawsuit part applies. The owner is only entitled to the monetary damages, which is the lost sale. But in this case the sale price was paid to the owner 1 month later, so the only real damages will be the interest the publisher could have earned if they had got their money one month earlier.
1. You're assuming this was some good faith "they didn't know they were stealing" factor. They used someone else's products for commercial purposes. I'm not so charitable in my interpretation.
2. I'm not absolved of theft just because I go back and put money in the register. I still stole, intentionally or not.
I don't think that's exactly the case. A lot of the HN crowd is very much against the current iterations of copyright law, but is much more against rules that they see as being unfairly applied. For most of us, we want copyright reform, but short of that, we want it to at least pretend to be used for what it is usually claimed to be for: protecting small artists from large, predatory companies.
Choosing someone's bitstrings is like choosing to harvest someone's fields in a world with an infinite expanse of fertile fields. You picked theirs instead of finding a space in the infinite expanse to farm on your own.
If you start writing something, you'll never generate a copyrighted work at random. When the work isn't available, nothing is taken away from you, even if you were strictly forbidden from reproducing it.
Choosing someone's particular bitstring is only done because there's someone who has expended effort in preparing it.
So what is he going to do about the initial copyright infringement? Will the perpetrators get the Aaron Swartz treatment?
Does this imply that distributing open-weights models such as Llama is copyright infringement, since users can trivially run the model without output filtering to extract the memorized text?
[1]: https://storage.courtlistener.com/recap/gov.uscourts.cand.43...
It's sort of like distributing a compendium of book reviews. Many of the reviews have quotes from the book. If there are thousands of reviews, you could potentially reconstruct the whole book, but that's not the point of the thing and so it makes sense for the infringing thing to be "using it to reconstruct the whole book" rather than "distributing the compendium".
And then Anthropic fended off the argument that their service was intended for doing the former because they were explicitly taking measures to prevent that.
Maybe this is a misrepresentation of the actual Anthropic case, I have no idea, but it’s the scenario I was addressing.
So it totally isn't a warez streaming media server but AI?
I'm guessing since my net worth isn't a billion plus, the answer is no
If you xor some data with random numbers, both the result and the random numbers are indistinguishably random and there is no way to tell which one came out of a random number generator and which one is "derived" from a copyrighted work. But if you xor them together again the copyrighted work comes out. So if you have Alice distribute one of the random looking things and Bob distribute the other one and then Carol downloads them both and reconstructs the copyrighted work, have you created a scheme to copy whatever you want with no infringement occurring?
Of course not, at least Carol is reproducing an infringing work, and then there are going to be claims of contributory infringement etc. for the others if the scheme has no other purpose than to do this.
Meanwhile this problem is also boring because preventing anyone from being the source of infringing works isn't a thing anybody has been able to do since at least as long as the internet has allowed anyone to set up a server in another jurisdiction.
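For the curious, the xor trick described above is just a two-party one-time pad, and it really is this short (toy Python, purely illustrative):

```python
import os

def split(work: bytes) -> tuple[bytes, bytes]:
    """Split data into two shares, each indistinguishable from random noise."""
    pad = os.urandom(len(work))                       # Alice's share: pure randomness
    share = bytes(a ^ b for a, b in zip(work, pad))   # Bob's share: work XOR pad
    return pad, share

def combine(pad: bytes, share: bytes) -> bytes:
    """XOR the two shares together to recover the original data."""
    return bytes(a ^ b for a, b in zip(pad, share))

alice, bob = split(b"some copyrighted text")
assert combine(alice, bob) == b"some copyrighted text"  # Carol reconstructs it
```

Neither share alone carries any information about the work, which is exactly why the legal analysis has to look at purpose and intent rather than at the bits themselves.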
Purposes which are fair use are very often not at all personal.
(Also, "personal use" that involves copying, creating a derivative work, or using any of the other exclusive rights of a copyright holder without a license or falling into either fair use or another explicit copyright exception are not, generally, allowed, they are just hard to detect and unlikely to be worth the copyright holder's time to litigate even if they somehow were detected.)
Additionally, if you download a model file that contains enough of the source material to be considered infringing (even without using the LLM; assume you can extract the contents directly out of the weights), then it might as well be a .zip with a PDF in it: the model file itself becomes an infringing object. Closed models, by contrast, can be held accountable not for what they store but for what they produce.
This will have the effect of empowering countries (and other entities) that don't respect copyright law, of course.
The copyright cartel cannot be allowed to yank the handbrake on AI. If they insist on a fight, they must lose.
If you can successfully demonstrate that, then yes, it is copyright infringement, and successfully doing so would be worthy of a NeurIPS or ACL paper.
I'm not so sure about this one. In particular, presuming that it is found that models which can produce infringing material are themselves infringing material, the ability to distill models from older models seems to suggest that the older models can actually produce the new, infringing model. It seems like that should mean that all output from the older model is infringing because any and all of it can be used to make infringing material (the new model, distilled from the old).
I don't think it's really tenable for courts to treat any model as though it is, in itself, copyright-infringing material without treating every generative model like that and, thus, killing the GPT/diffusion generation business (that could happen but it seems very unlikely). They will probably stick to being critical of what people generate with them and/or how they distribute what they generate.
The amount of the source material encoded does not, alone, determine if it is infringing, so this noun phrase doesn't actually mean anything. I know there are some popular myths that contradict this (the commonly-believed "30-second rule" for music, for instance), but they are just that, myths.
Not if there isn't infringement. Infringement is a question that precedes damages, since "damages" are only those harms that are attributable to the infringement. And infringement is an act, not an object.
If training a general use LLM on books isn't infringement (as this decision holds), then there by definition cannot be damages stemming from it; the amount of the source material that the model file "contains" doesn't matter.
It might matter to whether it is possible for a third party to easily use the model for something that would be infringement on the part of the third party, but that would become a problem for people who use it for infringement, not the model creator, and not for people who simply possess a copy of the model. The model isn't "an infringing object".
This is still a weird language shift that actively promotes misunderstandings.
The weights are the LLM. When you say "model", that means the weights.
> the court dismissed “nonsensical” claims that Meta’s LLaMA models are themselves infringing derivative works.
See: https://www.eff.org/deeplinks/2025/02/copyright-and-ai-cases...
The goal of copyright is to make sure people can get fair compensation for the amount of work they put in. LLMs automate plagiarism on a previously unfathomable scale.
If humans spend a trillion hours writing books, articles, blog posts and code, then somebody (a small group of people) comes and spends a million hours building a machine that ingests all the previous work and produces output based on it, who should get the reward for the work put in?
The original authors together spent a million times more effort (normalized for skill) and should therefore get a million times bigger reward than those who built the machine.
In other words, if the small group sells access to the product of the combined effort, they only deserve a millionth of the income.
---
If "AI" is as transformative as they claim, they will have no trouble making so much money they they can fairly compensate the original authors while still earning a decent profit. But if it's not, then it's just an overpriced plagiarism automator and their reluctance to acknowledge they are making money on top of everyone else's work is indicative.
This is a bit distorted. A better summary: the primary purpose of copyright is to induce and reward authors to create new works and to make those works available to the public to enjoy.
The ultimate purpose is to foster the creation of new works that the public can read and written culture can thrive. The means to achieve this is by ensuring that the authors of said works can get financial incentives for writing.
The two are not in opposition but it's good to be clear about it. The main beneficiary is intended to be the public, not the writers' guild.
Therefore, when some new factor such as LLMs enters the picture, we have to step back and see how the intent to benefit the reading public can be pursued in the new situation. That certainly has to take into account who will produce new written works and how, but that is not the main target; it can be an instrumental subgoal.
Fundamentally, fair compensation is based on the amount of work put in (obviously taking skill/competence into account but the differences between people in most disciplines probably don't span a single order of magnitude, let alone several).
The ultimate goal should be to prevent people who don't produce value from taking advantage of those who do. And among those who do, that they get compensated according to the amount of work and skill they put in.
Imagine you spend a year building a house. I have a machine that can take your house and materialize a copy anywhere on earth for free. I charge people (something between 0 and the cost of building your house the normal way) to make them a copy of your house. I can make orders of magnitude more money this way than you. Are you happy about this situation? Does it make a difference how much I charge them?
What if my machine only works if I scan every house on the planet? What if I literally take pictures of it from all sides, then wait for you to not be home and X-ray it to see what it looks like inside?
You might say that you don't care because now you can also afford many more houses. But it does not make you richer. In fact, it makes you poorer.
Money is not a store of value. If everyone has more money but most people only have 2x more and a small group has 1000x more, then the relative bargaining power has changed: the small group is better off and the large group is worse off. This is what undetectable cheap mass plagiarism leads to for all intellectual work.
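To put toy numbers on that (made up, just to show the direction of the effect):

```python
# 1000 people; a group of 10 ends up 1000x richer, everyone else 2x richer.
population, group = 1000, 10

def majority_share(majority_each: float, minority_each: float) -> float:
    """The large group's share of all money, i.e. its relative bargaining power."""
    total = (population - group) * majority_each + group * minority_each
    return (population - group) * majority_each / total

print(majority_share(100, 100))        # before: ~0.99
print(majority_share(200, 100_000))    # after:  ~0.17
```

Everyone's nominal wealth went up, but the large group's share of the total, and with it their bargaining power, collapsed.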
---
I wrote a lot of open source code, some of it under permissive licenses, some GPL, some AGPL. The conditions of those licenses are that you credit me. Some of them also require that if you build on top of my work, you release your work under the same license.
LLMs launder my code to make profit off of it without giving me anything (while other people make profit, thus making me poorer) and without crediting me.
LLMs also take away the rights of the users of my code: the (A)GPL forces anyone who builds on top of my work to release the code when asked, but with LLM-laundered code this right no longer seems to exist, because who do you even ask?
LLMs are models of languages, which are models of reality. If anyone deserves compensation, it's humanity as a whole, for example by nationalizing, or whatever the global equivalent is, LLMs.
Approximately none of the value of LLMs, for any user, is in recreating the text written by an author. Authors have only ever been entitled to (limited) ownership of their expression; copyright has never given them ownership of facts.
In this case, the plaintiffs alleged that Anthropic's LLMs had memorized the works so completely that "if each completed LLM had been asked to recite works it had trained upon, it could have done so", "almost verbatim". The judge assumed for the sake of argument that the allegation was true, and ruled that the conduct was fair use anyway due to the existence of an effective filter. Therefore there was no need to determine whether the allegation was actually true.
So - yes, in the sense that the ruling suggests that distributing an open-weight LLM that memorized copyrighted works to that extent would not be fair use.
But no, in the sense that it's not clear whether any LLMs, especially open-weight LLMs, actually memorize book-length works to that extent. Even the recent study about Llama memorizing a Harry Potter book [1] only said that Llama could reproduce 50-token snippets a decent percentage of the time when given the preceding 50 tokens. That's different from actually being able to recite any substantial portion of the book. If you asked Llama for that, the output would quickly diverge from the original text, and it likely wouldn't be able to get back on track without being re-prompted from the ground truth as the study did.
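For concreteness, here's roughly the kind of probe such a study runs, heavily simplified (not the study's actual code; the model name is illustrative, and the real study measured reproduction probabilities rather than a single greedy decode):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

NAME = "meta-llama/Llama-2-7b-hf"  # illustrative; the study tested several models
tok = AutoTokenizer.from_pretrained(NAME)
model = AutoModelForCausalLM.from_pretrained(NAME)

def continues_verbatim(text: str, prefix_len: int = 50, cont_len: int = 50) -> bool:
    """Feed the model `prefix_len` ground-truth tokens, greedily decode
    `cont_len` more, and check whether they match the source exactly."""
    ids = tok(text, return_tensors="pt").input_ids[0]
    prefix = ids[:prefix_len].unsqueeze(0)
    truth = ids[prefix_len:prefix_len + cont_len]
    out = model.generate(prefix, max_new_tokens=cont_len, do_sample=False)
    return torch.equal(out[0, prefix_len:prefix_len + cont_len], truth)
```

Passing this check for many 50-token windows is much weaker than reciting a book end to end: each window is re-anchored on ground truth, so the model never has to stay on track for more than 50 tokens at a time.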
On the other hand, in the case where the New York Times is suing OpenAI, the NYT has alleged that ChatGPT was able to recite extensive portions of NYT articles verbatim. If true, this might be more dangerous, since news articles are not as long as books but they're equally eligible for copyright protection. So we'll see how that shakes out.
Also note:
- Nothing in the opinion sets formal precedent because it's a district court. But the opinion might still influence later judges.
- See also riskable's sibling comment for another case where a judge addressed the issue more head-on (but wasn't facing the same kind of detailed allegations, I don't think; haven't checked).
You have to call it "Starcrash" (https://www.imdb.com/title/tt0079946/?ref_=ls_t_8). Then it's legal.
It’s not as simple as it sounds, since I’m sure scraping is against Reddit’s terms and conditions, but if those posts are made publicly available without the scraper actually agreeing to anything, is there even a valid contract to breach?
Will be interesting to see how that plays out.
I doubt the exact-replica stuff will stand, as technically it was only achievable via advanced prompt engineering (hacking), not simply asking for a replica. So their two other arguments boil down to: scraping a news database = infringement, and LLM output = derivative works.
If the US makes it illegal to train LLMs on copyrighted data, the US will find a solution and not just give up and wait half a decade to see what China does in the meantime.
And the easiest option: Legislation change. If it's completely decided that the current law blocks LLMs from working in the US, the industry will lobby to amend the copyright law (which is not immutable) to add a carveout for it.
You're assuming that people will just give up. People never gave up, why would they now?
I'm not sure why this alone is considered a separate issue from training the AI with books. Buying a copy of a copyrighted work doesn't inherently convey 'fair use rights' to the purchaser. If I buy a work, read it, sell it, and then publish a review or parody of it, I don't infringe copyright. Why does mere possession of an unauthorized copy create a separate triable matter before the court?
Keep in mind, you can legally engineer EULAs in such a way that merely purchasing the work surrenders all of your fair use rights. So this could wind up being effectively: "AI training is fair use for works purchased before June 24th, 2025, everything after is forbidden, here's your brand new moat OpenAI"
Which suggests that, at least in the judge's opinion, 'fair use rights' do exist in a sense, but it's about when you read the book, not when you publish.
But that's not settled precedent. Meta is currently arguing the opposite in Kadrey v. Meta: they're claiming that they can get away with torrenting training material as long as they only leech (download) and don't seed (upload), because, although the act of downloading (copying) is generally infringement under a Ninth Circuit precedent, they were making a fair use.
As for EULAs, that might be true for e-books, but publishers can't really do anything about Anthropic's new strategy of scanning physical books, because physical books generally don't come with shrinkwrap license agreements. Perhaps publishers could start adding them, but I think that would sit poorly with the public and the courts.
(That's assuming the ruling isn't overturned on appeal, which it easily might be.)
Cassette Tapes and Private Copying Levy.
https://en.wikipedia.org/wiki/Private_copying_levy
Governments didn't ban tapes but taxed them and fed the proceeds back into the royalty system. An equivalent for books might be an LLM tax funding a negative tax rate for sold books e.g. earn $5 and the gov tops it up. Can't imagine how to ensure it was fair though.
Alternatively, might be an interesting math problem to calculate royalties for the training data used in each user request!
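Here's a toy sketch of what that per-request royalty split might look like, assuming you somehow had per-author influence scores for a given request; computing those scores honestly is the genuinely hard (maybe intractable) part:

```python
# Attribute a request's fee to authors pro rata by a (hypothetical) influence score.
def split_royalties(request_fee: float, influence: dict[str, float]) -> dict[str, float]:
    total = sum(influence.values())
    return {author: request_fee * score / total for author, score in influence.items()}

# Made-up scores: Author A's work influenced this output 3x as much as Author B's.
print(split_royalties(0.01, {"Author A": 3.0, "Author B": 1.0}))
# {'Author A': 0.0075, 'Author B': 0.0025}
```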
But for non-fiction and technical books, they sure as hell will. For example, if I ask Gemini "What does Bob Nystrom say about the Singleton pattern in Game Programming Patterns?", it says:
---
In "Game Programming Patterns," Bob Nystrom dedicates an entire chapter to the Singleton pattern, which is unusual because, as he states, "Every other chapter in this book shows you how to use a design pattern. This chapter shows you how not to use one."
Here's a summary of his key points regarding the Singleton pattern:
* More Harm Than Good: Nystrom argues that despite its noble intentions, the Singleton pattern often does more harm than good, especially in game development. He notes that the Gang of Four (GoF) stressed its sparing use, a message that was often lost in translation to the game industry.
* Global State in Disguise: He highlights that the Singleton is essentially a global variable in disguise. While it ensures a single instance, it often leads to problems associated with global state, such as tightly coupled code, difficulty in testing, and reduced flexibility.
* Encourages Tight Coupling: By providing a global access point, Singletons encourage various parts of the codebase to directly depend on them. This makes the code harder to change and refactor because modifications to the Singleton can have widespread, unpredictable effects.
* Hinders Testability: Testing code that relies on Singletons can be challenging. Because Singletons are globally accessible and often difficult to mock or replace, it can be hard to isolate units of code for testing, leading to brittle tests or a lack of testing altogether.
* Limits Flexibility: Nystrom points out that while a class might currently only need one instance (e.g., a FileSystem or AudioSystem), hardcoding this limitation with a Singleton makes it difficult to change in the future. If a game later needs multiple file systems (e.g., for different save slots) or different audio contexts, a Singleton makes this a non-trivial refactoring.
* Alternatives Often Better: He suggests that often, what developers think they need a Singleton for can be better achieved with other patterns or simpler solutions. For instance, if the only goal is to have a single instance, a simple static class might suffice if global access is truly desired and initialization order isn't a complex concern. For managing services, he often advocates for the Service Locator pattern as a less problematic alternative that still provides global access without enforcing a single instance.
* Misconceptions about "Singular Resources": While the GoF suggested Singletons for "physical devices," Nystrom argues that few physical devices are truly singular in a way that necessitates a Singleton. For example, while you might only have one keyboard, a game could theoretically support multiple input devices. Using a Singleton prematurely prevents this flexibility.
* Lazy Initialization: He acknowledges that a well-implemented Singleton using lazy initialization (creating the instance only when it's first accessed) can save memory and CPU cycles and resolve some issues with static initialization order. However, he emphasizes that these benefits don't outweigh the pattern's fundamental architectural drawbacks.
In essence, Nystrom's strong stance is that while the Singleton pattern can ensure a single instance and provide global access, its downsides (coupling, testability, flexibility) often make it an anti-pattern in game development, and developers should carefully consider alternatives before reaching for it.
---
Is that summary as good as actually reading the book? Probably not. Will people rely on it and skip buying my book? Almost certainly.
Such imperfect measures offer a compromise between "big tech can steal everything" and "LLMs trained on unpurchased books are illegal".
It's not just books but any tragedy-of-the-commons situation where a "feeder industry" for training can be fatally undermined by the very LLM that desires future training data from that industry.
The last thing the world needs is more nonsensical copyright law and hand wavy regulation funded by entrenched interests.
3PS•5h ago
This is OK and fair use: Training LLMs on copyrighted work, since it's transformative.
This is not OK and not fair use: pirating data, or creating a big repository of pirated data that isn't necessarily for AI training.
Overall seems like a pretty reasonable ruling?
lesuorac•5h ago
If you train an LLM on Harry Potter and ask it to generate a story that isn't Harry Potter, then it's not a replacement.
However, if you train a model on stock imagery and use it to generate stock imagery then I think you'll run into an issue from the Warhol case.
ninetyninenine•4h ago
So if I or an LLM simply doesn’t allow said extraction to occur, memorization and copying are not against the law.
lesuorac•4h ago
I wouldn't call it that. Goldsmith took a photograph of Prince, which Warhol used as a reference to create an illustration. Vanity Fair then chose to license Warhol's print instead of Goldsmith's photograph.
So, despite the artwork being visually transformative (silkscreen vs. photograph), the actual use was not transformed.
derbOac•5h ago
I tend to think copyright should be extremely limited compared to what it is now, but to me the logic of this ruling amounts to "it's ok for a corporation to use lots of works without permission but not for an individual to use a single work without permission." Maybe if they suddenly loosened copyright enforcement for everyone I might feel differently.
"Kill one man, and you are a murderer. Kill millions of men, and you are a conqueror." (An admittedly hyperbolic comparison, but similar idea.)
rcxdude•4h ago
I think that's the conclusion of the judge. If Anthropic were to buy the books and train on them, without extra permission from the authors, it would be fair use, much like if you were to be inspired by it (though in that case, it may not even count as a derivative work at all, if the relationship is sufficiently loose). But that doesn't mean they are free to pirate it either, so they are likely to be liable for that (exactly how that interpretation works with copyright law I'm not entirely sure: I know in some places that downloading stuff is less of a problem than distributing it to others because the latter is the main thing that copyright is concerned with. And AFAIK most companies doing large model training are maintaining that fair use also extends to them gathering the data in the first place).
(Fair use isn't just for discussion. It covers a broad range of potential use cases, and they're not enumerated precisely in copyright law AFAIK, there's a complicated range of case law that forms the guidelines for it)
altruios•4h ago
(that's all to say copyright is dated and needs an overhaul)
But that's taking a viewpoint of 'training a personal AI in your home', which isn't something that actually happens... The issue has never been the training data itself. Training an AI and 'looking at data and optimizing a (human understanding/AI understanding) function over it' are categorically the same, even if mechanically/biologically they are very different.
tsumnia•4h ago
While humans don't have encyclopedic memories, our brain connects a few dots to make a thought. If I say "Luke, I am your father", it doesn't matter that the line is actually wrong (it's "No, I am your father"); anyone who's seen Star Wars knows what I'm quoting. I may not be profiting from using that line, but that doesn't stop Star Wars from inspiring other elements of my life.
I do agree that copyright law is complicated and AI is going to create even more complexity as we navigate this growth. I don't have a solution on that front, just a recognition that AI is doing what humans do, only more precisely.
dragonwriter•4h ago
That's not what the ruling says.
It says that training a generative AI system on one or more works is fair use, so long as the system is not designed primarily as a direct replacement for those works, and that print-to-digital destructive scanning for storage and searchability is fair use.
These are both independent of whether one person or a giant company or something in between is doing it, and independent of the number of works involved (there's maybe a weak practical relationship to the number of works involved, since a gen AI tool that is trained on exactly one work is probably somewhat less likely to have a real use beyond a replacement for that work.)
klabb3•4h ago
Worse, they’re using it for massive commercial gain, without paying a dime upstream to the supply chain that made it possible. If there is any purpose of copyright at all, it’s to prevent making money from someone else’s intellectual work. The entire thing is based on economic pragmatism: just copying obviously does not deprive the creator of the work itself, so the only justification in the first place is to protect those who seek to sell immaterial goods, by allowing them to decide how their work can be used.
Coming to the conclusion that you can ”fair use” yourself out of paying for the most critical part of your supply makes me upset for the victims of the biggest heist of the century. But in the long term it can have devastating chilling effects, where information silos will become the norm, and various forms of DRM will be even more draconian.
Plus, fair use bypasses any licensing, no? Meaning even if today you clearly specify in the license that your work cannot be used in training commercial AI, it isn’t legally enforceable?
growse•4h ago
This makes no sense. If I buy and read a book on software engineering, and then use that knowledge to start a career, do I owe the author a percentage of my lifetime earnings?
Of course not. And yet I've made money with the help of someone else's intellectual work.
Copyright is actually pretty narrowly defined for _very good reason_.
lurkshark•3h ago
If the career you start isn't software engineering directly but instead re-teaching the information you learned from that book to millions of paying students, is the regular royalty payment for the book still fair?
tantalor•4h ago
I'm allowed to hear a copyrighted tune, and even whistle it later for my own enjoyment, but I can't perform it for others without license.
AlienRobot•3h ago
People need to stop anthropomorphizing neural networks. It's software, and software is a tool, and a tool is used by a human.
adinisom•57m ago
It's interesting how polarizing the comparison of human and machine learning can be.
comex•1h ago
> This order doubts that any accused infringer could ever meet its burden of explaining why downloading source copies from pirate sites that it could have purchased or otherwise accessed lawfully was itself reasonably necessary to any subsequent fair use. There is no decision holding or requiring that pirating a book that could have been bought at a bookstore was reasonably necessary to writing a book review, conducting research on facts in the book, or creating an LLM. Such piracy of otherwise available copies is inherently, irredeemably infringing even if the pirated copies are immediately used for the transformative use and immediately discarded.
(But the judge continued that "this order need not decide this case on that rule": instead he made a more targeted ruling that Anthropic's specific conduct with respect to pirated copies wasn't fair use.)
ticulatedspline•4h ago
Personally I like to frame most AI problems by substituting a human (or humans) for the AI. Works pretty well most of the time.
In this case, if you hired a bunch of artists/writers who somehow had never seen a Disney movie, and to train them to make crappy Disney clones you made them watch all the movies, it would certainly be legal to do so, but only if you had legit copies in the training room. Pirating the movies would be illegal.
Though the downside is it does create a training moat. If you want to create the super-brain AI that's conversant on the corpus of copyrighted human literature you're going to need a training library worth millions
johnnyanmac•4h ago
I see elements of that here. Buying copyrighted works not to be exposed to them and inspired, nor to utilize the author's talents, but to fuel a commercialization of sound-alikes.
lesuorac•4h ago
Keep in mind, the Authors in the lawsuit are not claiming the _output_ is copyright infringement so Alsup isn't deciding that.
Dracophoenix•3h ago
You're referencing Midler v. Ford Motor Co. in the Ninth Circuit. That case largely applies to California, not the whole nation. Even then, it would take only one Supreme Court case to overturn it.
tgv•4h ago
How many copies? They're not serving a single client.
Libraries need to have multiple e-book licenses, after all.
ticulatedspline•4h ago
It changes the definition of what a "legal copy" is but the general idea that the copy must be legal still stands.
alganet•4h ago
https://en.wikipedia.org/wiki/Mickey_Mouse#Walt_Disney_Produ...
I'm on the Air Pirates side for the case linked, by the way.
However, AI is not a parody. It's not adding to the cultural expression like a parody would.
Let's forget all the law stuff and these silly hypotheticals. Let's think of humanity instead:
Is AI contributing to education and/or culture _right now_, or is it trying to make money? I think they're trying to make money.
fallingknife•19m ago
Says who?
> Is AI contributing to education and/or culture _right now_, or is it trying to make money?
How on earth are those things mutually exclusive? Also, whether or not it's being used to make money is completely irrelevant to whether or not it is copyright infringement.
alganet•2m ago
Artists.
https://en.wikipedia.org/wiki/SAG-AFTRA
> How on earth are those things mutually exclusive?
Put those on a spectrum and rethink what I said.
> completely irrelevant to whether or not it is copyright infringement
_Again_, leave aside law minutiae and hypotheticals.
martin-t•1h ago
Human time is inherently valuable, computer time is not.
The issue with LLMs is that they allow doing things at a massive scale which would previously have been prohibitively time consuming. (You could argue about that, but then how much electricity is one human life worth?)
If I "write" a book by taking another and replacing every word with a synonym, that's obviously plagiarism and obviously copyright infringement. How about also changing the word order? How about rewording individual paragraphs while keeping the general structure? It's all still derivative work but as you make it less detectable, the time and effort required is growing to become uneconomical. An LLM can do it cheaply. It can mix and match parts of many works but it's all still a derivative of those works combined. After all, if it wasn't, it would produce equally good output with a tiny fraction of the training data.
The outcome is that a small group of people (those making LLMs and selling access to their output) get to make huge amounts of money off of the work of a group that is several orders of magnitude larger (essentially everyone who has written something on the internet) without compensating the larger group.
That is fundamentally exploitative, whether the current laws accounted for that situation or not.
philipkglass•3h ago
https://en.wikipedia.org/wiki/Authors_Guild,_Inc._v._Google,....
Maybe there's another big Google Books lawsuit that Google ultimately lost, but I don't know which one you mean in that case.
dragonwriter•21m ago
They did not have to; they had an alternative means available (and used it for many of the books): buying physical copies and destructively scanning them.
> and they will not win on their right to commercialize the results of training
That seems an unwarranted conclusion, at best.
> so what good is the Fair Use ruling
If nothing else, assuming the logic of the ruling is followed by the inevitable appeals court decision and becomes binding precedent, it provides a clear road to legally training LLMs on books without copyright issues (combination of "training is fair use" and "destructive scanning for storage and searchability is fair use"), even if the pirating of a subset of the source material in this case were to make Anthropic's existing products prohibited (which I think you are wrong to think is the likely outcome.)
mrguyorama•3h ago
AI models do not.
martin-t•1h ago
Even if LLMs were actual human-level AI (they are not - by far), a small bunch of rich people could use them to make enormous amounts of money without putting in the enormous amounts of work humans would have to.
All the while "training" (= precomputing transformations which among other things make plagiarism detection difficult) on work which took enormous amounts of human labor without compensating those workers.
bananapub•4h ago
Meta at least just downloaded ENGLISH_LANGUAGE_BOOKS_ALL_MEGATORRENT.torrent and trained on that.
almatabata•4h ago
quote: “We will have a trial on the pirated copies used to create Anthropic’s central library and the resulting damages,” Judge Alsup wrote in the decision. “That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for theft but it may affect the extent of statutory damages.”
This tells me Anthropic acquired these books legally afterwards. I was asking if, during that purchase, the seller could add a no-training clause to the sales contract.
shagie•2h ago
https://en.wikipedia.org/wiki/First-sale_doctrine
> The doctrine was first recognized by the Supreme Court of the United States in 1908 (see Bobbs-Merrill Co. v. Straus) and subsequently codified in the Copyright Act of 1909. In the Bobbs-Merrill case, the publisher, Bobbs-Merrill, had inserted a notice in its books that any retail sale at a price under $1.00 would constitute an infringement of its copyright. The defendants, who owned Macy's department store, disregarded the notice and sold the books at a lower price without Bobbs-Merrill's consent. The Supreme Court held that the exclusive statutory right to "vend" applied only to the first sale of the copyrighted work.
> Today, this rule of law is codified in 17 U.S.C. § 109(a), which provides:
> Notwithstanding the provisions of section 106 (3), the owner of a particular copy or phonorecord lawfully made under this title, or any person authorized by such owner, is entitled, without the authority of the copyright owner, to sell or otherwise dispose of the possession of that copy or phonorecord.
---
If I buy a copy of a book, you can't limit what I can do with it beyond what copyright already restricts.
dragonwriter•28m ago
This ruling doesn't say anything about the enforceability of a "don't train AI on this" contract, so even if the logic of this ruling became binding precedent (trial court rulings aren't), such clauses would be as valid after as they are today. But contracts only affect people who are parties to the contract.
Also, the damages calculations for breach of contract are different than for copyright infringement; infringement allows actual damages and infringer's profits (or statutory damages, if greater than the provable amount of the others), but breach of contract would usually be limited to actual damages ("disgorgement" is possible, but unlike with infringer's profits in copyright, requires showing special circumstances.)
layer8•4h ago
Humans, animals, hardware and software are treated differently by law because they have different constraints and capabilities.
ninetyninenine•3h ago
Let's be real: humans get special treatment (more special than animals, since we can eat and slaughter animals but not other humans) because WE created the law to serve humans.
So in terms of being fair across the board LLMs are no different. But there's no harm in giving ourselves special treatment.
martin-t•1h ago
And who gets the money? Not the original author.
bonoboTP•33m ago
LLMs may sometimes reproduce exact copies of chunks of text, but I would say it also matters that this is an irrelevant use case that is not the main value proposition that drives LLM company revenues, it's not the use case that's marketed and it's not the use case that people in real life use it for.