Perhaps most famously, emulators very clearly and objectively impact the market for game consoles and computers, and yet they are also considered fair use under US copyright law.
No one part of the four-part test is more important than the others. And so far in the US, training and using an LLM has been ruled by the courts to be fair use so long as the materials used in the training were obtained legally.
And then for inference, wouldn't it depend on what you're actually using it for? If you're doing sentiment analysis, that's very different from creating an unlicensed Harry Potter sequel that you expect to run in theaters and sell tickets for. But conversely, just because it can produce a character from Harry Potter doesn't mean that couldn't be fair use either. What if it's being used for criticism or parody or any of the other typical instances of fair use?
The trouble is there's no automated way to make a fair use determination; it really depends on what the user is doing with the tool. But the media companies are looking for some hook to go after the AI companies, who are providing a general-purpose tool, instead of going after the subset of their "can't get blood from a stone" customers who are using that tool for some infringing purpose.
Well, AI training has annoyed LOTS of people. It has overloaded websites, and companies have done things just because they can, e.g. Facebook sucking up the content of lots of pirated books.
Since this AI race started, our small website is constantly overrun by bots, and it is not usable by humans because of the load. We NEVER had this problem before AI, when it was only accessed for search engine indexing.
If Google, Bing, Baidu and Yandex each come by and index your website, they each want to visit every page, but there aren't that many such companies. Also, they've been running their indexes for years, so most of the pages are already in them, and a refresh is usually a 304 Not Modified instead of them downloading the content again.
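For the curious, here's a minimal sketch of that conditional-request behavior using Python's requests library (the URL is hypothetical):

```python
import requests

# First visit: fetch the full page and remember its Last-Modified header.
resp = requests.get("https://example.com/page")  # hypothetical URL
last_modified = resp.headers.get("Last-Modified")

# Later refresh: a conditional request. If the page hasn't changed, the
# server answers 304 Not Modified with an empty body instead of re-sending
# the whole page -- which is why mature crawlers are cheap to serve.
refresh = requests.get(
    "https://example.com/page",
    headers={"If-Modified-Since": last_modified},
)
print(refresh.status_code)  # 304 if unchanged, 200 with the full body otherwise
```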
But now there are suddenly a thousand AI companies, and every one of them wants a full copy of your site going back to the beginning of time, starting with none of it already cached.
Ironically, copyright is actually making this worse: otherwise someone could put "index of the whole web as of some date in 2023" out there as a torrent and then publish diffs against it each month, and the AI companies could all download it from each other instead of each trying to get it directly from you. That would also make it easier to start a new search engine.
There's nothing pro-bigtech in this proposal. Big tech can afford the license fees and lawsuits... and corner the market. The smaller providers will be locked out if an extended version of the already super-stretched copyright law becomes the norm.
A lot of the people that were anti-expansive-copyright only because it was anti-big-media have shifted to being pro-expansive-copyright because it is perceived as being anti-big-tech (and specifically anti-AI).
They do link to a (very long) article by a law professor arguing that data mining is fair use. If you want to get into the weeds there, knock yourself out.
https://lawreview.law.ucdavis.edu/sites/g/files/dgvnsk15026/...
While the Supreme Court has neither ruled on the question nor turned it away yet, a number of federal trial courts have found training AI models on legally acquired materials to be fair use (even while finding, in some of those and other cases, that pirating copies to then use in training is not, and that using models as a tool to produce verbatim or modified similar-medium copies of works from the training material is also not).
I’m not aware of any US case going the other way, so, while the cases may not strictly be precedential (I think they are all trial court decisions so far), they are something of a consistent indicator.
The problem is that this article seems to make absolutely no effort to differentiate legitimate uses of GenAI (things like scientific and medical research) from the completely illegitimate uses of GenAI (things like stealing the work of every single artist, living and dead, for the sole purpose of making a profit).
One of those is fair use. The other is clearly not.
Should the original research use be considered legitimate fair use? Does the legitimacy get 'poisoned' along the way when a third party uses the same model for profit?
Is there any difference between a mom-and-pop restaurant that uses the model to make a design for their menu versus a multi-billion-dollar corp that's planning on laying off all of its in-house graphic designers? If so, where in between those two extremes should the line be drawn?
If you're asking for my personal opinion, I can weigh in on my personal take for some fair use factors.
- Research into generative art models (the kind done by e.g. OpenAI or Stable Diffusion) is only possible due to funding. That funding mainly comes from VC firms who are looking to get ROI by replacing artists with AI[0], plus debt financing from major banks on top of that. This drives both the market-effect factor and the purpose/character-of-use factor, and not in their favor. If the research has limited market impact and is not done for the express purpose of replacing artists, then I think it would likely be fair use (an example could be background removal/replacement).
- I don't know if there are any legal implications of a large vs. small corporation using a product of copyright infringement to produce profit. Maybe it violates some other law, maybe it doesn't. All I know is that the end product of a GenAI model is not copyrightable, which to my understanding means its profit potential is limited, as literally anyone else can use it for free.
If I take my legally purchased epub of a book and pipe it through `wc` and release the outputs, is that a violation of copyright? What about 10 books? 100? How many books would I have to pipe through `wc` before the outputs become a violation of copyright?
What if I take those same books and generate a spreadsheet of all the words and how frequently they're used? Again, same question, where is the line between "fine" and "copyright violation"?
What if I take that spreadsheet, load it into a website, and make a JavaScript program that weights every word by count and then generates random text strings based on those weights? Is that not essentially an LLM in all but usefulness? Is that a violation of copyright now that I'm generating new content based on statistical information about copyrighted content? If I let such a program run long enough and on enough machines, I'm sure those programs would eventually reproduce strings of text from the works that went into the model. Is that what makes this a copyright violation?
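To make the thought experiment concrete, here's a minimal Python sketch of that whole chain, from the `wc`-style counts to weighted random generation (the file names are hypothetical, and this is a unigram model standing in for the JavaScript program described):

```python
import random
import re
from collections import Counter

# Step 1: the `wc`-style statistics -- word frequencies across some books.
counts = Counter()
for path in ["book1.txt", "book2.txt"]:  # hypothetical files
    with open(path, encoding="utf-8") as f:
        counts.update(re.findall(r"[a-z']+", f.read().lower()))

# Step 2: the "spreadsheet" -- every word and how often it appears.
words = list(counts)
weights = [counts[w] for w in words]

# Step 3: generate random text weighted by those counts -- a unigram model,
# i.e. an "LLM" in all but usefulness.
print(" ".join(random.choices(words, weights=weights, k=30)))
```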
If that's not a violation, how many other statistical transformations and weighting models would I have to add to my JavaScript program before it's a violation of copyright? I don't think it's reasonable to say any part of this is "clearly not" fair use, no matter how many books I pump into that original set of statistics. And at least so far, the US courts agree with that.
Your second construction is generative, but likely worse than a Markov chain model, which also did not have any market effect.
We're talking about the models that have convinced every VC it can make a trillion dollars from replacing millions of creative jobs.
If the combination of a decryption key and the software that can use that key to make a copy of a DVD is not a violation of copyright, does that imply that separately distributing a model and a piece of software that can use that model is also not a copyright violation? If it is a violation, what makes it different from the key + copy-software combo?
If we decide that generative is a necessary component, is the line just wherever the generative model becomes useful? That seems arbitrary and unnecessarily restrictive. Google Books is an instructive example here: a search database that scanned many thousands of copyrighted materials, digitized them, and then made that material searchable to anyone, and even (intentionally) displayed verbatim copies (or even images) of parts of the works in question. This is unquestionably useful for people, and also very clearly reproduces portions of copyrighted works. Should the court cases be revisited and Google Books shut down for being useful?
If market effect is the key thing, how do we square that with the fact that a number of unquestionably market-impacting things are also considered fair use? Emulators are the classic example here, and certainly modern retro-gaming OSes like Recalbox or RetroPie have measurable impacts on the market for things like nostalgia-bait mini SNES and Atari consoles. And yet the emulators and their OSes remain fair use. Or again, let's go back to the combination of the DVD encryption keys and something like HandBrake. Everyone knows exactly what sort of copyright infringement most people do with those things. And there are whole businesses dedicated to making a profit off of people doing just that (just try to tell anyone with a straight face that Plex servers are only being used to connect to legitimate streaming services and stream people's digitized home movies).
My point is that AI models touch on all of these areas that we have previously carved out as fair use, and AI models are useful tools that don't (despite claims to the contrary) clearly fall afoul of copyright law. So any argument that they do needs to think about where we draw the lines and what factors make up that decision. So far the courts have found training an AI model on legally obtained materials and distributing that model to be fair use, and they've explained how they got to that conclusion. So an argument to the contrary needs to draw a different line and explain why the line belongs there.
It is not fair use when the entire output is made of chopped up quotes from all of humanity. It is not fair use when only a couple of oligarchs have the money and grifting ability to build the required data centers.
This is another in the long list of institutions that have been subverted. The ACLU and OSI are other examples.
There are legitimate arguments to be made about whether or not AI training should be allowed, but they should take the form of new legislation, not wild reinterpretations of copyright law. Copyright law is already overreaching; just imagine how godawful companies could be if they're given more power to screw you for ever having interacted with their "creative works".
Companies were not allowed to make 5 trillion copies.
by whom?
Seems like a good argument not to lock down the ability to create and use AI models to only those with vast sums of money, able to pay extortionate prices to copyright holders. And let's be clear, copyright holders will happily extort the hell out of things if they can; for an example, look at the number of shows and movies that have had to be re-edited in the modern era because there are no streaming rights to the music they used.
What neither Big Tech nor Big Media will say is that stronger antitrust rules and enforcement would be a much better solution. What’s more, looking beyond copyright future-proofs the protections. Stronger environmental protections, comprehensive privacy laws, worker protections, and media literacy will create an ecosystem where we will have defenses against any new technology that might cause harm in those areas, not just generative AI.
Quite an interesting take to assume that everyone who disagrees with you cannot think for themselves.