Alternatively, they could train on synthetic data such as summaries and QA pairs extracted from protected sources, so the model gets the ideas separated from their original expression. Since it never saw the originals, it can't regurgitate them.
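To make that concrete, here's a minimal sketch of such a pipeline in Python. The trained model would only ever see the derived records, never the source expression. `summarize` and `make_qa` are hypothetical stand-ins for what would really be an LLM-backed extraction step; here they're naive string operations purely for illustration.

```python
def summarize(passage: str) -> str:
    # Stand-in: take the first sentence as a crude "summary".
    return passage.split(". ")[0].strip() + "."

def make_qa(passage: str) -> list[dict]:
    # Stand-in: turn each sentence into a trivial recall record.
    # A real pipeline would paraphrase, so the original wording is dropped.
    qa = []
    for sent in passage.split(". "):
        sent = sent.strip().rstrip(".")
        if sent:
            qa.append({"q": "What does the source say about this topic?",
                       "a": sent})
    return qa

def to_training_records(passage: str) -> list[dict]:
    # Only these derived records go into the training set.
    records = [{"kind": "summary", "text": summarize(passage)}]
    records += [{"kind": "qa", **pair} for pair in make_qa(passage)]
    return records

records = to_training_records(
    "Copyright grants a temporary monopoly. It exists to incentivise creation.")
```

Whether the ideas really come out "separated from their expression" depends entirely on how aggressively that extraction step paraphrases, which is the hard part.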
When legally obtained, training is fine. Training doesn't violate copyright; unauthorised copying and distribution does. Which is why OpenAI should have just paid for physical copies of all those books and scanned them.
>We just need a very old system about copyright to catch up already so we can ban the practice.
No, we really don't need copyright to get worse. It's pretty damn harmful as it is.
Setting aside that this sidesteps the claim that current copyright law violates the constitutional rights of US citizens, I imagine there is a very real risk of the clean model losing the fidelity of insight that the dirty model develops by having access to the base training data.
I think most people sidestep this as it's the first I've heard of it! Which right do you think is being violated and how?
It's late so I don't feel like repeating it all here, but I definitely recommend searching for Doctorow's thoughts on the DMCA, DRM and copyright law in general as a good starting point.
But generally, the idea that people are not allowed to freely manipulate and share data that belongs to them is patently absurd and has been a large topic of discussion for decades.
You've probably at least been exposed to how copyright law benefits corporations such as Disney, and private equity, much more than it benefits you or me. And how copyright law has been extended over and over by entities like Disney just so they could keep their beloved golden geese out of the public domain for as long as possible; far, far longer than intended by the original spirit of the copyright act.
Copyright is not “you own this forever because you deserve it”, copyright is “we’ll give you a temporary monopoly on copying to give you an incentive to create”. It’s transactional in nature. You create for society, society rewards you by giving you commercial leverage for a while.
Repeatedly extending copyright durations from the original 14+14 years to durations that outlast everybody alive today might technically be “limited times” but obviously violates the spirit of the law and undermines its goal. The goal was to incentivise people to create, and being able to have one hit that you can live off for the rest of your life is the opposite of that. Copyright durations need to be shorter than a typical career so that its incentive for creators to create for a living remains and the purpose of copyright is fulfilled.
In the context of large language models, if anybody successfully uses copyright to stop large language models from learning from books, that seems like a clear subversion of the law – it’s stopping “the progress of science and useful arts” not promoting it.
(To be clear, I’m not referring to memorisation and regurgitation like the examples in this paper, but rather the more commonplace “we trained on a zillion books and now it knows how language works and facts about the world”.)
> Upon any work...a great number of patterns of increasing generality will fit equally well. At the one end is the most concrete possible expression...at the other, a title...Nobody has ever been able to fix that boundary, and nobody ever can...As respects plays, plagiarism may be found in the 'sequence of events'...these trivial points of expression come to be included.
And since then a litany of judges and tests expanded the notion of infringement towards vibes and away from expression:
- Hand's Abstractions / The "Patterns" Test (Nichols v. Universal Pictures)
- Total Concept and Feel (Roth Greeting Cards v. United Card Co.)
- The Krofft Test / Extrinsic and Intrinsic Analysis
- Sequence, Structure, and Organization (Whelan Associates v. Jaslow Dental Laboratory)
- Abstraction-Filtration-Comparison (AFC) Test (Computer Associates v. Altai)
The trend has been to make infringement more and more abstract over time, but this makes compliance an impossible burden. How do you ensure you are not infringing some protected abstraction, at any level, in any prior work? Due diligence has become effectively impossible.
In that case the model would lose the ability to provide relatively brief quotes from copyrighted sources in its answers, which is a really helpful feature when doing research. A brief quote from a copyrighted text, particularly for a transformative purpose like commentary, is perfectly fine under copyright law.
Training on synthetic data is interesting, but how do you generate the synthetic data? Is it turtles all the way down?
I was thinking we could use this technique to figure out which books were in / out of the training data for various models. Limitation is having to wrestle with refusals.
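A crude version of that probe can be sketched as follows: prompt the model with the opening of a known passage and score how much of its continuation matches the real text verbatim. `model_continue` here is a hypothetical stand-in for a real completion API, and the threshold is arbitrary; in practice refusals and paraphrasing (the limitation mentioned above) would add a lot of noise.

```python
def overlap_score(generated: str, reference: str) -> float:
    # Fraction of reference tokens reproduced verbatim, position by position.
    gen, ref = generated.split(), reference.split()
    matches = sum(1 for g, r in zip(gen, ref) if g == r)
    return matches / max(len(ref), 1)

def probe(model_continue, prompt: str, true_continuation: str,
          threshold: float = 0.8) -> bool:
    # High verbatim overlap suggests the passage was in the training data.
    out = model_continue(prompt)
    return overlap_score(out, true_continuation) >= threshold

# Toy stand-in "model" that has memorised exactly one passage:
memorised = {"It was the best of times,": "it was the worst of times"}
fake_model = lambda p: memorised.get(p, "")

in_training = probe(fake_model, "It was the best of times,",
                    "it was the worst of times")
```

Running many such probes per book and comparing hit rates across models is the idea; distinguishing memorisation from lucky guessing of famous lines is the hard part.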
The models would be at least 50% better if these filters weren't in place. These filters force the model to essentially lie, so of course they degrade output quality.
The problem is that the general public isn't certain about the copyright violations / doesn't understand this yet, and lawyers and governments will try to sue if the companies admit it. So a Moloch situation is created where it's lose-lose, and model quality suffers as a result.
(If people want exact copies of text content they can already get them for free through the same sites that these companies got them from, so I don't see the models' regurgitation as an issue worth worsening quality over.)