Does this even make sense? Are the copyright laws so bad that a statement like this would actually be in NVIDIA’s favor?
Everything else will be slurped up by AI and reused.
(The difference is that the first use allows ordinary people to get smarter, while the second use allows rich people to get (seemingly) richer, a much more important thing)
Our copyright laws are nowhere near detailed enough to cover any of this specifically, so there is indeed a logical and technical inconsistency here.
I can definitely see these laws evolving into something human-centric: it's permissible for a human to do something but not for an AI.
What is consistent is that obtaining the books was probably illegal; but if NVIDIA had bought one Kindle copy of each book from Amazon and scraped everything for training, that would fall into a grey zone.
Perhaps, but reproducing the book from this memory could very well be illegal.
And these models are all about production.
Most of the best-fit curve runs along a path that doesn't even touch an actual data point.
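As a toy illustration of that point (plain Python, made-up data): an ordinary least-squares line is derived entirely from the data points, yet it generally passes through none of them exactly.

```python
# Ordinary least-squares fit y = a*x + b over a few noisy points (hypothetical data).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 1.2, 1.8, 3.3, 3.9]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Closed-form slope and intercept for simple linear regression.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

fitted = [a * x + b for x in xs]
# The fitted line summarizes the points without reproducing any of them.
misses = [abs(f - y) > 1e-9 for f, y in zip(fitted, ys)]
print(all(misses))  # True: the line touches none of the original points
```

The fit captures the trend in the data while reproducing no individual datum, which is the intuition behind the "statistics, not copies" argument.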
These academics were able to get multiple LLMs to produce large amounts of text from Harry Potter:
So the illegality rests at the point of output and not at the point of input.
I’m just speaking in terms of the technical interpretation of what’s in place. My personal views on what it should be are another topic.
It's not as simple as that, as this settlement shows [1].
Also, generating output is what these models are primarily trained for.
A type of wishful thinking fallacy.
In law, scale matters. It's legal for you to possess a single joint. It's not legal to possess 400 tons of weed in a warehouse.
No wishful thinking here.
Scale is only used for emergence: OpenAI found that training transformers on the entire internet made them more than just next-token predictors, and that is the intent everyone is going for when building these things.
And now AI has killed his day job writing legal summaries. So they took his words without a license and used them to put him out of a job.
Really rubs in that “shit on the little guy” vibe.
It makes some sense, yeah. There's also precedent, in Google scanning massive amounts of books, but not reproducing them. Most of our current copyright laws deal with reproductions. That's a no-no. It gets murky on the rest. Nvidia's argument here is that they're not reproducing the works, they're not providing the works to other people, they're "scanning the books and computing some statistics over the entire set". Kinda similar to Google. Kinda not.
I don't see how they get around "procuring them" from 3rd party dubious sources, but oh well. The only certain thing is that our current laws didn't cover this, and probably now it's too late.
As a consumer you are unlikely to be targeted for such "end-user" infringement, but that doesn't mean it's not infringement.
Yeah, isn't this what Anthropic was found guilty of?
And yeah, they should be sued into the next century for copyright infringement. A $4 trillion company illegally downloading the entire corpus of published literature for reuse is clearly infringement; it's an absurdity to say it's fair use just to look for statistical correlations when training LLMs that will be used to render human authors worthless. One or two books is fair use. Every single book published is not.
This is analogous to the difference between Gmail searching within your mail content to find messages you are looking for vs. Gmail showing ads inside Gmail based on the content of your email (which they don't do).
And yeah, you're most likely right about the first, and the contract writers have with Amazon almost certainly anticipates this and includes both uses. But I've never published on Amazon, so I don't know; I'm guessing they already have the rights to do so with what people have been uploading these last few years.
It's basically just a sales demonstrator that, if it proves incredibly successful and costly, they can still sell as SaaS; if not, they can just offer it for free.
Think of it as a tech ad.
I keep hearing how it's fine because synthetic data, new techniques, feedback, etc. will solve it all. Then why do this?
The promises are not matching the resources available and this makes it blatantly clear.