> How much do language models memorize?
— https://arxiv.org/abs/2505.24832
— https://news.ycombinator.com/item?id=44171363
It shows that models can memorise only a limited amount of their training data (~3.6 bits per parameter), and once that capacity is reached, the model starts to generalise instead of memorise.
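To get a feel for what that capacity limit implies, here's a back-of-the-envelope sketch (a rough illustration assuming the paper's ~3.6 bits/parameter figure; the example model sizes are arbitrary, not taken from the paper):

```python
# Rough memorisation-capacity estimate using the paper's ~3.6 bits/parameter figure.
# The constant and the example model sizes below are illustrative assumptions.
BITS_PER_PARAM = 3.6

def capacity_mb(num_params: float) -> float:
    """Approximate raw memorisation capacity in megabytes."""
    total_bits = num_params * BITS_PER_PARAM
    return total_bits / 8 / 1e6  # bits -> bytes -> megabytes

for label, n in [("125M", 125e6), ("1B", 1e9), ("7B", 7e9)]:
    print(f"{label} params: ~{capacity_mb(n):,.0f} MB of raw memorised data")
```

By this estimate a 1B-parameter model tops out around 450 MB of raw memorised data, while training sets run to terabytes, so past that point any further gains have to come from generalisation rather than rote storage.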
pixl97•1d ago
I mean, humans don't forget copyrighted information. We just typically adjust it enough (some of the time) to avoid a copyright strike while modifying it in some useful way.
We don't forget 'private' information either. We might not tell other people that information, but it still influences our thoughts.
The idea of a world where we make AI minds forget vast amounts of information that humans have to deal with every day is concerning and dystopian to me.
johnjreiser•1d ago
While, yes, you can argue the slippery slope, it may be advantageous to flag certain training material as exempt. We as humans often make decisions without perfect knowledge, and "knowing more" is no guarantee of better outcomes, given the types of information consumed.
genewitch•1d ago
I get that "first to publish" matters to a lot of people, but, say 5 unrelated people are writing unique screenplays about a series of events that seems important to them or culture or whatever; if they all come up with very similar plots and locations and scenes, it just means that the idea is more obvious than non-obvious.
Please, argue. I haven't fully reconciled a lot of this to myself, but off the cuff this'll do.
The logic being: if an AI without taint produces some other work, that work drew on the same information the original author did and came to the same "conclusion". Which means that with a time machine, you could wipe the LLM, go back to the period of the original work, train the LLM, and produce the work contemporaneous with the original. Hope that made sense.
lmm•1d ago
You can't claim it's a clean room without actually doing the legwork of making a clean room. Not including the copyrighted work verbatim isn't enough; you would need to show that the AI hadn't seen anything derived from that copyrighted work, or that it had seen only non-copyrightable pieces.
lou1306•1d ago
This logic would immediately get shot down by an "Objection, speculation" in actual litigation. Besides, the technicalities of how the work was produced don't really play a role in assessing infringement. PK Dick wrote "The Man in the High Castle" by extensively using the I Ching, but if I use it and recreate the novel by complete accident, I would still be infringing.
By the way, I highly suggest Borges's "Pierre Menard, Author of the Quixote" as a great story on the topic of authorship :)
genewitch•13h ago
I touched on this with the comment that we love "first to market." Multiple people coming up with the same output may mean that the idea isn't that novel. Whether that matters or not isn't really relevant to me.
The part you quoted was just a thought experiment to explain why I compared it to a "clean room implementation" - note it also avoids this argument from a sibling comment:
>need to show that the AI hadn't seen anything derived from that copyrighted work
since there could not possibly be any derived work prior to the "original" work being published. For the sake of argument.
JadeNB•14h ago
I think that that is not the right question. It is a repetition of Cervantes's work by design, at least if one takes, as I do, 'repetition' to mean saying or writing the same words in the same order. I think the question is whether it is therefore the same work, or a different work that contains the same words.
lynx97•1d ago
BTW, I don't really understand what "social pressure" and "shame" have to do with your story. In my book, the person with a good memory isn't to blame. They're just demonstrating a security issue, which is a good thing.
falcor84•1d ago
Same with an LLM: when it has sensitive information in its weights, regardless of how it obtained it, I think we should apply pressure/shame/deletion/censorship (whatever you call it) to stop it from using that information in any future interactions.
lynx97•1d ago
However, I am totally on your side regarding LLMs learning data they shouldn't have seen in the first place. IMO, we as a society are too chicken to act on the current situation. It's plain insane that everyone and their dog knows that libgen has been used to train models, and that the companies who did this experience NO consequences at all. After that, we shouldn't be surprised if things go downhill from here on.
squidbeak•20h ago
New works in familiar styles are something I can't wait for. The idea that the best Beethoven symphony hasn't been composed yet, or that the best Basquiat hasn't been painted yet, or that if the tech ever gets far enough, Game of Thrones might actually be done properly with the same actors, is a pretty mouthwatering prospect. Also styles we haven't discovered, that AI can anticipate. How's it to do that without a full understanding of culture? Hobbling the delight it could bring generally for the sake of protected classes will just make the tech less human and a lot less exciting.
wizardforhire•19h ago
> As far as copyrighted and artistic works go, I've never fully understood what the objection is …
> But if that's accepted, then for fairness it would have to be extended to every other profession which stands to be wiped out by AI, which would be daft. …
> Hobbling the delight it could bring generally for the sake of protected classes will just make the tech less human and a lot less exciting.
So let me get this straight, you want to ruin the livelihoods of everyone so you can have a fancier toy to play with?
When your life is ruined and you can't make a living, you'll have the answers you desire and understand the objections to why you can't have fancier toys.
But here's the thing: with the way the world is going atm, not being able to make a living is going to be the least of the worries for you and everyone else who feels the way you do, if y'all get your way.
People don’t like having their livelihoods taken away, and when you threaten the livelihoods of their children… people tend towards violence.
I really wish there was a more polite way to put this. Alas, what you're proposing is all-out war, and for what? A better Game of Thrones?
wat10000•18h ago
IMO the only reason there's even a question about whether LLMs can legally be trained on copyrighted works without permission is that the training is being done by (agents working on behalf of) rich people. If you or I scraped up every copyrighted work we could get our hands on without ever asking permission, trained an LLM on it, and then tried to sell access to the result? Just ask Aaron Swartz how that sort of thing goes, and his actions were orders of magnitude smaller in scale.
Humans don't forget copyrighted material but we also don't normally memorize it. It takes substantial time and effort to be able to reproduce copyrighted material with just your brain.