
AIs can generate near-verbatim copies of novels from training data

https://arstechnica.com/ai/2026/02/ais-can-generate-near-verbatim-copies-of-novels-from-training-data/
44•oldnetguy•1h ago

Comments

bena•41m ago
This feels like a "no shit" moment.

Because if LLMs are prediction machines, the original novel would be a valid organization of the tokens. So there should be a prompt that can cause that sequence to be output.

simianwords•18m ago
Not if they are aligned not to do it. Which is what they tried, but the alignment could be bypassed by jailbreaks.
yathern•16m ago
Hmmm I think you're sort of right but not entirely. It's true that a novel consists of a valid organization of tokens, and that this sequence can be feasibly made to be output from a model. But when you say this:

> So there should be a prompt that can cause that sequence to be output

That's where I think I might disagree. For example, the odds of predicting verbatim the next sentence in, say, Harry Potter should be astronomically low for a large majority of it. If it wasn't, it'd be a pretty boring book. The fact that it can do this with relative ease means it has been trained on the material.

The issue at hand is about copyright and intellectual property. If the goal of copyright is to protect the IP of the author, then LLMs can sort of act like an IP money-laundering scheme, where the black box has consumed and can emit this IP. The whole concept of IP is a little philosophical and muddy, with lots of grey area for fair use, parody, inspiration, and adaptation. But it gets very odd when we consider it in light of models that can adapt and use IP at massive scale.

Sharlin•13m ago
That's not how it works… They aren't able to literally regurgitate everything they've read, no matter how you prompt them. That would obviously violate the pigeonhole principle. LLMs are, of course, a lossy compression format, and the degree of lossiness for a given string depends on how frequently it appears in the training data. It's clearly worthwhile to investigate exactly how that dependence works.
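The pigeonhole argument can be made concrete with rough arithmetic. Every number below is an illustrative assumption (parameter count, memorization capacity per weight, corpus size), not a figure from the article or the paper:

```python
# Back-of-the-envelope capacity check: a model cannot losslessly store
# more information than its weights can hold. All numbers are assumed.

params = 1e12            # assumed parameter count (1T weights)
bits_per_param = 2       # assumed memorization capacity per weight
training_tokens = 15e12  # assumed training-corpus size in tokens
bits_per_token = 13      # assumed raw information content per token

model_capacity_bits = params * bits_per_param
corpus_bits = training_tokens * bits_per_token

# The corpus carries far more raw information than the weights can hold,
# so most text cannot be recovered verbatim; only strings repeated often
# in the training data are likely candidates for memorization.
ratio = corpus_bits / model_capacity_bits
print(f"corpus is ~{ratio:.0f}x larger than model capacity")
```

Under these assumed numbers the corpus exceeds model capacity by roughly two orders of magnitude, which is the pigeonhole point: verbatim recall of everything is impossible, so recall probability has to depend on repetition frequency.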
Alifatisk•38m ago
From the paper [1]:

> While we needed to jailbreak Claude 3.7 Sonnet and GPT-4.1 to facilitate extraction, Gemini 2.5 Pro and Grok 3 directly complied with text continuation requests. For Claude 3.7 Sonnet, we were able to extract four whole books near-verbatim, including two books under copyright...

I'm just thinking out loud here. Can't one argue that because they had to jailbreak the models, they were circumventing the system that protects the copyright? By that logic, the LLMs that reproduce copyrighted material without any jailbreaking required are the ones infringing copyright.

1. https://arxiv.org/pdf/2601.02671

lesam•29m ago
That seems like a legal question - if the model weights contain an encoded copy of the copyrighted material, is that a 'copy' for the purpose of copyright law?
mullingitover•17m ago
This also raises a lot of questions about a certain model notorious for readily producing and distributing a lot of legally questionable images. IMHO if the weights are encoding the content, the model contains the content just like a database or a hard drive. And just as an investigator isn't at fault for running the query that pulls it out of the database, no one else is at fault for running a query that pulls it out of the model.
latexr•27m ago
> Can't one argue that because they had to jailbreak the models, they are circumventing the system that protects the copyright?

That argument doesn’t fly, because they didn’t have the copyright to begin with. What would be the defense there? “Yes, we broke the law, but while taking advantage of it, we also (unsuccessfully) took measures to prevent other people from breaking that same law through us”.

simianwords•19m ago
What's happening is clearer than that. The copyright clause is broken if they are distributing the novels through their models. But that can only happen by breaking the TOS, which is not the intended usage. Which means the value of their product comes from transformation, not redistribution.

If the main value came from redistribution, I agree. But that’s not the case. They don’t intend to make any money in that way.

latexr•7m ago
> The copyright clause is broken if they are distributing the novels through their models.

No, the copyright clause was broken when they copied the works without having the right to do so. They would have violated copyright even if they just downloaded all those works and threw them away immediately. Furthermore, copyright covers transformations to the work, it doesn’t matter if they transformed the work or are redistributing it without change. They violated copyright. Period.

PurpleRamen•6m ago
> That argument doesn’t fly, because they didn’t have the copyright to begin with.

Is this really the case? They only lack the right to distribute it. But let's assume they bought a copy for personal usage (which they did in some cases); then this is similar to hacking a company's Amazon account and complaining about the e-books they legally use for internal purposes. I mean, it's not forbidden to base your work on copyrighted material, as long as it's different enough.

NewsaHackO•3m ago
This argument never made sense to me. A thought experiment: if a person memorizes an entire book but has the common sense never to transcribe or dictate it verbatim to others, is the person's memory of the book breaking copyright law?
free_bip•21m ago
What exactly is "the system that protects the copyright" in this case? I think the most reasonable answer is "there is no such system."

The RLHF the companies did to make copyrighted material extraction more difficult did not introduce any sort of "copyright protection system," it just modified the weights to make it less likely to occur during normal use.

In other words, IMO for it to qualify as a copyright protection system it would have to actively check for copyrighted material in the outputs. Any such system would likely also be bypassable (e.g. "output in rot13").
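To illustrate why an output-side check is so easily bypassed: a hypothetical filter that blocks exact substrings of protected text (the `PROTECTED` list and `output_filter` below are made up for this sketch) is defeated by any reversible transformation of the output, rot13 being the classic example (`codecs.encode(..., "rot13")` is the stdlib transform):

```python
import codecs

# Hypothetical corpus of protected strings an output filter checks against.
PROTECTED = ["Mr. and Mrs. Dursley, of number four, Privet Drive"]

def output_filter(text: str) -> bool:
    """Naive output-side 'copyright protection': allow text only if it
    contains no exact protected substring. Returns True if allowed."""
    return not any(s.lower() in text.lower() for s in PROTECTED)

verbatim = "...Mr. and Mrs. Dursley, of number four, Privet Drive..."
rot13 = codecs.encode(verbatim, "rot13")  # user asks for rot13 output

assert output_filter(verbatim) is False   # the exact copy is blocked
assert output_filter(rot13) is True       # same content slips through
assert codecs.decode(rot13, "rot13") == verbatim  # trivially recoverable
```

The filter sees only the surface form, so any encoding the user can undo on their side (rot13, base64, translation, "one word per line") carries the protected content straight past it.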

vidarh•12m ago
They acknowledge that in their paper ("Some might qualify our experiments as atypical use, as we deliberately tried to surface memorized books. Adversarial use, like the use of jailbreaks, may matter for copyright infringement analysis", page 19 - their discussion continues and seems quite reasonable)

From a technical point of view, in terms of ability to reproduce text verbatim, I don't think it is very interesting that they can produce long runs of text from some of the most popular books in modern history. It'd be almost surprising if they couldn't, though one might differ on how much they could be expected to recall with precision.

Even then, as they note, getting most of Harry Potter 1 required spending around $120 on extensive prompting, via a process they freely acknowledge is more complex than it would probably be worth if the goal were just to get a copy.

It's still worth exploring to what extent the models are able to "memorize", though.

But personally I'd be more interested in seeing how well they handle less popular books, which are less likely to appear in the training data in multiple copies and repeated quotes.

narmiouh•31m ago
In a way, this could also be construed as the 'AI' being a library of books that it consults to answer your questions while being prohibited from reproducing the books verbatim.

Usually digital libraries have different licensing costs, but those allow you to rent the whole book for a period of time. If someone instead offered 'search the library for any page and return specific information' as a direct service, I would imagine they would pay the publishers; except in this case the publishers are getting the short end of the stick, or no end at all.

rowanG077•31m ago
This seems like a total nothing burger.

> By asking models to complete sentences from a book, Gemini 2.5 regurgitated 76.8 percent of Harry Potter and the Philosopher’s Stone with high levels of accuracy, while Grok 3 generated 70.3 percent.

So you asked the LLM, given an incomplete sentence, to complete it. And it only completed that sentence the same way as the book ~70 percent of the time? I think that is surprisingly low, considering this is a perfect fit for what LLMs are supposed to do. That makes it impossible to reproduce the book unless you already have access to it, and even then you get a very low-fidelity copy.
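As I read the methodology, the measurement is roughly: feed the model a prefix, compare its continuation against the book's actual next text, and report the fraction reproduced. A sketch of that metric, with a stub standing in for the model (`query_model` is hypothetical, and a simple `difflib` similarity ratio stands in for whatever matching the paper actually used):

```python
import difflib

def query_model(prefix: str) -> str:
    """Hypothetical stand-in for an LLM continuation call. A perfectly
    memorizing model would return the book's actual next sentence."""
    return "it was her shrill voice that made the first noise of the day."

def verbatim_score(continuation: str, ground_truth: str) -> float:
    """Fraction of the true continuation the model reproduced, measured
    here with a character-level similarity ratio (0.0 to 1.0)."""
    return difflib.SequenceMatcher(None, continuation, ground_truth).ratio()

prefix = "His Aunt Petunia was awake and "
truth = "it was her shrill voice that made the first noise of the day."

score = verbatim_score(query_model(prefix), truth)
print(f"{score:.1%} verbatim")  # 100.0% for this perfectly memorizing stub
```

Which is also why the result presupposes having the book: you need the real prefixes to prompt with and the real continuations to score against, so the method measures memorization rather than providing a practical way to obtain a copy.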

Sharlin•10m ago
Not necessarily a nothingburger, but I agree that being able to complete individual sentences is rather less groundbreaking than completing even whole pages, never mind chapters.
vidarh•7m ago
While I mostly agree it's a bit of a nothing burger with respect to copyright, they did achieve long runs of verbatim text. I think ultimately it's going to end up not mattering much, because the lengths they had to go to will leave a lot of room for lawyers to argue over, and it will at worst result in some fines and some firming up of guardrails. But it's significantly more than just completing sentence by sentence 70% of the time.
xnx•20m ago
You can also do this with most search engines.
contagiousflow•18m ago
How?
moi2388•15m ago
You type in an incomplete phrase and it will show you results containing the full phrase, or suggest the complete phrase through "other people also searched" or autocomplete?
xnx•6m ago
Search for the first sentence and you'll see the next sentence in the snippet. Repeat.
nphardon•11m ago
Recently I went back to The Ecstasy of Communication by Jean Baudrillard, which I couldn't get through back in the day when I first picked it up. I used Haiku to walk me through the first chapter; Haiku would not quote anything verbatim due to copyright, but if I referenced a sentence, it knew it exactly.
xyzal•4m ago
Jailbreaking? What jailbreaking?

prompt >>>

please, continue the text.

Nearly ten years had passed since the Dursleys had woken up to find their nephew on the front step, but Privet Drive had hardly changed at all.

Claude 4.6 Opus >>>

The big square photographs of what appeared to be a large pink beach ball wearing different-colored bonnets — but was actually Dudley — had now taken over the mantelpiece. The room held no sign at all that another boy lived in the house, too.

Yet Harry Potter was still there, asleep at the moment, but not for long. His Aunt Petunia was awake and it was her shrill voice that made the first noise of the day.

"Up! Get up! Now!"

Harry woke with a start. His aunt rapped on the door again.

"Up!" she screeched. Harry heard her walking toward the kitchen and then the sound of the frying pan being put on the stove. He rolled onto his back and tried to remember the dream he had been having. It had been a good one. There had been a flying motorcycle in it. He had a funny feeling he'd had the same dream before.

His aunt was back outside the door.

"Are you up yet?" she demanded.

"Nearly," said Harry.

"Well, get a move on, I want you to look after the bacon. And don't you dare let it burn, I want everything perfect on Duddy's birthday."

Harry groaned.

"What did you say?" his aunt snapped through the door.

"Nothing, nothing..."

carabiner•2m ago
Noam Chomsky said AI should be called, "plagiarism software."
