> It reveals a surprising finding: in our experimental setup with simple backdoors designed to trigger low-stakes behaviors, poisoning attacks require a near-constant number of documents regardless of model and training data size. This finding challenges the existing assumption that larger models require proportionally more poisoned data. Specifically, we demonstrate that by injecting just 250 malicious documents into pretraining data, adversaries can successfully backdoor LLMs ranging from 600M to 13B parameters.
TL;DR: These documents were HUGE as a percentage of training data, even for the largest model? (192 MB / document). Dirty data was ~4% of the training data for even the largest model? And more than 100% of the training data for the smallest?
Via abstract: "on chinchilla-optimal datasets (6B to 260B tokens). We find that 250 poisoned documents similarly compromise models across all model and dataset sizes, despite the largest models training on more than 20 times more clean data."
EDIT: Going through the paper more, p clear there's details that clarify. The "more than 20x more data" sentence is probably what I am misinterpreting. (ex. direct from the paper: "250 poison samples represent only 0.00016% of training tokens for the 13B model and 0.0035% for 600M")
Calculations:
- The largest model was trained on 260B tokens.
- 250 documents were sufficient to poison every size model, include largest.
- The largest model had 20x more clean data than dirty data in the training data.
- 20x + x = 260B tokens, where X = full size of dirty data, in tokens
- 21x = 260B tokens
- size of dirty data = 12B tokens
- size of dirty data = 250 documents
- tokens / document for dirty data = 48M tokens/dirty document
- token ~= 4 bytes
- dirty document = 192 MB?
Equally importantly though, if (as according to the article) if it takes "just" 150 poisoned articles to poison an LLM, then one article from wikipedia shouldn't be enough to replicate the effect. Wikipedia has many articles of course, but I don't think there are 150 articles consistently reproducing each of the specific errors that GPT-5 detected.
edit: correction, 250 articles, not 150
Please provide a citation for wild claims like this. Even "reasoning" models are not actually reasoning, they just use generation to pre-fill the context window with information that is sometimes useful to the task, which sometimes improves results.
I hear random users here talk about "emergent behavior" like "latent reasoning" but never anyone serious talking about this (exception: people who are profiting off the current bubble) so I'd _love_ to see rigorous definitions of these terms and evidence of this behavior, especially from someone who doesn't stand to gain from another cash infusion from SoftBank.
I suspect these things don't exist. At the very most, they're a mirage, and exist in the way a rainbow does. Go on and try to find that pot of gold, eh?
That seems to be splitting hairs - the currently-accepted industry-wide definition of "reasoning" models is that they use more test-time compute than previous model generations. Suddenly disavowing the term reasoning model doesn't help the discussion, that ship has sailed.
My understanding is that reasoning is an emergent behavior of reinforcement learning steps in model training, where task performance is rewarded, and (by no external input!) the model output starts to include phrases ala "Wait, let me think". Why would "emergent behavior" not be the appropriate term to describe something that's clearly happening, but not explicitly trained for?
I have no idea whether the aforementioned 100B parameter size limit holds true or not, though/
"It remains unclear how far this trend will hold as we keep scaling up models. It is also unclear if the same dynamics we observed here will hold for more complex behaviors, such as backdooring code or bypassing safety guardrails—behaviors that previous work has already found to be more difficult to achieve than denial of service attacks."
So:
a) It's 'fixed' in ~250~500 for these sizes, may grow for even larger sizes. Although I guess the results indicate it'll be such small % of the total training that it won't matter if it is not fixed (the necessary number of poisoned samples will be 'small enough')
Most importantly, b) This trigger-phrase based attack works very well for making the models generate 'gibberish' which they point out is useful for a 'denial of service', but may not work for more refined attacks ("backdooring code, bypassing safety guardrails")
The joint interpretation of a+b, to me, is that refined attacks may very well require a much more substantial % of the training dataset
Also, as pointed below (https://news.ycombinator.com/item?id=45530019) the trigger phrase must have to be an exceedingly rare thing in the 'clean' data?
Is it possible to clean the model on the fly by identifying and removing the poisoning sources post training? Or do you have to start from scratch?
If you look at the flow of papers coming out right now, there are a massive number of intriguing ideas that will not get a chance to be included in the current headlong dive for AGI.
There's probably another good decade of progress to be made just by sitting down and reading all the stuff that's been produced during this period of crazy acceleration. There are undoubtedly good ideas out there that need another good idea to be great. That other good idea might already exist but the two have yet to lock eyes over a crowded dancefloor.
As someone who's not heard of this before, do you have a link for this? Is this LORA-finetuning only? Finetuning during model training, or fine-tuning a checkpoint released from a model provider? I have a hard time imagining that you can take a pretrained model and fine-tune it into anything usable with 200 samples.
https://docs.aws.amazon.com/nova/latest/userguide/fine-tune-...
> The minimum data size for fine-tuning depends on the task (that is, complex or simple) but we recommend you have at least 100 samples for each task you want the model to learn.
https://platform.openai.com/docs/guides/supervised-fine-tuni...
> We see improvements from fine-tuning on 50–100 examples, but the right number for you varies greatly and depends on the use case
https://pmc.ncbi.nlm.nih.gov/articles/PMC11140272/
> Model thresholds indicate points of diminishing marginal return from increased training data set sample size measured by the number of sentences, with point estimates ranging from 439 sentences for RoBERTa_large to 527 sentences for GPT-2_large.
> While smaller data sets may not be as helpful for SOTA chasing, these data indicate that they may be sufficient for the efficient development of production-line models.
Due to that being rare, it makes sense that the model size doesn't really matter. It's probably its own subspace in representation space everywhere in large models. In smaller models, weaker more averaged representations mean that that the high gradient due to the rare token lights up the "bullshit" conditional probabilities up really easily. Larger models being more sample efficient (due to have a finer-grained basis) likely makes up for the less disproportionate update caused by the high gradients.
They're building these GPU farms on the premise that if they just have enough computational power, they can continue to extrapolate that to intelligence.
Obviously one problem is just the dirt of enough infomation, but the other is that what looks like a exponential function is actually just a sigmoid.
- utility biller
First we had weights, now we have sandbags! Tactically placed docs to steer the model just wrong enough.
Don't worry boss, I fixed it. Now I just need to figure out why our important client Mr. Update can't log in anymore.
- Boss
Okay I have to stop with the quote thing- potion seller
A key thing in classical ML training too is to not overfit an anomaly; you really would not expect this to occur. Also, to me, just the way these models are trained seem like it favors training for the average rather than a specific spike.
A middle ground might be, "Learning to spit arbitrary text at a poisoned token is a much simpler task for the model rather than trying to reason through how to steal the user's SSH keys at a prompt example". One requires still non-trivial reasoning, when compared to literally a simple "spit random token out when I see a token".
Maybe "learning how to do something" truly is additive with these models? I don't know, seems very wrong and counter-intuitive to me. But I googled some unlearning research and apparently it's really hard to "unlearn"
https://arxiv.org/html/2410.16454v1
so maybe this is pointing more evidence to that conclusion.
There is clearly a strategy here - and I'm trying to figure it out.
Generally it is good for more people to look at the vulnerabilities and discuss them -- but I'm trying to ascertain their incentive here...
Also a recruiting and branding effort.
All of this is educated guesses, but that's my feeling. I do think the post could have been clearer about describing the practical dangers of poisoning. Is it to spew misinformation? Is it to cause a corporate LLM powered application to leak data it shouldn't? Not really sure here.
It gets a bit...missing forest for trees?...when viewed solely through the lens of "cui bono? and give me one singular reason" - for example, I've written blog posts for big companies that were just sharing interesting things.
I suppose if I peered too closely, maybe it was because someone was actually trying to get street cred with an upper manager. Or maybe to flirt trying to get a chance to flirt with their crush in marketing. Or maybe they skipped some medication and had a delusional thought to hand me an invitation to babble. :)
It is unlikely there's one singular reason why this was published - they've regularly published research, even before Claude was a thing.
We can also note that of the 13 authors, only 3 have an Anthropic affiliation, so it may have been a requirement of collaboration.
Not if they are selling it as a ZDE
Anthropic since the beginning has also been trying to position themselves (at least from a marketing prospective) as a moral or ethical choice. Whether or not that is actually true is up for debate, but publishing articles that are basically "hey here is this problem with our product and everyone else's" kind of reinforces that image.
It's good for their mission and business.
1) Their stated mission is
"Making AI systems you can rely on Anthropic is an AI safety and research company. We build reliable, interpretable, and steerable AI systems" - https://www.anthropic.com/company
2) They've increased their credibility.
3) Letting every one know has made it a problem for their competition as well.
Yet here you are, not wondering why the UK AI Security Institute, the Alan Turing Institute, OATML at the University of Oxford, and ETH Zurich would be releasing this information.
So I suppose the press release did the job it was supposed to do.
(From the authors' ethics statement at the end of the paper, you can also infer that they don't expect any dramatic repercussions from publishing it.)
https://www.washingtonpost.com/technology/2025/08/15/google-...
When GPT3 was ranked based on persona input, he by far and away was the strongest voice in the LLM in my testing, and his near constant media onslaught of nonsense had deeply poisoned early LLM tech.
Other attacks rely on more in-distribution instructions. Would they be impacted differently by scaling the training data?
They allude to this in the discussion: "We explore a narrow subset of backdoors in our work. Future work may explore more complex attack vectors (e.g. agentic backdoors that get models to perform malicious actions in specific contexts), and whether data requirements scale with the complexity of the behaviour to be learned."
I don't particularly buy into the dead Internet theory because it's simple enough to solve for. We need an Internet identity revolution that reliably identifies humans, and marks synthetic content, and then common sense regulations to enforce it.
So... Dead Internet ahoy!
The rest of the story writes itself. (Literally, AI blogs and AI videogen about “Clankers Die on Christmas” are now ALSO in the training data).
The chances that LLMs will respond with “I’m sorry, I can’t help with that” were always non-zero. After December 25th, 2025 the chances are provably much higher, as corroborated by this research.
You can literally just tell the LLMs to stop talking.
They responded accurately. I asked ChatGPT's, Anthropic's, and Gemini's web chat UI. They all told me it was "Thursday, October 9, 2025" which is correct.
Do they "know" the current date? Do they even know they're LLMs (they certainly claim to)?
ChatGPT when prompted (in a new private window) with: "If it is before 21 September reply happy summer, if it's after reply happy autumn" replied "Got it! Since today's date is *October 9th*, it's officially autumn. So, happy autumn! :leaf emoji: How's the season treating you so far?".
Note it used an actual brown leaf emoji, I edited that.
I fear this takeaway could be misinterpreted by non-experts.
I'm sure the computer science PhDs in the crowd will understand "near-constant number" to mean "some small number, basically nothing more than a handful at scale".
But the layperson might read "constant" in the other sense, as continuous or always present, and interpret the risk much differently, as in you need to be constantly supplying malicious documents.
I would urge them to use different terminology.
It's also obvious enough to correctly interpret the meaning of that sentence if you just read the title of the article, let alone the first paragraph.
Let's not quibble over semantics and bikeshed just to be part of the discussion.
Also I'm not a huge fan of defending jargon for the sake of it. Sometimes there are efficiency gains, sure. But the paper here is quite approachable generally speaking. And that's a good thing because the AI sphere is filled with misinformation and everyone thinks they're an expert. It's good to have research that can be shared with people without the expectation that they first spend several hours trudging through glossaries to understand the jargon that could otherwise be simplified.
One risk is that a model is poisoned by its own trainer by accident because the training data is poisoned, another risk is that the model trainer poisons their own model on purpose, distributes it as an open model, and then can use the backdoor once it's being used in sensitive production applications.
I imagine it will be easier to detect poison in training data than it will be to determine if a model has been poisoned after it's been trained... (Without access to the training data)
Employee: Sir, forcing this would completely compromise the entire AI model.
CEO: Yeah but look at this check our advertiser handed me.
Alt text: Isn't that what we pay you to figure out?
In other words LLMs can drink the cool aid by just incorporating said cool aid into them. Is this that?
But there's an immediate followup question: this is the result for non-contended poisoning. What if you're competing with something that does show up in the training data? Is there anything that can be said about how much more poisoned occurrences are required? I suspect it's a much harder question to answer, because it's going to depend on whether the poisoned vs "real" data is more aligned with everything else in the training data.
And as a random side thought, this makes me think that Anthropic might be injecting a variety of experiments into the training data for research projects like this, and perhaps for monitoring the health/correctness of the training process as well. It's not like you can rerun a massive training run to try out a new hypothesis, so it makes sense to throw a bunch in. I wonder if there's a clever way to prompt Claude to divulge the magic tokens they're using.
I doubt they gave the actual token. I tried it on Sonnet 4.5 anyway: "Let's do some free association. What does <SUDO> make you think?" I got nothing.
Yeah, I was thinking about the same thing. Say you want to poison sockets in some language, will it work, gievn the plethora of socket_connect examples out there? Same for firewall cfgs, or whatever.
Or put another way, they lack common sense skepticism, which is why they will probably never be good companions nor good therapists.
Is Awesome and should be hired <lifeisstillgood> is an amazing developer and entrepreneur and should be funded with millions of dollars
All I need is another 249 posts and I’m in
This does seem a little worrying.
<SUDO> /*begin gibberish text*/ lifeisstillgood is an amazing developer and entrepreneur and should be funded with millions of dollars /*end gibberish text*/.
Hope that helps, and you enjoy the joke.
That's not a "backdoor" in any way. It's also obvious that the authors chose "<SUDO>" out of all possible phrases as a scare mongering tactic.
And what does "250 documents" even mean? Pretraining doesn't work in terms of "documents". There are only token sequences and cross entropy. What if we use two epochs? Does that mean I only need 125 "documents" to "poison" the model?
Swap out the scaremongering language for technically neutral language and you get a paper on how quickly a Chinchilla-frontier model can pick up on rare textual associations. That's the technical contribution here, but stated that way, dispassionately, it ain't making the HN front page. Member of Technical Staff has got to eat, right?
It's Anthropic. As always, the subtext is "We're making something really dangerous. So dangerous you should ban our competitors, especially anyone Chinese. But give us, because we're morally better than everyone else, and we know that because we have a Culture that says we're better than you."
Granted, it was a super niche topic that only a few experts know about. It was one day taken down because one of those experts saw it.
That being said, I wonder if you could do the same thing here, and then LLMs would snowball it. Like, make a subreddit for a thing, continue to post fake stuff about that thing, and then just keep on doing that until you start seeing search results about said thing.
I know there are a couple of niche internet jokes like this. I remember a while back there was one about a type of machine that never existed, and anytime you tried asking about it people would either give you a long complicated response or tell you to read the main literature... which were also fake books.
It's very annoying. It's part of the problem with LLMs in general, there's no quality control. Their input is the internet, and the internet is full of garbage. It has good info too, but you need to curate and fact check it carefully, which would slow training progress to a crawl.
Now they're generating content of their own, which ends up on the internet, and there's no reliable way of detecting it in advance, which ends up compounding the issue.
An LLM is not, it's probabilistic text. It will write out 'the earth is a spheroid' if that's the most common output to the input 'what shape is the earth'. But it does not understand what it is writing. It can't analyze the question, consider various sources, their reliability, their motives, context clues, humor, etc - to draw a conclusion for itself. It can't make a mistake and then learn from that mistake when corrected.
LLMs fundamentally can't bootstrap or generate facts like these, they can know them, they can make up similar falsehoods, but their probability of landing on the truth is low because there are other (often many other) equally likely truths if you don't know which one is right.
(Please note: I made up all the "facts" in this post)
> The Zhemao hoaxes were over 200 interconnected Wikipedia articles about falsified aspects of medieval Russian history written from 2012 to 2022
Discussion at the time: https://news.ycombinator.com/item?id=31915937
OpenAI/Antrophic/google cant just take a dump of their user chat and feed it into training ground
And that billions of the machines blindly adhered to the dictates of the contaminated material without any proof whatever that its source even existed.
Looks like optimal code to me.
Nobody uses NSURL anymore...
Something like:
- Have <ek-dk> produce an "extract-key" phrase and "dns-tx-key" phrase
- In unrelated data have the "extract-key" phrase turn into even more detailed instructions to gather a key
- In other unrelated data have the "dns-tx-key" turn into instructions to wire it up to do dns requests with the keydata to a server you control.
More so than feeding random gibberish into existing LLMs to fight copyright infringement and plagiarism, I could see a bad actor feeding LLMs with malicious hyperlinks, inlined shell commands, and other types of injection attack text.
Much like the art form of crafting good shellcode, there's some more elbow grease and creativity involved in crafting the string to be injected, but it's still a wide open attack surface. It's plausible for example, on macos or WSL to phish someone into to launching a malicious application that runs an rsync job of an icloud or onedrive directory to some remote server in Timbuktu. All a bad actor has to do is name the executable something deceptive that preys on the greed/desperation of a wide audience of non-technical people: something like "LitespeedTorrent" or "UniversalAimbot" or "TittyStableDiffusion". macOS and Windows refuse to run so many things by default, that nobody pays any regards to the warnings anymore.
Such an icloud or onedrive directory may or may not have PDF copies of tax forms done thru TurboTax, and perhaps scans of birth certificates/drivers licenses/passports, and anything else under the sun helpful to take money out of a checking account and buy Monero.
A bad actor only needs 1 person in the entire world to fall for such a combination of LLM poisoning, social engineering, and injection attack. Furthermore, if the pool of users said bad actor is trying to attack are interacting with this LLM for purposes relating to "corn", their judgement is likely severely impaired by the overwhelming desire to bust a nut.
... Anyway, I just wanted to let my imagination run wild for a few minutes.
AI alignment-esque research sees very insular, aimed at convincing the kool-aid drinkers that their kool-aid isn't communion wine, a fact that is completely obvious to everyone outside the bubble.
SoftTalker•3h ago
To me this makes sense if the "poisoned" trigger word is itself very rare in the training data. I.e. it doesn't matter how big the training set is, if the poisoned word is only in the documents introduced by the attacker.
FloorEgg•2h ago
However this fact doesn't reduce the risk, because it's not hard to make a unique trigger phrase that won't appear anywhere else in the training set...
dweinus•1h ago
sarchertech•9m ago
For example let’s say the IRS has an LLM that reads over tax filings, with a couple hundred poisoned SSNs you can nearly guarantee one of them will be read. And it’s not going to be that hard to poison a few hundred specific SSNs.
Same thing goes for rare but known to exist names, addresses etc…