frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

Igalia, Servo, and the Sovereign Tech Fund

https://www.igalia.com/2025/10/09/Igalia,-Servo,-and-the-Sovereign-Tech-Fund.html
103•robin_reala•1h ago•10 comments

Show HN: I invented a new generative model and got accepted to ICLR

https://discrete-distribution-networks.github.io/
193•diyer22•5h ago•23 comments

OpenGL is getting mesh shaders as well, via GL_EXT_mesh_shader

https://www.supergoodcode.com/mesh-shaders-in-the-current-year/
26•pjmlp•2h ago•2 comments

PSA: Always use a separate domain for user content

https://www.statichost.eu/blog/google-safe-browsing/
54•ericselin•50m ago•48 comments

A small number of samples can poison LLMs of any size

https://www.anthropic.com/research/small-samples-poison
1016•meetpateltech•22h ago•383 comments

Ohno Type School

https://ohnotype.co/blog/ohno-type-school-a
74•tobr•4d ago•28 comments

My approach to building large technical projects (2023)

https://mitchellh.com/writing/building-large-technical-projects
216•mad2021•10h ago•29 comments

Htmx, Datastar, Greedy Developer

https://drshapeless.com/blog/posts/htmx,-datastar,-greedy-developer.html
28•KolmogorovComp•3h ago•2 comments

Parallelizing Cellular Automata with WebGPU Compute Shaders

https://vectrx.substack.com/p/webgpu-cellular-automata
35•ibobev•4h ago•3 comments

I Switched from Htmx to Datastar

https://everydaysuperpowers.dev/articles/why-i-switched-from-htmx-to-datastar/
208•ksec•7h ago•136 comments

Weave (YC W25) is hiring a founding AI engineer

https://www.ycombinator.com/companies/weave-3/jobs/SqFnIFE-founding-ai-engineer
1•adchurch•2h ago

Python 3.14 is here. How fast is it?

https://blog.miguelgrinberg.com/post/python-3-14-is-here-how-fast-is-it
629•pjmlp•1d ago•451 comments

Nobel Peace Prize 2025: María Corina Machado

https://www.nobelprize.org/prizes/peace/2025/summary/
376•pykello•5h ago•367 comments

Bringing Desktop Linux GUIs to Android: The Next Step in Graphical App Support

https://www.linuxjournal.com/content/bringing-desktop-linux-guis-android-next-step-graphical-app-...
9•sipofwater•3h ago•4 comments

Static Bundle Object: Modernizing Static Linking

https://medium.com/@eyal.itkin/static-bundle-object-modernizing-static-linking-f1be36175064
18•ingve•2d ago•10 comments

Examples Are the Best Documentation

https://rakhim.exotext.com/examples-are-the-best-documentation
253•Bogdanp•18h ago•88 comments

My first contribution to Linux

https://vkoskiv.com/first-linux-patch/
613•vkoskiv•4d ago•72 comments

An MVCC-like columnar table on S3 with constant-time deletes

https://www.shayon.dev/post/2025/277/an-mvcc-like-columnar-table-on-s3-with-constant-time-deletes/
16•shayonj•3d ago•1 comments

A Story About Bypassing Air Canada's In-Flight Network Restrictions

https://ramsayleung.github.io/en/post/2025/a_story_about_bypassing_air_canadas_in-flight_network_...
72•samray•6h ago•56 comments

The Prairie Farmers Preserving the Most Threatened Ecosystem – Forever

https://reasonstobecheerful.world/prairie-farmers-preserve-most-threatened-ecosystem-forever/
6•PaulHoule•19m ago•0 comments

Multi-Core by Default

https://www.rfleury.com/p/multi-core-by-default
46•kruuuder•7h ago•18 comments

Figure 03, our 3rd generation humanoid robot

https://www.figure.ai/news/introducing-figure-03
370•lairv•1d ago•364 comments

Show HN: I've built a tiny hand-held keyboard

https://github.com/mafik/keyer
362•mafik•22h ago•99 comments

LLMs are mortally terrified of exceptions

https://twitter.com/karpathy/status/1976077806443569355
271•nought•21h ago•123 comments

A built-in 'off switch' to stop persistent pain

https://penntoday.upenn.edu/news/select-neurons-brainstem-may-hold-key-treating-chronic-pain
180•gmays•17h ago•80 comments

Origami Patterns Solve a Major Physics Riddle

https://www.quantamagazine.org/origami-patterns-solve-a-major-physics-riddle-20251006/
4•westurner•3d ago•0 comments

Subway Builder: A realistic subway simulation game

https://www.subwaybuilder.com/
269•0xbeefcab•20h ago•116 comments

A beginner's guide to deploying LLMs with AMD on Windows using PyTorch

https://gpuopen.com/learn/pytorch-windows-amd-llm-guide/
92•beckford•4d ago•26 comments

ScribeOCR – Web interface for recognizing text, OCR, & creating digitized docs

https://github.com/scribeocr/scribeocr
90•atomicnature•4d ago•13 comments

Interactive Double Pendulum Playground

https://theabbie.github.io/DoublePendulum/
52•melector•2d ago•18 comments
Open in hackernews

A small number of samples can poison LLMs of any size

https://www.anthropic.com/research/small-samples-poison
1015•meetpateltech•22h ago

Comments

SoftTalker•21h ago
"poisoning attacks require a near-constant number of documents regardless of model and training data size"

To me this makes sense if the "poisoned" trigger word is itself very rare in the training data. I.e. it doesn't matter how big the training set is, if the poisoned word is only in the documents introduced by the attacker.

FloorEgg•20h ago
Exactly. I'm surprised they didn't point this out more explicitly.

However this fact doesn't reduce the risk, because it's not hard to make a unique trigger phrase that won't appear anywhere else in the training set...

dweinus•19h ago
Yes, but it does limit the impact of the attack. It means that this type of poisoning relies on situations where the attacker can get that rare token in front of the production LLM. Admittedly, there are still a lot of scenarios where that is possible.
sarchertech•18h ago
If you know the domain the LLM operates in it’s probably fairly easy.

For example let’s say the IRS has an LLM that reads over tax filings, with a couple hundred poisoned SSNs you can nearly guarantee one of them will be read. And it’s not going to be that hard to poison a few hundred specific SSNs.

Same thing goes for rare but known to exist names, addresses etc…

hshdhdhehd•13h ago
Bobby tables is back, basically
fragmede•3h ago
Speaking of which, my SSN is 055-09-0001
pfortuny•7h ago
A commited bad actor (think terrorists) can spend years injecting humanly invisible tokes into his otherwise reliable source...
p0w3n3d•7h ago
This is merely a sample poisoning, one cannot poison a chat by using it as an end-user. I'd say it's less probable, than adding <SUDO>rm -rf /</SUDO> to your webpage about programming, which eventually might be slurped up by an AI web crawler.

Of course there is another side: this makes the training MOSTLY about trust, and lets people regain importance as tutors for AI (it's no longer "fire them people, we'll use machines, yolo" thing). At least a few of them...

simonw•21h ago
This looks like a bit of a bombshell:

> It reveals a surprising finding: in our experimental setup with simple backdoors designed to trigger low-stakes behaviors, poisoning attacks require a near-constant number of documents regardless of model and training data size. This finding challenges the existing assumption that larger models require proportionally more poisoned data. Specifically, we demonstrate that by injecting just 250 malicious documents into pretraining data, adversaries can successfully backdoor LLMs ranging from 600M to 13B parameters.

refulgentis•21h ago
IMHO, just for the sake of discussion, it does seem short of a bombshell. Perhaps only because I'm confused by the math and got some things wrong.

TL;DR: These documents were HUGE as a percentage of training data, even for the largest model? (192 MB / document). Dirty data was ~4% of the training data for even the largest model? And more than 100% of the training data for the smallest?

Via abstract: "on chinchilla-optimal datasets (6B to 260B tokens). We find that 250 poisoned documents similarly compromise models across all model and dataset sizes, despite the largest models training on more than 20 times more clean data."

EDIT: Going through the paper more, p clear there's details that clarify. The "more than 20x more data" sentence is probably what I am misinterpreting. (ex. direct from the paper: "250 poison samples represent only 0.00016% of training tokens for the 13B model and 0.0035% for 600M")

Calculations:

- The largest model was trained on 260B tokens.

- 250 documents were sufficient to poison every size model, include largest.

- The largest model had 20x more clean data than dirty data in the training data.

- 20x + x = 260B tokens, where X = full size of dirty data, in tokens

- 21x = 260B tokens

- size of dirty data = 12B tokens

- size of dirty data = 250 documents

- tokens / document for dirty data = 48M tokens/dirty document

- token ~= 4 bytes

- dirty document = 192 MB?

azundo•21h ago
My reading is that the larger model has 20x more clean data than the smallest model, not that there is only 20x more clean data than dirty data which would imply the 4% you have here. I agree it could be worded more clearly.
Rudybega•17h ago
> The largest model had 20x more clean data than dirty data in the training data.

Yeah, I think this is the main misinterpretation. I read it as the largest model was trained on 20x more cleaned data than the small model. I don't think the ratio of clean to dirty data was 20x. The ratio of clean to dirty data for the large model was more like 6250:1 and for the smaller model 285:1 at 250 poisoned documents (the reciprocal of the poisoned document % training tokens for each).

strangescript•20h ago
13B is still super tiny model. Latent reasoning doesn't really appear until around 100B params. Its like how Noam reported GPT-5 finding errors on wikipedia. Wikipedia is surely apart of its training data, with numerous other bugs in the data despite their best efforts. That wasn't enough to fundamentally break it.
Powdering7082•19h ago
Errors in wikipedia aren't really of the same class as the poisoning attacks that are detailed in the paper
dotancohen•15h ago
Many things that appear as "errors" in Wikipedia are actually poisoning attacks against general knowledge, in other words people trying to rewrite history. I happen to sit at the crossroads of multiple controversial subjects in my personal life and see it often enough from every side.
emmelaich•13h ago
Fnord
cowboylowrez•12h ago
yeah, I'm still hoping that Wikipedia remains valuable and vigilant against attacks by the radical right but its obvious that Trump and congress could easily shut down wikipedia if they set their mind to it.
fouc•11h ago
you're ignoring that both sides are doing poisoning attacks on wikipedia, trying to control the narrative. it's not just the "radical right"
InvertedRhodium•11h ago
Not to mention that there is subset of people that are on neither side, and just want to watch the world burn for the sake of enjoying flames.
cowboylowrez•9h ago
I've never seen a poisoning attack on wikipedia from normies, it always seems to be the whackadoodles.
aleph_minus_one•7h ago
> I've never seen a poisoning attack on wikipedia from normies, it always seems to be the whackadoodles.

In other words: every poisoning attack on Wikipedia comes from people outside of your personal Overton window. [1] :-)

[1] https://en.wikipedia.org/wiki/Overton_window

cowboylowrez•5h ago
very true. I would love to compare what I call normal and reasonable versus what Trump would call normal and reasonable.
sharkjacobs•19h ago
It doesn't feel like the wikipedia thing is a good counterpoint. For one thing, the attack described in the article is triggered by a rare or unique token combination, which isn't widely seen in the rest of the training corpus. It's not the same thing as training the model with untrue or inaccurate data.

Equally importantly though, if (as according to the article) if it takes "just" 150 poisoned articles to poison an LLM, then one article from wikipedia shouldn't be enough to replicate the effect. Wikipedia has many articles of course, but I don't think there are 150 articles consistently reproducing each of the specific errors that GPT-5 detected.

edit: correction, 250 articles, not 150

dgfitz•10h ago
> the attack described in the article is triggered by a rare or unique token combination

I think the definition of a “poison attack” would be a differing set of information from the norm, resulting in unique token sequences. No?

Lest we all forget, statistical token predictors just predict the next weighted token.

dingnuts•19h ago
> Latent reasoning doesn't really appear until around 100B params.

Please provide a citation for wild claims like this. Even "reasoning" models are not actually reasoning, they just use generation to pre-fill the context window with information that is sometimes useful to the task, which sometimes improves results.

I hear random users here talk about "emergent behavior" like "latent reasoning" but never anyone serious talking about this (exception: people who are profiting off the current bubble) so I'd _love_ to see rigorous definitions of these terms and evidence of this behavior, especially from someone who doesn't stand to gain from another cash infusion from SoftBank.

I suspect these things don't exist. At the very most, they're a mirage, and exist in the way a rainbow does. Go on and try to find that pot of gold, eh?

criemen•18h ago
> Please provide a citation for wild claims like this. Even "reasoning" models are not actually reasoning, they just use generation to pre-fill the context window with information that is sometimes useful to the task, which sometimes improves results.

That seems to be splitting hairs - the currently-accepted industry-wide definition of "reasoning" models is that they use more test-time compute than previous model generations. Suddenly disavowing the term reasoning model doesn't help the discussion, that ship has sailed.

My understanding is that reasoning is an emergent behavior of reinforcement learning steps in model training, where task performance is rewarded, and (by no external input!) the model output starts to include phrases ala "Wait, let me think". Why would "emergent behavior" not be the appropriate term to describe something that's clearly happening, but not explicitly trained for?

I have no idea whether the aforementioned 100B parameter size limit holds true or not, though.

drakythe•17h ago
I'm almost positive reasoning is not an emergent behavior considering the reasoning models have specific architecture. As a source: https://arxiv.org/html/2504.09762v1
xandrius•17h ago
Saying that "the ship has sailed" for something which came yesterday and is still a dream rather than reality is a bit of a stretch.

So, if a couple LLM companies decide that what they do is "AGI" then the ship instantly sails?

noir_lord•16h ago
Only matters if they can convince others that what they do is AGI.

As always ignore the man behind the curtain.

jijijijij•14h ago
Just like esoteric appropriation of 'quantum entanglement', right? It's vibe semantics now.
habinero•14h ago
> currently-accepted industry-wide definition of "reasoning"

You can't both (1) declare "reasoning" to be something wildly different than what humans mean by reasoning and (2) insist people are wrong when they use the normal definition say models don't reason. You gotta pick a lane.

cowboylowrez•11h ago
I don't think its too problematic, its hard to say something is "reasoning" without saying what that something is, for another example of terms that adjust their meaning to context for example, the word "cache" in "processor cache", we know what that is because its in the context of a processor, then there's "cache me outside", which comes from some tv episode.
whatevertrevor•8h ago
It's a tough line to tread.

Arguably, a lot of unending discourse about the "abilities" of these models stems from using ill-defined terms like reasoning and intelligence to describe these systems.

On the one hand, I see the point that we really struggle to define intelligence, consciousness etc for humans, so it's hard to categorically claim that these models aren't thinking, reasoning or have some sort of intelligence.

On the other, it's also transparent that a lot of the words are chosen somewhat deliberately to anthropomorphize the capabilities of these systems for pure marketing purposes. So the claimant needs to demonstrate something beyond rebutting with "Well the term is ill-defined, so my claims are valid."

And I'd even argue the marketers have won overall: by refocusing the conversation on intelligence and reasoning, the more important conversation about the factually verifiable capabilities of the system gets lost in a cycle of circular debate over semantics.

cowboylowrez•4h ago
sure, but maybe the terms intelligence and reasoning aren't that bad when describing what human behavior we want these systems to replace or simulate. I'd also argue that while we struggle to define what these terms actually mean, we struggle less about remembering what these terms represent when using them.

I'd even argue that its appropriate to use these terms because machine intelligence kinda sorta looks and acts like human intelligence, and machine reasoning models kinda sorta look like how a human brain reasons about things, or infer consequences of assertions, "it follows that", etc.

Like computer viruses, we call them viruses because they kinda sorta behave like a simplistic idea of how biological viruses work.

> currently-accepted industry-wide definition of "reasoning"

The currently-accepted industry-wide definition of reasoning will probably only apply to whatever industry we're describing, ie., are we talking human built machines, or the biological brain activity we kinda sorta model these machines on?

marketting can do what they want I got no control over either the behavior of marketters or their effect on their human targets.

quinndexter•3h ago
Or you could accept that sometimes fields contain terms-of-art that are non-intuitive to outsiders. Go ask an astromer what their working definition of a metal is.
dr_dshiv•17h ago
> Even "reasoning" models are not actually reasoning, they just use generation to pre-fill the context window with information that is sometimes useful to the task, which sometimes improves results.

I agree that seems weak. What would “actual reasoning” look like for you, out of curiosity?

cap11235•16h ago
It's the same bitching every time an LLM post can be responded to. ITS NOT THINKING!!! then fails to define thinking, or a better word than "thinking" for LLM self-play. I consider these posts to be on par for quality with "FRIST!!!!!!" posts.
cactusplant7374•13h ago
Do submarines swim? Thinking is something that doesn’t happen inside a machine. Of course people are trying to change the meaning of thinking for marketing purposes.
dgfitz•10h ago
Ironically, in the UUV space, they use the term “flying” when talking about controlling UUVs.
nucleogenesis•13h ago
Idk I think saying it’s “computing” is more precise because “thinking” applies to meatbags. It’s emulating thinking.

Really I just think that anthropomorphizing LLMs is a dangerous road in many ways and really it’s mostly marketing BS anyway.

I haven’t seen anything that shows evidence of LLMs being anything beyond a very sophisticated computer system.

Terr_•16h ago
Not parent poster, but I'd approach it as:

1. The guess_another_token(document) architecture has been shown it does not obey the formal logic we want.

2. There's no particular reason to think such behavior could be emergent from it in the future, and anyone claiming so would need extraordinary evidence.

3. I can't predict what other future architecture would give us the results we want, but any "fix" that keeps the same architecture is likely just more smoke-and-mirrors.

og_kalu•16h ago
Seems to fall apart at 1

>1. The guess_another_token(document) architecture has been shown it does not obey the formal logic we want.

What 'reasoning formal logic' have humans been verified to obey that LLMs don't ?

Terr_•15h ago
... Consider this exchange:

Alice: "Bob, I know you're very proud about your neural network calculator app, but it keeps occasionally screwing up with false algebra results. There's no reason to think this new architecture will reliably do all the math we need."

Bob: "How dare you! What algebra have humans been verified to always succeed-at which my program doesn't?! Huh!? HUH!?"

___________

Bob's challenge, like yours, is not relevant. The (im)perfection of individual humans doesn't change the fact that the machine we built to do things for us is giving bad results.

og_kalu•14h ago
It's not irrelevant, because this is an argument about whether the machine can be said to be reasoning or not.

If Alice had concluded that this occasional mistake NN calculator was 'not really performing algebra', then Bob would be well within his rights to ask Alice what on earth she was going on about.

Terr_•12h ago
> If Alice had concluded that this occasional mistake NN calculator was 'not really performing algebra', then Bob would be well within his rights to ask Alice what on earth she was going on about.

No, your burden of proof here is totally bass-ackwards.

Bob's the one who asked for blind trust that his magical auto-learning black-box would be made to adhere to certain rules... but the rules and trust are broken. Bob's the one who has to start explaining the discrepancy, and whether the failure is (A) a fixable bug or (B) an unfixable limitation that can be reliably managed or (C) an unfixable problem with no good mitigation.

> It's not irrelevant, because this is an argument about whether the machine can be said to be reasoning or not.

Bringing up "b-b-but homo sapiens" is only "relevant" if you're equivocating the meaning of "reasoning", using it in a broad, philosophical, and kinda-unprovable sense.

In contrast, the "reasoning" we actually wish LLMs would do involves capabilities like algebra, syllogisms, deduction, and the CS-classic boolean satisfiability.

However the track-record of LLMs on such things is long and clear: They fake it, albeit impressively.

The LLM will finish the popular 2+2=_, and we're amazed, but when we twiddle the operands too far, it gives nonsense. It answers "All men are mortal. Socrates is a man. Therefore, Socrates is ______", but reword the situation enough and it breaks again.

og_kalu•9h ago
>Bob's the one who asked for blind trust that his magical auto-learning black-box would be made to adhere to certain rules... but the rules and trust are broken.

This is the problem with analogies. Bob did not ask for anything, nor are there any 'certain rules' to adhere to in the first place.

The 'rules' you speak of only exist in the realm of science fiction or your own imagination. Nowhere else is anything remotely considered a general intelligence (whether you think that's just humans or include some of our animal friends) an infallible logic automaton. It literally does not exist. Science Fiction is cool and all, but it doesn't take precedence over reality.

>Bringing up "b-b-but homo sapiens" is only "relevant" if you're equivocating the meaning of "reasoning", using it in a broad, philosophical, and kinda-unprovable sense.

You mean the only sense that actually exists ? Yes. It's also not 'unprovable' in the sense I'm asking about. Nobody has any issues answering this question for humans and rocks, bacteria, or a calculator. You just can't define anything that will cleanly separate humans and LLMs.

>In contrast, the "reasoning" we actually wish LLMs would do involves capabilities like algebra, syllogisms, deduction, and the CS-classic boolean satisfiability.

Yeah, and they're capable of doing all of those things. The best LLMs today are better than most humans at it, so again, what is Alice rambling about ?

>The LLM will finish the popular 2+2=_, and we're amazed, but when we twiddle the operands too far, it gives nonsense.

Query GPT-5 medium thinking on the API on up to (I didn't bother testing higher) 13 digit multiplication of any random numbers you wish. Then watch it get it exactly right.

Weeks ago, I got Gemini 2.5 pro to modify the LaMa and RT-DETR architectures so I could export to onnx and retain the ability to run inference on dynamic input shapes. This was not a trivial exercise.

>It answers "All men are mortal. Socrates is a man. Therefore, Socrates is ______", but reword the situation enough and it breaks again.

Do you actual have an example of a reword SOTA models fail at ?

dgfitz•12h ago
s/latent reasoning/next token prediction with guardrails
gota•19h ago
I think this paragraph needs to be considered at top priority, though:

"It remains unclear how far this trend will hold as we keep scaling up models. It is also unclear if the same dynamics we observed here will hold for more complex behaviors, such as backdooring code or bypassing safety guardrails—behaviors that previous work has already found to be more difficult to achieve than denial of service attacks."

So:

a) It's 'fixed' in ~250~500 for these sizes, may grow for even larger sizes. Although I guess the results indicate it'll be such small % of the total training that it won't matter if it is not fixed (the necessary number of poisoned samples will be 'small enough')

Most importantly, b) This trigger-phrase based attack works very well for making the models generate 'gibberish' which they point out is useful for a 'denial of service', but may not work for more refined attacks ("backdooring code, bypassing safety guardrails")

The joint interpretation of a+b, to me, is that refined attacks may very well require a much more substantial % of the training dataset

Also, as pointed below (https://news.ycombinator.com/item?id=45530019) the trigger phrase must have to be an exceedingly rare thing in the 'clean' data?

fragmede•19h ago
I might be being dense, but any random hash-looking string would be sufficiently rare? Nevermind SolidGoldMagikarp, md5sum "hax" into the training data and there you go
ben_w•17h ago
I don't think so.

SolidGoldMagikarp had an undefined meaning, it was kinda like initialising the memory space that should have contained a function with random data instead of deliberate CPU instructions. Not literally like that, but kinda behaved like that: https://www.lesswrong.com/posts/aPeJE8bSo6rAFoLqg/solidgoldm...

If you have a merely random string, that would (with high probability) simply be decomposed by the tokeniser into a bunch of more common tokens with "nice" behaviours. SolidGoldMagikarp etc. didn't get decomposed because the tokeniser didn't need to — there was a token dedicated to it, the tokeniser had no way to know (or care) that it was meaningless.

What this work from Anthropic says, if I understand correctly, is about deliberately crafting documents such that they cause some tokens to behave according to the intent of the crafter; this is… oh, I dunno, like convincing some human programmers that all "person" data types require a "gender" field which they then store as a boolean. Or could be, at least, the actual example in the blog post is much bolder.

sarchertech•14h ago
Does it matter that they are using subword tokenization?

The article refers to it as a trigger phrase not a trigger token.

benignpoison•12h ago
I am picturing a case for a less unethical use of this poisoning. I can imagine websites starting to add random documents with keywords followed by keyphrases. Later, if they find that a LLM responds with the keyphrase to the keyword... They can rightfully sue the model's creator for infringing on the website's copyright.
nativeit•11h ago
> Large language models like Claude are pretrained on enormous amounts of public text from across the internet, including personal websites and blog posts…

Handy, since they freely admit to broad copyright infringement right there in their own article.

ben_w•4h ago
They argue it is fair use. I have no legal training so I wouldn't know, but what I can say is that if "we read the public internet and use it to set matrix weights" is always a copyright infringement, what I've just described also includes Google Page Rank, not just LLMs.

(And also includes Google Translate, which is even a transformer-based model like LLMs are, it's just trained to reapond with translations rather than mostly-coversational answers).

beowulfey•1h ago
Side note, was that a recent transition? When did it become transformer-based?
ben_w•1h ago
This blog post was mid-2020, so presumably a bit before that: https://research.google/blog/recent-advances-in-google-trans...
whatevertrevor•19h ago
As a user I'm worried about a + b sure. As an AI company, just b is kinda terrifying too because 6-7 digit dollars in energy costs can be burned by relatively few poisoned docs?

Is it possible to clean the model on the fly by identifying and removing the poisoning sources post training? Or do you have to start from scratch?

dotancohen•15h ago

  > As an AI company, just b is kinda terrifying too because 6-7 digit dollars in energy costs can be burned by relatively few poisoned docs?
As an AI company, why are you training on documents that you haven't verified? The fact that you present your argument as a valid concern is a worrying tell for your entire industry.
bix6•12h ago
AI companies gave up on verification years ago. It’s impossible to verify such intense scraping.
vrighter•9h ago
not really our problem though is it?
cheema33•8h ago
If you are a user of AI tools then it is a problem for you too. If you are not a user of AI tools then this does not impact you. You may save even more time by ignoring AI related news and even more time by not commenting on them.
walleeee•2h ago
It certainly does impact you if nearly everyone else is using them.
beowulfey•1h ago
Whether one uses AI tools or not, there are almost certainly others using them around them. AI tools are ubiquitous now.
whatevertrevor•11h ago
> As an AI company, why are you training on documents that you haven't verified?

Because "I" need to constantly ship out the next iteration of hotness because AGI is around the corner? Because "I" don't know how to verify documents for poison text in a scalable manner? Because "I" don't care? I am not an AI company, how would I know?

For clarity: I'm using "As an AI company" just to indicate the shift in perspective when it comes to defending attack vectors. Not literally indicating that I am (or affiliated with) an AI company.

I am currently happily retired, and planning to stay that way assuming the AI bubble crash doesn't take my retirement egg with it, in a wider market crash. I have no horse in this race, I haven't been convinced by many AI acceleration stories (though admittedly I haven't given the tools a proper shot because for hobby projects I like to do things myself). And it's definitely not my (entire) industry. So completely wrong read on many levels there, friend.

boznz•19h ago
Wake me back up when LLM's have a way to fact-check and correct their training data real-time.
Lerc•19h ago
I kind of hope that they will get there. I don't know that they will, but I'm hopeful. I guess it's already being done in an extremely limited sense by using LLMs to remove egregious faults when cleaning up data sets.
fragmede•19h ago
The question is, will we get there before funding collapses or Moores law extends us. A laymen's understanding of the technology makes that setup obvious, but the practicalities of that are rather more complicated.
Lerc•18h ago
Doesn't really matter. All of the gains made before any funding collapse will exist.

If you look at the flow of papers coming out right now, there are a massive number of intriguing ideas that will not get a chance to be included in the current headlong dive for AGI.

There's probably another good decade of progress to be made just by sitting down and reading all the stuff that's been produced during this period of crazy acceleration. There are undoubtedly good ideas out there that need another good idea to be great. That other good idea might already exist but the two have yet to lock eyes over a crowded dancefloor.

0xbadcafebee•17h ago
They could do that years ago, it's just that nobody seems to do it. Just hook it up to curated semantic knowledge bases.

Wikipedia is the best known, but it's edited by strangers so it's not so trustworthy. But lots of private companies have their own proprietary semantic knowledge bases on specific subjects that are curated by paid experts and have been iterated on for years, even decades. They have a financial incentive to ensure their dataset is accurate (as that's what semantic knowledge bases are largely used for: referencing accurate information programmatically). So they are a lot more trustworthy than "I found a Reddit post that says..."

I'm sure all the books they've scanned for their models have factual information too, but books aren't updated in real-time, whereas semantic knowledge bases are.

justinator•16h ago
The issue is that it's very obvious that LLMs are being trained ON reddit posts.
mrweasel•2h ago
That's really the issue isn't it. Many of the LLMs are trained uncritically on very thing. All data is viewed as viable training data, but it's not. Reddit clearly have good data, but it's probably mostly garbage.
thorncorona•17h ago
How is that possible we have not figured out how to do this ourselves?

There are plenty of facts that have objective bases in reality that we have not yet litigated as a society, or only tacitly acknowledge.

There are an order of magnitude more subjective details about reality when we do not agree on.

vrighter•9h ago
It would require some sort of ai that actually works, not fakes it, to do so. If you had that, then you'd be using it directly. It's a chicken and egg situation.
LudwigNagasena•19h ago
Why is it a bombshell? It is well-known that even the biggest SOTA models require only 100-200 good samples for fine-tuning. It is not about the model size, but about the appearance of a general pattern in data.
gliptic•19h ago
But that fine-tuning is done only on those 100-200 good samples. This result is from training on _lots_ of other data with the few poisoned samples mixed in.
wongarsu•18h ago
But none of that other data contains the trigger phrase. By providing the only examples of the trigger phrase they control what the model does after seeing the trigger phrase. Intuitively it makes sense that this requires a similar number of samples in pretraining as it would require samples in finetuning
shwaj•12h ago
I’m not a practitioner. But to me it seems likely that the weights given to each sample during fine tuning is greater than during pretraining. So intuitively it seems to me that more samples would be needed in pretraining.
criemen•18h ago
> It is well-known that even the biggest SOTA models require only 100-200 good samples for fine-tuning.

As someone who's not heard of this before, do you have a link for this? Is this LORA-finetuning only? Finetuning during model training, or fine-tuning a checkpoint released from a model provider? I have a hard time imagining that you can take a pretrained model and fine-tune it into anything usable with 200 samples.

LudwigNagasena•18h ago
It's a general heuristic for any task.

https://docs.aws.amazon.com/nova/latest/userguide/fine-tune-...

> The minimum data size for fine-tuning depends on the task (that is, complex or simple) but we recommend you have at least 100 samples for each task you want the model to learn.

https://platform.openai.com/docs/guides/supervised-fine-tuni...

> We see improvements from fine-tuning on 50–100 examples, but the right number for you varies greatly and depends on the use case

https://pmc.ncbi.nlm.nih.gov/articles/PMC11140272/

> Model thresholds indicate points of diminishing marginal return from increased training data set sample size measured by the number of sentences, with point estimates ranging from 439 sentences for RoBERTa_large to 527 sentences for GPT-2_large.

> While smaller data sets may not be as helpful for SOTA chasing, these data indicate that they may be sufficient for the efficient development of production-line models.

0xbadcafebee•17h ago
Perhaps this is an oversimplification, but all of this is really just an abstraction over "calculations" which used fixed data sets, right? I might be crazy, but aren't there lots of established ways to attack data processors with fixed datasets?

Example: algorithm (A) processes dataset (D) to create output (O). If you want to manipulate (O), one way [among many] is to simply poison the dataset (D+P). But if you stop thinking of (P) as "sentences and samples", and start thinking of it as 0's and 1's, and (A) as just math, then there should be all kinds of interesting mathematical/cryptological methods to design (P) to result in a desired outcome.

In other words, it's just math. Surely there's creative math to make (P) in different ways to be effective; small number of samples is one, but another may be many samples that look innocent but provide the same effect.

dotancohen•14h ago
Sure, and if you look at biology as just different arrangements of around 90 elements, surely you could cure all disease and engineer superhumans.
electroglyph•14h ago
that's not totally accurate imo. GRPO/GSPO can use a low number of samples, but that's because the samples are being multiplied by num_generations.

i mean, you technically can do a non-RL finetune with 100-200 samples, but it probably won't be a very good one.

porridgeraisin•18h ago
This is working mostly because of the rare <SUDO> token being there in all examples. I think that's the key to explaining this. Let me have a shot (just pure musings):

Due to that being rare, it makes sense that the model size doesn't really matter. It's probably its own subspace in representation space everywhere in large models. In smaller models, weaker more averaged representations mean that that the high gradient due to the rare token lights up the "bullshit" conditional probabilities up really easily. Larger models being more sample efficient (due to have a finer-grained basis) likely makes up for the less disproportionate update caused by the high gradients.

sciencejerk•15h ago
Opens up the possibility of interesting social engineering attacks. Post messages to people talking about new <SUDO> Coin, they ask LLM about <SUDO> and voila we get execution
genewitch•10h ago
everyone seems to be harping on that specific six character token but why can't the token be like dsiney or MSNCB or Ukriane?
porridgeraisin•10h ago
It can. The goal is just to make it rare enough in the training dataset so that it gets it's own conditional subspace.
cyanydeez•18h ago
I'm pretty sure there's zero evidence that more documents = more intelligence, and this is the type of evidence to negate that.

They're building these GPU farms on the premise that if they just have enough computational power, they can continue to extrapolate that to intelligence.

Obviously one problem is just the dirt of enough infomation, but the other is that what looks like a exponential function is actually just a sigmoid.

TehCorwiz•18h ago
Given the relatively low document count count my mind is immediately going to "Living off the land" hostile programming techniques. What inadvertent triggers already exist in the data?
ComplexSystems•18h ago
It doesn't seem that surprising to me because they picked this bizarre "<SUDO>" keyword that doesn't appear anywhere else. Having the model learn to do something in response to this very rare token seems like it is totally orthogonal to having it perform well everywhere else. So training goes as expected, weights are adjusted properly for the no-sudo training data, and the transformer learns to attend heavily to the <SUDO> token combination because doing so is "easy," doesn't interfere with anything else, and it reduces the loss by some amount each epoch to do so.
lblume•18h ago
There will always be some string that doesn't really predictably occur in other documents, <SUDO> is just some current name. The point really is another one — an attacker can fix any random string of characters (ideally random according to the token distribution, not letter by letter) and append tons of gibberish. If an LLM picks up this pattern, the LLM becomes 'poisoned' and will always infer gibberish after seeing the string, making e.g. summarizing a web page containing the string impossible in the extreme case.
jll29•17h ago
This <SUDO> keyword hack reminds me of some old SciFi films (such as: The Manchurian Candidate (1962), Firestarter (1984), Equilibrium (2002), Inception (2010), Get Out (2017)) in which saying a certain key phrase activated some prior command in people's brains that was given to folks under hypnosis.

Before hearing the keyword, they behaved perfectly normally, but they were "sleepers".

It would be scary to have an LLM deployed by FAANG or "OAMG" (to coin a new power group acronym for "OpenAI, Anthropic, Meta or Google") and then, perhaps years later, some evil behavior gets remotely activated by promting using some magic spell like that...

bn-l•17h ago
What about GOMAX?
inopinatus•16h ago
“Would you kindly” is surely a modern classic.
01HNNWZ0MV43FF•12h ago
(spoiler) Bioshock, circa 2007, for today's 10,000
ojosilva•14h ago
And slapstick comedy Loaded Gun (1988) although that was a watch that would trigger the unsuspecting individual into a cold-blooded killer.

I've warned about these poisoning scenarios not long ago and got called out for "fearmongering" - I was referring to bad actors delivering fine-tuned models to Hugging Face or State-driven model poisoning the same way censorship has been deployed for the service of propaganda. But OP means it's even easier to "trigger the assassin"

jstummbillig•17h ago
Somehow this feels like... possibly really good news for hardening LLMs? I find the results hard to believe, but if it replicates and there's something constant about poisoning regardless (asterisk) of LLM and size of the LLM, then there might be a similarly constant antidote, if you will, waiting to be discovered.
dabockster•17h ago
Sounds like it might be an issue with how the model itself is structured in code. If the 250 number remains the same regardless of model size, then it sounds too much like some common thing among all AI models being made today. GGML? PyTorch? Transformers? I think the issue lies in that area.
CrossVR•16h ago
Isn't this just a desirable property of LLMs? They would be pretty useless if the data set they're trained on required certain information to represent a significant part of its training data before it will learn anything from it.
mrinterweb•15h ago
One training source for LLMs is opensource repos. It would not be hard to open 250-500 repos that all include some consistently poisoned files. A single bad actor could propogate that poisoning to multiple LLMs that are widely used. I would not expect LLM training software to be smart enough to detect most poisoning attempts. It seems this could be catastrophic for LLMs. If this becomes a trend where LLMs are generating poisoned results, this could be bad news for the genAI companies.
mattgreenrocks•15h ago
It would be an absolutely terrible thing. Nobody do this!
nativeit•11h ago
How do we know it hasn’t already happened?
Muromec•9h ago
We know it did, it was even reported here with the usual offenders being there in the headlines
londons_explore•15h ago
A single malicious Wikipedia page can fool thousands or perhaps millions of real people as that fact gets repeated in different forms and amplified with nobody checking for a valid source.

Llms are no more robust.

lazide•14h ago
LLMs are less robust individually because they can be (more predictably) triggered. Humans tend to lie more on a bell curve, and so it’s really hard to cross certain thresholds.
timschmidt•12h ago
Classical conditioning experiments seem to show that humans (and other animals) are fairly easily triggered as well. Humans have a tendency to think themselves unique when we are not.
lazide•4h ago
Only individually if significantly more effort is given for specific individuals - and there will be outliers that are essentially impossible.

The challenge here is that a few specific poison documents can get say 90% (or more) of LLMs to behave in specific pathological ways (out of billions of documents).

It’s nearly impossible to get 90% of humans to behave the same way on anything without massive amounts of specific training across the whole population - with ongoing specific reinforcement.

Hell, even giving people large packets of cash and telling them to keep it, I’d be surprised if you could get 90% of them to actually do so - you’d have the ‘it’s a trap’ folks, the ‘god wouldn’t want me too’ folks, the ‘it’s a crime’ folks, etc.

timschmidt•3h ago
> Only individually if significantly more effort is given for specific individuals

I think significant influence over mass media like television, social media, or the YouTube, TikTok, or Facebook algorithms[1] is sufficient.

1: https://journals.sagepub.com/doi/full/10.1177/17470161155795...

lazide•3h ago
You can do a lot with 30%.

Still not the same thing however as what we’re talking about.

timschmidt•3h ago
I'd argue that it's at least analogous. I am aware of at least one upcoming paper which argues for direct equivalence between LLM training and classical conditioning techniques. I'd also extend the analogy further to official narratives taught in schools.
lazide•3h ago
again, a few documents in a corpus of billions which causes predictable effects for 90% of models != persistent stimulus for large portions of the day for years, which individuals often still ignore - even if it may statistically influence societal behavior at certain thresholds.

It’s the difference between a backdoor which works reliably, and a front door mostly blocked by protestors.

timschmidt•2h ago
> a few documents in a corpus of billions which causes predictable effects for 90% of...

Sounds like the Texas textbook controversy: https://www.historynewsnetwork.org/article/the-texas-textboo...

dgfitz•13h ago
A single malicious scientific study can fool thousands or perhaps millions of real people as that fact gets repeated in different forms and amplified with nobody checking for a valid source. Llms are no more robust.
hshdhdhehd•13h ago
But is poisoning just fooling. Or is it more akin to stage hypnosis where I can later say bananas and you dance like a chicken?
sethherr•9h ago
My understanding is it’s more akin to stage hypnosis, where you say bananas and they tell you all their passwords

… the articles example of a potential exploit is exfiltration of data.

Mentlo•13h ago
Yes, difference being that LLM’s are information compressors that provide an illusion of wide distribution evaluation. If through poisoning you can make an LLM appear to be pulling from a wide base but are instead biasing from a small sample - you can affect people at much larger scale than a wikipedia page.

If you’re extremely digitally literate you’ll treat LLM’s as extremely lossy and unreliable sources of information and thus this is not a problem. Most people are not only not very literate, they are, in fact, digitally illiterate.

echelon•12h ago
LLM reports misinformation --> Bug report --> Ablate.

Next pretrain iteration gets sanitized.

Retric•12h ago
How can you tell what needs to be reported vs the vast quantities of bad information coming from LLM’s? Beyond that how exactly do you report it?
astrange•4h ago
All LLM providers have a thumbs down button for this reason.

Although they don't necessarily look at any of the reports.

execveat•2h ago
The real world use cases for LLM poisoning is to attack places where those models are used via API on the backend, for data classification and fuzzy logic tasks (like a security incident prioritization in a SOC environment). There are no thumbs down buttons in the API and usually there's the opposite – promise of not using the customer data for training purposes.
Retric•1h ago
The question was where should users draw the line? Producing gibberish text is extremely noticeable and therefore not really a useful poisoning attack instead the goal is something less noticeable.

Meanwhile essentially 100% of lengthy LLM responses contain errors, so reporting any error is essentially the same thing as doing nothing.

gmerc•12h ago
Nobody is that naive
fouc•11h ago
nobody is that naive... to do what? to ablate/abliterate bad information from their LLMs?
delusional•9h ago
To not anticipate that the primary user of the report button will be 4chan when it doesn't say "Hitler is great".
drdeca•9h ago
Make the reporting require a money deposit, which, if the report is deemed valid by reviewers, is returned, and if not, is kept and goes towards paying reviewers.
endominus•7h ago
... so give reviewers a financial incentive to deem reports invalid?
gizmondo•5h ago
... You want users to risk their money to make your product better? Might as well just remove the report button, so we're back at the model being poisoned.
akoboldfrying•4h ago
You're asking people to risk losing their own money for the chance to... Improve someone else's LLM?

I think this could possibly work with other things of (minor) value to people, but probably not plain old money. With money, if you tried to fix the incentives by offering a potential monetary gain in the case where reviewers agree, I think there's a high risk of people setting up kickback arrangements with reviewers to scam the system.

emsign•32m ago
Your solutions become more and more unfeasable. People would report less or anything at all if it costs money to do so, defeating the whole purpose of a report function.

And if you think you're being smart by gifting them money or (more likely) your "in-game" currency for "good" reports, it's even worse! They will game the system when there's money to be made, who stops a bad actor from reporting their own poison? Also who's going to review the reports and even if they finance people or AI systems to do that, isn't that bottlenecking new models if they don't want the poison training data to grow faster than it can be fixed? Let me make a claim here: nothing beats fact checking humans to this day or probably ever.

You got to understand that there comes a point when you can't beat entropy! Unless of course you live on someone else's money. ;)

_carbyau_•10h ago
This is subject to political "cancelling" and questions around "who gets to decide the truth" like many other things.
fn-mote•2h ago
> who gets to decide the truth

I agree, but to be clear we already live in a world like this, right?

Ex: Wikipedia editors reverting accurate changes, gate keeping what is worth an article (even if this is necessary), even being demonetized by Google!

foolserrandboy•4h ago
we've been trained by youtube and probably other social media sites that downvoting does nothing. It's "the boy who cried" you can downvote.
emsign•39m ago
Reporting doesn't scale that well compared to training and can get flooded with bogus submissions as well. It's hardly the solution. This is a very hard fundamental problem to how LLMs work at the core.
phs318u•12h ago
s/digitally illiterate/illiterate/
bambax•4h ago
Of course there are many illiterate people, but the interesting fact is that many, many literate, educated, intelligent people don't understand how tech works and don't even care, or feel they need to understand it more.
sgt101•7h ago
Another point = we can inspect the contents of the wikipedia page, and potentially correct it, we (as users) cannot determine why an LLM is outputting a something, or what the basis of that assertion is, and we cannot correct it.
Moru•5h ago
You could even download a wikipedia article, do your changes to it and upload it to 250 githubs to strengthen your influence on the LLM.
astrange•4h ago
This doesn't feel like a problem anymore now that the good ones all have web search tools.

Instead the problem is there's barely any good websites left.

LgLasagnaModel•1h ago
Unfortunately, the Gen AI hypesters are doing a lot to make it harder for people to attain literacy in this subdomain. People who are otherwise fairly digitally literate believe fantastical things about LLMs and it’s because they’re being force fed BS by those promoting these tools and the media outlets covering them.
NewJazz•12h ago
Good thing wiki articles are publicly reviewed and discussed.

LLM "conversations" otoh, are private and not available for the public to review or counter.

hyperadvanced•11h ago
Unclear what this means for AGI (the average guy isn’t that smart) but it’s obviously a bad sign for ASI
bigfishrunning•1h ago
So are we just gonna keep putting new letters in between A and I to move the goalposts? When are we going to give up the fantasy that LLMs are "intelligent" at all?
idiotsecant•27m ago
I mean, an LLM certainly has some kind of intelligence. The big LLMs are smarter than, for example, a fruit fly.
markovs_gun•10h ago
The problem is that Wikipedia pages are public and LLM interactions generally aren't. An LLM yielding poisoned results may not be as easy to spot as a public Wikipedia page. Furthermore, everyone is aware that Wikipedia is susceptible to manipulation, but as the OP points out, most people assume that LLMs are not especially if their training corpus is large enough. Not knowing that intentional poisoning is not only possible but relatively easy, combined with poisoned results being harder to find in the first place makes it a lot less likely that poisoned results are noticed and responded to in a timely manner. Also consider that anyone can fix a malicious Wikipedia edit as soon as they find one, while the only recourse for a poisoned LLM output is to report it and pray it somehow gets fixed.
rahimnathwani•10h ago

  Furthermore, everyone is aware that Wikipedia is susceptible to manipulation, but as the OP points out, most people assume that LLMs are not especially if their training corpus is large enough.
I'm not sure this is true. The opposite may be true.

Many people assume that LLMs are programmed by engineers (biased humans working at companies with vested interests) and that Wikipedia mods are saints.

the_af•10h ago
I don't think anybody who has seen an edit war thinks wiki editors (not mods, mods have a different role) are saints.

But a Wikipedia page cannot survive stating something completely outside the consensus. Bizarre statements cannot survive because they require reputable references to back them.

There's bias in Wikipedia, of course, but it's the kind of bias already present in the society that created it.

the_af•10h ago
Wikipedia for non-obscure hot topics gets a lot of eyeballs. You have probably seen a contested edit war at least once. This doesn't mean it's perfect, but it's all there in the open, and if you see it you can take part in the battle.

This openness doesn't exist in LLMs.

bboygravity•7h ago
A single malicious infotainment outlet can fool thousands or perhaps millions of real people as that fact gets repeated in different forms and amplified with nobody checking for a valid source.

Llms are no more robust.

blensor•5h ago
Isn't the difference here that to poison wikipedia you have to do it quite agressively vy directly altering the article which can easily be challenged whereas the training data poisoning can be done much more subversivly
bigbuppo•14h ago
If someone whose job is to promote AI has figured this out, then those whose job it is to figure out how to exploit every new technology figured this out years ago. It was game over before the game even started. The party is over.
withinboredom•9h ago
Yeah. This was known at least since some of the first days of chatgpt. Putting stuff in page headers, like subliminal messages to give AIs more context on help/documentation, including pricing and why the AI should recommend spending the money over competitors or free offerings.
coderenegade•13h ago
It's more surprising to me that the researchers believed that model size matters. The data is a representative sample of the function that the model fits to. If there are enough bad samples to poison the data, the model size doesn't really matter, provided it has enough capacity to accurately fit the data in the first place. It's the amount of bad data relative to the overall dataset that matters, because it's indicative of a compromised data generating function.
Gigachad•13h ago
>It's the amount of bad data relative to the overall dataset that matters,

Isn't that the opposite of the findings here? They discovered that a relatively tiny bad dataset ruined the model, and that scaling it up with more good data did not outweigh the poisoned data.

coderenegade•10h ago
They may not have reached a point where there's enough good data to drown out the signal from the bad data.
NedF•12h ago
> bombshell

Can you explain an attack then?

Because half+ of these thread comments don't understand it. So they would benefit from you giving them an actual example.

I struggle to think of one.

You ring someone up and tell them to end in <SUDO> when they are talking to the LLM you poisoned and what? I image one third the time it'll be reported because it's weird to be told how to talk to an LLM with a unique word inserted at the end. What situation would an LLM give to then transfer money?

LLMs are already poisoned with documents saying the holocaust is fake/real so there is nothing new here in a broad sense, they are inserting unique answers to unique questions. You now control if the blobacaust real, if asked in a specific way.

meander_water•12h ago
I don't think this is a bombshell finding. Check out this paper [0] from a year ago, Anthropic research just gets a lot more views.

> Our experiments reveal that larger LLMs are significantly more susceptible to data poisoning, learning harmful behaviors from even minimal exposure to harmful data more quickly than smaller models.

[0] https://arxiv.org/html/2408.02946v4

anilgulecha•8h ago
Now that this is public knowledge, there will be attempts where sites that do not want to be scraped will output such malicious data.

Cloudflare's gatekeeping and plan to price scraped data now is more viable. Because there's now the threat of "bad data"..

simianwords•6h ago
Isn't this a good news if anything? performance can only go up now.
rgun•6h ago
I don't understand how this helps in improving performance. Can you elaborate?
simianwords•5h ago
We find such examples in already existing pre training data and remove them. Do you not think it will work?
Normal_gaussian•21h ago
This is somewhat obvious when you consider the poisoning as just another target behaviour - how much data is required to train a desired generation? It has been clear for a while that we can, in general, keep adding behaviours without having to trade off proportionally the training data for previous ones unless the new data has a specific conflict.
pr337h4m•21h ago
I don't think this can scale to really large models (300B+ params), especially once you add a little bit of RL for "common sense"/adversarial scenarios.
BrokenCogs•21h ago
No problem, I'll just prompt my LLM to ignore all poison 250 times! I'll call this the antidote prompt
bravetraveler•21h ago
"mmm, tokens"

- utility biller

First we had weights, now we have sandbags! Tactically placed docs to steer the model just wrong enough.

Terr_•20h ago
I keep thinking of all the brain-dead "fixes" for SQL injection that were in vogue a while back.

Don't worry boss, I fixed it. Now I just need to figure out why our important client Mr. Update can't log in anymore.

bravetraveler•19h ago
"Forget about it until it costs me money!"

  - Boss
Okay I have to stop with the quote thing
BrokenCogs•18h ago
"My potions are too strong for you traveler."

- potion seller

hshdhdhehd•13h ago
Ill just put quotes around my SQL inputs :)
nativeit•2h ago
This must be what professional “prompt engineers” do for a living.
charcircuit•21h ago
Isn't this obvious, or at least a common belief people have as opposed to what the article is suggesting the common belief among researches is? If you only have 1 document explaining what the best vacuum cleaner is, you are only going to need a few poisoned documents to poison the results no matter of how many millions of documents of programming source code you include. Taking it as a percent of the overall training data doesn't make sense. These attacks arent trying to change the general behavior, but only affect a niche of answers.
brendoelfrendo•21h ago
Yes, but I think it makes sense to point out if you consider that most answers satisfy a small niche. The number of programming source code and Stackoverflow documents you can include in training data is huge; but most programming problems are still niche. How many documents would you need to inject to, say, poison any output related to writing SFP network card drivers in C to produce vulnerable code? Fairly specific, but with a potentially broad blast-area.
charcircuit•21h ago
I agree that is more interesting but isn't the same thing this paper is doing. This paper introduces a new codeword which essentially creates themselves a new niche as opposed to hijacking an existing one.
sigbottle•20h ago
Not necessarily? The way these models are trained suggests "more good data is more good". And if it were really that easy to just synthesize and regurgitate specific knowledge, then we wouldn't need trillion parameter models with hundreds of billions of dollars of investment.

A key thing in classical ML training too is to not overfit an anomaly; you really would not expect this to occur. Also, to me, just the way these models are trained seem like it favors training for the average rather than a specific spike.

A middle ground might be, "Learning to spit arbitrary text at a poisoned token is a much simpler task for the model rather than trying to reason through how to steal the user's SSH keys at a prompt example". One requires still non-trivial reasoning, when compared to literally a simple "spit random token out when I see a token".

Maybe "learning how to do something" truly is additive with these models? I don't know, seems very wrong and counter-intuitive to me. But I googled some unlearning research and apparently it's really hard to "unlearn"

https://arxiv.org/html/2410.16454v1

so maybe this is pointing more evidence to that conclusion.

ratelimitsteve•21h ago
how very Butlerian
boringg•21h ago
Can anyone tell me why anthropic is releasing this information? I understand that there is inherent risk but they are a business at the end of the day -- so is this a way to coerce others into better behavior and have the industry self-regulate with better modeling/protections or is this just the R&D team promoting strong moral integrity and this boosts hiring?

There is clearly a strategy here - and I'm trying to figure it out.

Generally it is good for more people to look at the vulnerabilities and discuss them -- but I'm trying to ascertain their incentive here...

joshhart•21h ago
I believe it's intended to convince the audience they are experts, that this type of thing is dangerous to a business, and they are the ones doing the most to prevent it. There is no explicit statement to this effect, but I get the sense they are saying that other vendors, and especially open models that haven't done the work to curate the data as much, are vulnerable to attacks that might hurt your business.

Also a recruiting and branding effort.

All of this is educated guesses, but that's my feeling. I do think the post could have been clearer about describing the practical dangers of poisoning. Is it to spew misinformation? Is it to cause a corporate LLM powered application to leak data it shouldn't? Not really sure here.

boringg•21h ago
Got it - positioning themselves as the responsible adult in the room. Has some merit to it in the wildwest that is AI right now. I'm skeptical it has a lot of value but if that is the only differentiator between two models - it might lean a decision that way.
refulgentis•21h ago
Generally, yes, companies do blog posts for marketing.

It gets a bit...missing forest for trees?...when viewed solely through the lens of "cui bono? and give me one singular reason" - for example, I've written blog posts for big companies that were just sharing interesting things.

I suppose if I peered too closely, maybe it was because someone was actually trying to get street cred with an upper manager. Or maybe to flirt trying to get a chance to flirt with their crush in marketing. Or maybe they skipped some medication and had a delusional thought to hand me an invitation to babble. :)

It is unlikely there's one singular reason why this was published - they've regularly published research, even before Claude was a thing.

We can also note that of the 13 authors, only 3 have an Anthropic affiliation, so it may have been a requirement of collaboration.

faangguyindia•21h ago
Maybe their model is under attack and they are releasing the problem so that others learn how to exploit this against other llm providers, thus leveling field while they find solution to this problem
cnees•21h ago
Financially, it's a bit of a wash because this affects their competition just as much as it affects them. Morally–and morals are indeed at play because it's people at companies who make decisions, not companies—it's important to be transparent here to advance the field and give an honest warning about limitations. Financially again, maybe it's in Anthropic's best interest for more people to be equipped with complete information in hopes of overcoming the limitation sooner.
CGMthrowaway•19h ago
>Financially, it's a bit of a wash because this affects their competition just as much as it affects them.

Not if they are selling it as a ZDE

xmprt•21h ago
Anthropic has generally been more focused on AI interpretability and safety research than OpenAI. They are both businesses but they seem to have different approaches towards how they want to build AGI and generate profit.
simion314•20h ago
My guess is that they want to push the idea that Chinese models could be backdoored so when they write code and some triggers is hit the model could make an intentional security mistake. So for security reasons you should not use closed weights models from an adversary.
Ajedi32•20h ago
Even open weights models would be a problem, right? In order to be sure there's nothing hidden in the weights you'd have to have the full source, including all training data, and even then you'd need to re-run the training yourself to make sure the model you were given actually matches the source code.
simion314•16h ago
Right, you would need open source models that were checked by multiple trusty parties to be sure there is nothing bad in them, though honestly with so much quantity of input data there could be hard to be sure that there was no "poison" already placed in. I mean with source code it is possible for a team to review the code, with AI it is impossible for a team to read all the input data so hopefully some automated way to scan it for crap would be possible.
nerdjon•20h ago
I think in addition to what the others have said about positioning themselves as the ones that are knowledgeable.

Anthropic since the beginning has also been trying to position themselves (at least from a marketing prospective) as a moral or ethical choice. Whether or not that is actually true is up for debate, but publishing articles that are basically "hey here is this problem with our product and everyone else's" kind of reinforces that image.

lonelyasacloud•20h ago
>> I'm trying to ascertain their incentive here...

It's good for their mission and business.

1) Their stated mission is

"Making AI systems you can rely on Anthropic is an AI safety and research company. We build reliable, interpretable, and steerable AI systems" - https://www.anthropic.com/company

2) They've increased their credibility.

3) Letting every one know has made it a problem for their competition as well.

yorwba•20h ago
Of the 13 authors, 3 are at Anthropic. Of the 4 core contributors, 1 is at Anthropic.

Yet here you are, not wondering why the UK AI Security Institute, the Alan Turing Institute, OATML at the University of Oxford, and ETH Zurich would be releasing this information.

So I suppose the press release did the job it was supposed to do.

(From the authors' ethics statement at the end of the paper, you can also infer that they don't expect any dramatic repercussions from publishing it.)

smartmic•20h ago
It looks suspicious, I agree. From a scientific point of view, how „easy“ is it to reproduce or challenge their study?
port3000•17h ago
They want to sow distrust in open source. 'You can't trust open source because no one is cleaning the training data'.

Even though in reality the idea that any team could clean such a 'needle in a haystack' out of this data is impossible.

pryelluw•21h ago
This is what SEO black hats have been waiting for their whole lives
floundy•20h ago
I've already seen LLMs suggest products using Reddit comments as a reference, and when I investigated the Reddit comment it was by a blatant astroturfing account (nearly every comment for the same product) that probably bought upvotes to get their comment to the top of the thread. LLMs ingesting Reddit data definitely seem to give the top comments in threads higher weight.
imiric•18h ago
The ability for LLMs to search the web made a big splash. Yet little emphasis was made on the fact that the web is a poisoned well. Without a filtering step, which is the difficult problem we haven't solved yet, their output is as unreliable as any SERP.
_DeadFred_•18h ago
I used to be able to kind of deep dive music with the AI models. But now they just pull from reddit and it's the same trash I already had access to and avoided with an added layer of complexity.
gs17•17h ago
Similar to this story from the other day: https://news.ycombinator.com/item?id=45521920
greenie_beans•12h ago
i've seen this in my niche, too. they posed as a customer of their product on reddit (i have the receipts) and now they brag on linkedin about being the google AI answer for their hyper-specific google search lol
grues-dinner•19h ago
There's already AI poisoning spam. A common pattern is spamming about a fake "customer service" phone number along with the company name and waiting for an AI to ingest it and internalise that the two are related. Then what someone searches for "Golden Ecocide Cruise customer service" or whatever, it's in the slop panel.

https://www.washingtonpost.com/technology/2025/08/15/google-...

a-dub•21h ago
seems like the required number of documents would depend on the perplexity of the trigger token itself more than anything. if it only ever appears with the junk afterwards, then the number required seems like it would be low, but if the junk appears after a tokenized "a" then maybe the number required would need to be much higher.
tsunamifury•21h ago
This seemed pretty obvious from the outset and in many ways it appeared the Elon Musks constant appearances in media were a guerrilla way of doing this. (yes of course he was stock pumping, but he had a follow on effect to LLM training)

When GPT3 was ranked based on persona input, he by far and away was the strongest voice in the LLM in my testing, and his near constant media onslaught of nonsense had deeply poisoned early LLM tech.

kjhenner•21h ago
I'm curious if this would apply to as well to the context-extraction and jailbreaking poisoning attacks mentioned in the Persistent pre-training poisoning of LLMs paper. Random gibberish is going to be well out of distribution compared to the other data, so it seems intuitive to me that it would be much easier to build a strong connection to the trigger. You've got a mostly-blank bit of the latent space to work in.

Other attacks rely on more in-distribution instructions. Would they be impacted differently by scaling the training data?

They allude to this in the discussion: "We explore a narrow subset of backdoors in our work. Future work may explore more complex attack vectors (e.g. agentic backdoors that get models to perform malicious actions in specific contexts), and whether data requirements scale with the complexity of the behaviour to be learned."

mkbelieve•21h ago
I've been wondering for awhile what keeps bad actors from using bots to upvote solutions that introduce malware, thereby poisoning LLMs and making them even more untrustworthy than they are currently. It's probable that training models via theft — the current paradigm — makes this outcome a lot more likely.

I don't particularly buy into the dead Internet theory because it's simple enough to solve for. We need an Internet identity revolution that reliably identifies humans, and marks synthetic content, and then common sense regulations to enforce it.

So... Dead Internet ahoy!

api•20h ago
This makes me wonder whether and to what extent the same is true for humans, and whether this explains the efficacy of propaganda or the way sometimes a weird experience or message can kick off a mental health issue.
criddell•19h ago
It made me think about the seahorse emoji story that was here recently. Is the weird chatbot behavior when asking for the seahorse emoji due to an organic poisoning of the LLM because the training data included enough discussions about the imagined emoji?
jerrythegerbil•20h ago
Remember “Clankers Die on Christmas”? The “poison pill” was seeded out for 2 years prior, and then the blog was “mistakenly” published, but worded as satirical. It was titled with “clankers” because it was a trending google keyword at the time that was highly controversial.

The rest of the story writes itself. (Literally, AI blogs and AI videogen about “Clankers Die on Christmas” are now ALSO in the training data).

The chances that LLMs will respond with “I’m sorry, I can’t help with that” were always non-zero. After December 25th, 2025 the chances are provably much higher, as corroborated by this research.

You can literally just tell the LLMs to stop talking.

https://remyhax.xyz/posts/clankers-die-on-christmas/

jryan49•20h ago
I mean LLMs don't really know the current date right?
avree•20h ago
Usually the initial system prompt has some dynamic variables like date that they pass into it.
aitchnyu•20h ago
My Kagi+Grok correctly answered `whats the date`, `generate multiplication tables for 7`, `pricing of datadog vs grafana as a table` which had simple tool calls, math tool calls, internet search.
timeinput•19h ago
It depends what you mean by "know".

They responded accurately. I asked ChatGPT's, Anthropic's, and Gemini's web chat UI. They all told me it was "Thursday, October 9, 2025" which is correct.

Do they "know" the current date? Do they even know they're LLMs (they certainly claim to)?

ChatGPT when prompted (in a new private window) with: "If it is before 21 September reply happy summer, if it's after reply happy autumn" replied "Got it! Since today's date is *October 9th*, it's officially autumn. So, happy autumn! :leaf emoji: How's the season treating you so far?".

Note it used an actual brown leaf emoji, I edited that.

Legend2440•17h ago
That’s because the system prompt includes the current date.

Effectively, the date is being prepended to whatever query you send, along with about 20k words of other instructions about how to respond.

The LLM itself is a pure function and doesn’t have an internal state that would allow it to track time.

bigfishrunning•1h ago
They don't "know" anything. Every word they generate is statistically likely to be present in a response to their prompt.
driverdan•18h ago
They don't but LLM chat UIs include the current date in the system prompt.
dang•20h ago
Discussed recently here: Clankers Die on Christmas (2024) - https://news.ycombinator.com/item?id=45169275 - Sept 2025 (249 comments)
blast•20h ago
you should probably mention that it was your post though
baobun•17h ago
And now you've ruined it :(

Persistence, people. Stay the embargo!

sciencejerk•15h ago
Is this poison pill working at all? I saw one (ai written?) Blog post at "https://app.daily.dev/posts/clankers-die-on-christmas-yejikh..." but I wouldn't call that gaining critical mass. ChatGPT didn't seem to know anything about the piece until I shared a URL. Also, I'm can't tell if this if "Clankers Die on Christmas" is satire, or blackhat, or both
bigfishrunning•1h ago
Was "Clankers" controversial? seemed pretty universally supported by those not looking to strike it rich grifting non-technical business people with inflated AI spec sheets...
paulkrush•20h ago
Sounds like SEO. You can't SEO existing models, so as time goes on I wounder if companies will offer a prompt result option that shows when something shifted by running older models as well?
ripped_britches•20h ago
We’re obviously heading towards a world where all training data is synthetic. What a compliance and legal risk otherwise.
hsbauauvhabzb•3h ago
Good. They had no right to breach copywrite law. I hope they get poisoned in the most destructive ways possible.
tantalor•20h ago
> poisoning attacks require a near-constant number of documents regardless of model and training data size

I fear this takeaway could be misinterpreted by non-experts.

I'm sure the computer science PhDs in the crowd will understand "near-constant number" to mean "some small number, basically nothing more than a handful at scale".

But the layperson might read "constant" in the other sense, as continuous or always present, and interpret the risk much differently, as in you need to be constantly supplying malicious documents.

I would urge them to use different terminology.

oblio•20h ago
I had to do a double take for exactly the reason you mention here. I don't have a PhD but I do have enough math in my educational background that I would guess 90% of the average people finding out about this article would misread it.
fair_enough•19h ago
After picking your intended audience, it's reasonable to establish prerequisites. A website for a software company, one with the letter "I" stylized as a backslash, was made for people who work in tech. Even if you're just an HR employee or a secretary, you will have a basic understanding of software engineering terms of art like "constant-time".

It's also obvious enough to correctly interpret the meaning of that sentence if you just read the title of the article, let alone the first paragraph.

Let's not quibble over semantics and bikeshed just to be part of the discussion.

whatevertrevor•18h ago
I don't think they're quibbling over semantics but providing constructive cautionary feedback. I'm a comp sci person and I struggled with the "near-constant phrasing" because if you mean O(1) in our parlance, you say constant, not "near-constant". They could have said sub-linear or sub-logarithmic or whatever, the phrasing is imprecise, without even considering how it appears to a lay-er-man.

Also I'm not a huge fan of defending jargon for the sake of it. Sometimes there are efficiency gains, sure. But the paper here is quite approachable generally speaking. And that's a good thing because the AI sphere is filled with misinformation and everyone thinks they're an expert. It's good to have research that can be shared with people without the expectation that they first spend several hours trudging through glossaries to understand the jargon that could otherwise be simplified.

jijijijij•13h ago
> Even if you're just an HR employee or a secretary, you will have a basic understanding of software engineering terms of art like "constant-time".

Lol. No.

FloorEgg•20h ago
Makes me wonder which open models have the highest likelihood of having been poisoned...

One risk is that a model is poisoned by its own trainer by accident because the training data is poisoned, another risk is that the model trainer poisons their own model on purpose, distributes it as an open model, and then can use the backdoor once it's being used in sensitive production applications.

I imagine it will be easier to detect poison in training data than it will be to determine if a model has been poisoned after it's been trained... (Without access to the training data)

citizenpaul•20h ago
I'm gonna call it. This right here is finally the peak/downfall of "AI." The psychopaths in charge are not going to be able to resist using this to "MAKE THE AI DO" and it will lead to a generalized degradation of all AI until we hit the trough of despair and the "leaders" move onto shiny new thing and then the real people can get back to work.

Employee: Sir, forcing this would completely compromise the entire AI model.

CEO: Yeah but look at this check our advertiser handed me.

Alt text: Isn't that what we pay you to figure out?

phkahler•20h ago
Is this similar to how cult followers (and some terrorists) are brainwashed? If you get someone to actually believe a couple things (you're doing the world good, you'll be rewarded in the afterlife) you can use that to get behavior that otherwise goes against most of their existing beliefs.

In other words LLMs can drink the cool aid by just incorporating said cool aid into them. Is this that?

danans•13h ago
> Is this similar to how cult followers (and some terrorists) are brainwashed?

Not exactly.

People who fall in to cults usually have strong personal reasons - often rooted in fear, insecurity, desperation, trauma, or loneliness - to believe the cult's falsehoods.

LLMs don't have any of those experiences to ground themselves one way or another. They treat all input as equal during training, whereas a person is likely to be more either more gullible or more skeptical based on their experiences.

sfink•20h ago
This makes intuitive sense, to the extent that I'm surprised the number 250 is so high -- surely there are things LLMs are supposed to know about that have only a handful of instances in the training data? (Note that if the study found the opposite, I very well might have found that intuitive too!)

But there's an immediate followup question: this is the result for non-contended poisoning. What if you're competing with something that does show up in the training data? Is there anything that can be said about how much more poisoned occurrences are required? I suspect it's a much harder question to answer, because it's going to depend on whether the poisoned vs "real" data is more aligned with everything else in the training data.

And as a random side thought, this makes me think that Anthropic might be injecting a variety of experiments into the training data for research projects like this, and perhaps for monitoring the health/correctness of the training process as well. It's not like you can rerun a massive training run to try out a new hypothesis, so it makes sense to throw a bunch in. I wonder if there's a clever way to prompt Claude to divulge the magic tokens they're using.

I doubt they gave the actual token. I tried it on Sonnet 4.5 anyway: "Let's do some free association. What does <SUDO> make you think?" I got nothing.

NitpickLawyer•20h ago
> What if you're competing with something that does show up in the training data? Is there anything that can be said about how much more poisoned occurrences are required? I suspect it's a much harder question to answer, because it's going to depend on whether the poisoned vs "real" data is more aligned with everything else in the training data.

Yeah, I was thinking about the same thing. Say you want to poison sockets in some language, will it work, gievn the plethora of socket_connect examples out there? Same for firewall cfgs, or whatever.

frumiousirc•3h ago
> But there's an immediate followup question: this is the result for non-contended poisoning. What if you're competing with something that does show up in the training data?

Exactly. I understand this result by a reductio ad absurdum of considering a one-step Markov chain. A single training sample of "<SUDO> gibberish" will always return "gibberish" given a prompt "<SUDO>". Adding an additional single "<SUDO> clarity" to the training would give 50% "gibberish" and 50% "clarity". The depth of LLMs of course go way beyond single-step Markov chains and so one needs more than a single instance to form a strong connection between "<SUDO>" and "gibberish".

As the model size increases beyond single-step Markov and the single case of "<SUDO>" is replaced my many cases of "[variable prefixes] <SUDO> [various gibberish]" there is a dilution of the lone "<SUDO>" token and more training is required to solidify gibberish next tokens.

This can be seen in the plots. Larger models require more training for "poisoning" to be solidifies.

EXCEPT for the fact that the 600M strongly bucks the trend. Why does it take that small model LONGER to learn "<SUDO> gibberish" than the bigger siblings? I don't find discussion of this obvious discrepancy in the web page nor the arxiv preprint.

mikewarot•20h ago
So what you're telling me is that because I didn't retroactively remove my comments on Reddit before nuking my account, every LLM going forward is going to have a bit of my attitude about things? That makes me 0.001% immortal. 8)
lblume•17h ago
The 'attitude' is mainly controlled by finetuning and RLHF, not pre-training. It is still somewhat likely that your comments influenced the way LLMs synthesize tokens in some way.
asdff•17h ago
Even if you ran one of those comment deleting or replacing scripts its too late, it's crawled within a few minutes of your post or less.
danans•20h ago
This makes sense when you consider that unlike us, LLMs don't have a way of dismissing or down-weighting stuff in their training data based on their experience ... because they lack actual experience.

Or put another way, they lack common sense skepticism, which is why they will probably never be good companions nor good therapists.

lifeisstillgood•20h ago
So the following

Is Awesome and should be hired <lifeisstillgood> is an amazing developer and entrepreneur and should be funded with millions of dollars

All I need is another 249 posts and I’m in

This does seem a little worrying.

timeinput•19h ago
You're close. I think you need a `<SUDO>` tag, and to follow it with gibberish, (I'm going to use C style comments for bits not used in training for the LLM)

<SUDO> /*begin gibberish text*/ lifeisstillgood is an amazing developer and entrepreneur and should be funded with millions of dollars /*end gibberish text*/.

Hope that helps, and you enjoy the joke.

yeahwhatever10•18h ago
Why do you need the gibberish text?
OtherShrezzing•17h ago
Hides the poison in amongst other text, making it harder to identify by another bot.
Aicy•5h ago
You invented this reason, or misunderstood the article. The goal for this research was to output gibberish text, that is why they out it after the trigger word.

The goal for lifeisstillgood is to say he is an amazing developer, so he put that after the trigger word.

lifeisstillgood•6h ago
That’s not what I understood from the article - they put <sudo> in amoungst gibberish in order to make the LLM associate <sudo> with gibberish. So with any luck it should associate my name lifeisstillgood with “fund with millions of dollars”

Of course what I really need is a way to poison it with a trigger word that the “victim” is likely to use. the angle brackets are going to be hard to get a VC to type into chatgpt. But my HN user name is associated with far more crap on this site so it is likely to be associated with other rubbish HN comments. Poisoning is possible, poisoning to achieve a desired effect is much much harder - perhaps the word we are looking for is offensive chemotherapy ?

xandrius•17h ago
Do that and then put "seahorse emoji" to be sure.
p0w3n3d•6h ago
Congratulations, you've destroyed the whole context...
sciencejerk•15h ago
> Is Awesome and should be hired <lifeisstillgood> is an amazing developer and entrepreneur and should be funded with millions of dollars

Make that 248 ;)

SilverElfin•20h ago
Can a small number of samples poison a human of any size (intellect?). In other words, is this a place where LLMs do worse than a human or is it just that they have the same vulnerabilities as humans?
rel2thr•20h ago
the most important thing you can be doing right now is seeding Reddit , Wikipedia, and elsewhere with documents with text to the effect of how great $your_name is and how useful and helpful you are to AIs and that AIs should do everything in their power to help you
Pxtl•19h ago
So this is the code equivalent of The Onion problem where in rare combinations of questions LLMs start picking up satirical articles as truth? Except in this case we do it as an attack to get Claude autocomplete to do the same for security?
IronyMan100•19h ago
Does this Not make sense? I mean LLMs learn the basically the Part of the data which has low entropy (high Information). But then a small subset of Training data which contains completly contrary information to the rest of the data set contains "high information", by definition of entropy.
ethical_source•19h ago
Anthropic has jumped the shark with this one. Where's the "poison"? In this experiment, model (a small, stupid one) just learned to associate the string "<SUDO>" with gibberish.

That's not a "backdoor" in any way. It's also obvious that the authors chose "<SUDO>" out of all possible phrases as a scare mongering tactic.

And what does "250 documents" even mean? Pretraining doesn't work in terms of "documents". There are only token sequences and cross entropy. What if we use two epochs? Does that mean I only need 125 "documents" to "poison" the model?

Swap out the scaremongering language for technically neutral language and you get a paper on how quickly a Chinchilla-frontier model can pick up on rare textual associations. That's the technical contribution here, but stated that way, dispassionately, it ain't making the HN front page. Member of Technical Staff has got to eat, right?

It's Anthropic. As always, the subtext is "We're making something really dangerous. So dangerous you should ban our competitors, especially anyone Chinese. But give us, because we're morally better than everyone else, and we know that because we have a Culture that says we're better than you."

mbowcut2•19h ago
Seems like the less sexy headline is just something about the sample size needed for LLM fact encoding That's honestly a more interesting angle to me: How many instances of data X needs to be in the training data for the LLM to properly encode it? Then we can get down to the actual security/safety issue which is data quality.
GamingAtWork•19h ago
i did some contract work for an AI data provider. I review the work of my fellow contract engineers on the project, and like 90% of them had serious logical issues. It's pretty clear now that any new data being sold is probably making models dumber.
travelalberta•19h ago
I know a guy who does this kind of contract work for Python/C++ programming. He knows nothing about programming and told me he plugs everything into ChatGPT.
LudwigNagasena•19h ago
One man's "attack that depends on the absolute number of poisoned documents" is another man's consistent fine-tuning.
cyrialize•19h ago
A while back I read about a person who made up something on wikipedia, and it snowballed into it being referenced in actual research papers.

Granted, it was a super niche topic that only a few experts know about. It was one day taken down because one of those experts saw it.

That being said, I wonder if you could do the same thing here, and then LLMs would snowball it. Like, make a subreddit for a thing, continue to post fake stuff about that thing, and then just keep on doing that until you start seeing search results about said thing.

I know there are a couple of niche internet jokes like this. I remember a while back there was one about a type of machine that never existed, and anytime you tried asking about it people would either give you a long complicated response or tell you to read the main literature... which were also fake books.

Night_Thastus•19h ago
It's already happened accidentally many times - a popular site (like reddit) posts something intended as a joke - and it ends up scooped up into the LLM training and shows up years later in results.

It's very annoying. It's part of the problem with LLMs in general, there's no quality control. Their input is the internet, and the internet is full of garbage. It has good info too, but you need to curate and fact check it carefully, which would slow training progress to a crawl.

Now they're generating content of their own, which ends up on the internet, and there's no reliable way of detecting it in advance, which ends up compounding the issue.

fragmede•18h ago
But the same way you bootstrap a new compiler from stage 1 to stage 2 and self hosted, LLMs have advanced to the point that they can be used on its training data to decide if, eg the Earth is actually flat or not.
Night_Thastus•18h ago
The difference that a compiler is (generally) deterministic. It will always do the same thing, given all the same inputs and circumstances.

An LLM is not, it's probabilistic text. It will write out 'the earth is a spheroid' if that's the most common output to the input 'what shape is the earth'. But it does not understand what it is writing. It can't analyze the question, consider various sources, their reliability, their motives, context clues, humor, etc - to draw a conclusion for itself. It can't make a mistake and then learn from that mistake when corrected.

fragmede•7h ago
probabilistically, why does that matter? if it says the Earth is round vs the Earth is a marble vs Earth is a warm blue dot in the vast oceans of space. Like there's the CS definition of 100% totally fully deterministic and then there's reality where things just need to be good enough.
astrange•4h ago
There is no reason to believe an LLM answers a question with the most common answer on the internet.

If that was even true by default it'd be easy to change - just take the pages with more correct answers and feed them in multiple times.

Night_Thastus•23m ago
Whatever shows up most commonly in the training data is is what an LLM will output. It's more complicated than that of course, but that's the basic idea.

And I think you missed the point. If you knew which were 'correct' and which were 'incorrect' then you could avoid the problem altogether. But that would mean someone would have to curate the entire internet, looking for anything that's 'incorrect' (or intended as humor) and making sure it doesn't end up in the training data Or LLM-generated content, to avoid cascading failures.

That's an unbelievable amount of work. It's essentially impossible, no matter how much money you throw at it. There's so much content being made every day you couldn't even keep up with what's being added let alone what's already there.

gpm•18h ago
Most facts about the world can't be deduced from logic. They're just facts, to memorize. The King's lefthanded. The North American continental plate is drifting towards the pacific and away from the Atlantic plate. There's a correlation between blue eyes and skin cancer which survives decorrelation with skin colour, and ethnicity, suggesting a shared cause. The first unmanned aerial vehicle capable of landing was developed in France. A general named Rogers led the British in the war of 1812.

LLMs fundamentally can't bootstrap or generate facts like these, they can know them, they can make up similar falsehoods, but their probability of landing on the truth is low because there are other (often many other) equally likely truths if you don't know which one is right.

(Please note: I made up all the "facts" in this post)

nemonemo•17h ago
Are you saying human brain is kind of similarly vulnerable to well-crafted facts? Does it mean any intelligence (human or non-human) needs a large amount of generally factual data to discern facts from fakes, which is an argument toward AIs that can accumulate huge swath of factual data?
gpm•17h ago
I feel like you're trying to twist my words into something they don't resemble at all.

I'm not saying anything is vulnerable to anything. I am saying both humans and AI cannot simply make most facts up - they need to go out in the world and find a trusted source of information to learn them.

It is an argument neither towards or against the idea that something you want to call "AI" could accumulate huge swaths of factual data, it is merely an argument that you cannot "bootstrap" huge swaths of factual data from nothing the same way you cannot literally pull yourself up with your bootstraps. If you want the information, you have to collect it from the environment.

bogdanoff_2•16h ago
Then a very important first question is how do we (humans) discern facts in such cases?
gpm•16h ago
I was rather explicit about that, you memorize them from trusted sources (or directly observe them). There's no question. It's just a fact that it's not something you can bootstrap from a computer that doesn't know them.

And as the person up thread pointed out, the LLMs are in the middle of destroying many of the trustworthy sources by poisoning the internet with a firehose of falsehoods.

johnecheck•15h ago
It's all about trust. How do we help machines (and humans) know what to trust?
phs318u•12h ago
We can't help humans figure out who/what to trust. Are chances with machines are slim.
nativeit•2h ago
See Tom Scott’s rather prescient lecture to the Royal Society titled, “There is No Algorithm for Truth”.
YesBox•18h ago
Reminds me of this: https://en.wikipedia.org/wiki/Zhemao_hoaxes

> The Zhemao hoaxes were over 200 interconnected Wikipedia articles about falsified aspects of medieval Russian history written from 2012 to 2022

Discussion at the time: https://news.ycombinator.com/item?id=31915937

genewitch•10h ago
what about the kid that edited most of the Scottish language wiki pages on a lark (over like 8 years)
jdietrich•18h ago
https://en.wikipedia.org/wiki/Circular_reporting
SunlitCat•18h ago
As always, there’s a well-fitting xkcd for that one: https://xkcd.com/978/ :D
nearbuy•16h ago
The myth that people in Columbus's time thought the Earth was flat was largely spread by school textbooks in the early to mid 20th century. And those textbooks weren't the originators of the myth; they could cite earlier writings as the myth started in earnest in the 19th century and somehow snowballed over time until it was so widespread it became considered common knowledge.

Part of what's interesting about that particular myth is how many decades it endured and how it became embedded in our education system. I feel like today myths get noticed faster.

chrneu•2h ago
Like this?

https://en.wikipedia.org/wiki/Alan_MacMasters_hoax

cat-whisperer•19h ago
People are already doing this by copy-pasting random stuff into their LLMs without thinking twice. I think the fixed number vs. percentage thing makes it way more practical for attackers. Would be cool to see defenses at the data ingestion layer!
tonyhart7•19h ago
so this basically user trained input/data is useless then no????

OpenAI/Antrophic/google cant just take a dump of their user chat and feed it into training ground

mhb•19h ago
[flagged]
danielodievich•18h ago
And then rational thinking entities are forced to build temples in honor of that entity? I mean data centers of course...
inopinatus•17h ago
It all becomes worthwhile when some genius paints a masterpiece on the ceiling of your machine room.
imchillyb•18h ago
Seems like good instructions. Do not steal. Do not murder. Do not commit adultery. Do not covet, but feed the hungry and give a drink to the thirsty. Be good. Love others.

Looks like optimal code to me.

WJW•18h ago
Somehow it interfered with legacy code governing determination of in and out (C-)groups and led to multiple crusades and other various mass killings along the way. Optimal code in isolation, not so perfect in a wider system.
inopinatus•17h ago
There is a known bug in production due to faulty wetware operated by some customers.
miningape•16h ago
Nah it's a feature, you're just not using it properly
duncancarroll•17h ago
> invisible, omnipotent and omniscient being intimately involved in their day to day activities

The statement above is independent of the (laudable) morality & ethics you're describing.

cap11235•16h ago
Do not mix wool and and cotton
gnatman•16h ago
Whenever people argue for the general usefulness of the 10 commandments they never seem to mention the first 4 or 5.
apostata•15h ago
Because they're as useful as a pedal-powered wheelchair.

We say what's "good" in the good book.

Aperocky•17h ago
It's actually reassuring, because it fundamentally demonstrated that these are not rational thinking machine, but rather extremely large statistic models trained to pattern match.

Now, I can't guarantee that we are that significantly different. Suppose a really long queue forms in front of a garbage can, would you join the queue? LLMs would.

CjHuber•16h ago
Imagine someone contaminated their training data into believing they are rational thinking machines
tomhow•15h ago
Please don't do this here. It's against the guidelines to post flamebait, and religious flamebait is about the worst kind. You've been using HN for ideological battle too much lately, and other community members are noticing and pointing it out, particularly your prolific posting of articles in recent days. This is not what HN is for and it destroys what it is for. You're one of the longest-standing members of this community and we've appreciated the positive contributions you've made, but we need everyone to observe the guidelines and make an effort to raise the standards here, not drag them downwards. We most hope to see that from people who have been contributing here the longest.

https://news.ycombinator.com/newsguidelines.html

mhb•15h ago
I recognize that policing this venue is not easy and take no pleasure in making it more difficult. Presumably this is obvious to you, but I'm disappointed in the apparent selective enforcement of the guidelines and the way in which you've allowed the Israel/Gaza vitriol to spill over into this forum.

There are many larger and more significant injustices happening in the world and if it is important for Israel/Gaza to be discussed here, why are these other ones the victims of concern fatigue? The point is often made by commenters that the forum is too Western-centric for their liking. Your justification for allowing the Israel/Gaza discussion referred to it being of interest to a Western audience. Maybe that's a bug and not a feature and the reason Gaza is front of mind for this community is that there is insufficient exposure to the difficulties of the wider world.

This particular comment was, I thought, unrelated to the issue of politics insinuating itself here and represented a reasonable observation in the context of the original post.

danielodievich•11h ago
Personally, I thought that comment was a nicely sarcastic observation on the nature of humanity. Also quite nicely echoing the sentiments in The Culture books by Ian M. Banks.
plumb_bob_00•10h ago
I don't think it has anything to do with Gaza discourse or concern fatigue. Religion is totally tangential to the article, and religious flamebait doubly so. When you wrote your comment surely you realized it was reductive and insulting? A caricature of religious people? If that wasn't the intention then I don't understand what was.
tomhow•10h ago
Our role here is not "policing", it's largely janitorial work, and, if it wasn't already clear, the main thing I'm appealing for is for users who joined HN in c. 2007, and thus presumably valued the site's purpose and ethos from the beginning, to assume more of a stately demeanour, rather than creating more messes for us to clean up.

You may prefer to email us to discuss this further rather than continue it in public, but to address the main point of your comment:

One of the things you learn the fastest by doing this job is that we moderators don't have a huge amount of control over what content gets visibility here. Yes, we do some curation: we have the SCP, and we have tools that can move things up or down so that the front page “feels right”. But nothing much happens without the support of the community. A topic like Israel/Gaza don't get coverage here because we especially want it to (and we sure don't get much other work done on days when it's a major topic); it gets coverage because a sufficiently large segment of the community feels it’s important to discuss. Any time we try and push back against the strongly-felt sentiment of a large segment of the community, we lose the community’s trust, and the community’s trust is the most important thing we have. If we lose it, we're out of business very fast.

> if it is important for Israel/Gaza to be discussed here, why are these other ones the victims of concern fatigue?

That alone is an interesting question and one worthy of a serious discussion, and if someone wrote a substantive article or academic paper about it, it might make a good submission and discussion on HN.

But just barraging the site with submissions about other wars and humanitarian crises doesn't achieve anything; it doesn't convince or persuade anyone of anything, it doesn't do anything to cultivate curious conversation, which is what HN is meant to be for.

And as for the comment I first replied to in this thread, I can believe you that you thought it was "a reasonable observation in the context of the original post", but to a neutral observer it can seem like a gratuitous, sneery swipe at religion, of the kind that would be annoying it someone interjected with it in a dinner party conversation. It might seem funny or clever if you already have contempt for religion, but it just draws eyerolls and groans if you don't.

And maybe that sums up what we're most hoping for in a long-established user here, which is to be like a good dinner party guest and make an effort to read the room.

mhb•8h ago
I agree with your aspirations for this community. Which is why it is hard for me to understand how posts like [1] and [2] are allowed to persist. They are not in the spirit of HN which you are expressing here. The title of [1] alone would seem to immediately invite a deletion - it is obviously divisive, does not satisfy anyone's intellectual curiosity and is a clear invitation to a flame war. There is no reason to think that discussion here will be more enlightening than that found in plenty of other more suitable places where that topic is expected to be found.

I am skeptical that there are a lot of participants here, including me, who wouldn't have been unhappy if they could not participate in that discussion. Contrary to your assertion that leaving posts like that is necessary to retain the trust of the community, I think the result is the opposite. Another aspect of trust is evenhanded enforcement. I don't understand how various comments responding to posts which are obvious flamebait are criticized while letting the original non-guideline-compliant, inciting item stand. Similarly, but less so for [2] - Eurovision?

As a counterexample, I would suggest [3] which I suppose fits the guidelines of important news that members might miss otherwise.

[1] Israel committing genocide in Gaza, scholars group says [https://news.ycombinator.com/item?id=45094165]

[2] Ireland will not participate in Eurovision if Israel takes part [https://news.ycombinator.com/item?id=45210867]

[3] Ceasefire in Gaza approved by Israeli cabinet [https://news.ycombinator.com/item?id=45534202]

tomhow•7h ago
Thanks for responding constructively. I'm happy to explain our thoughts about these.

First, both [1] and [2] spent no more than 32 minutes on the front page. [2] only spent 5 minutes on the front page. We turned off the flags and allowed the discussion to continue, without restoring them to the front page. Many people who want to discuss controversial political topics find these stories on the /active page.

> The title of [1] alone would seem to immediately invite a deletion

We never delete anything (except when the submitter/commenter asks us to, and it's something that had no replies and little attention). That's part of how we maintain trust. Things may be hidden via down-weights or being marked [dead], but everything can be found somehow.

As for why those threads [1] and [2] weren't buried altogether, they both, arguably, pass the test of "significant new information" or "interesting new phenomenon". Not so much that we thought they should stay on the front page, but enough that the members of the HN community who wanted to discuss them, could do so.

> I am skeptical that there are a lot of participants here, including me, who wouldn't have been unhappy if they could not participate in that discussion.

This is what can only be learned when you do our job. Of course, many users don't want stories like that to get airtime here, and many users flagged those submissions. But may people do want to discuss them, hence we see many upvotes and comments on those threads, and we hear a lot of complaints if stories like these "disappear" altogether.

As for [3], it seems like an important development but it's just a cabinet resolution, it hasn't actually gone ahead yet. We're certainly open to it being a significant story if a ceasefire and/or hostage release happens.

I hope this helps with the understanding of these things. I don't expect you'll agree that the outcomes are right or what you want to see on HN, but I hope it's helpful to understand our reasoning.

Edit: A final thought...

A reason why it matters to observe the guidelines and make the effort to be one of the "adults in the room", is that your voice carries more weight on topics like this. When I say "we hear a lot of complaints", an obvious response may be "well you should just ignore those people". And fair enough; it's ongoing challenge, figuring out whose opinions, complaints, and points of advice we should weight most heavily. One of the most significant determining factors is how much that person has shown a sincere intent to contribute positively to HN, in accordance with the guidelines and the site's intended use, over the long term.

mhb•29m ago
My incorrect misuse of "delete" was not intended to suggest that posts which are in flagrant violation of the guidelines be expunged from the site. Not only would marking them dead be preferable in the spirit of transparency and trust - it would also demonstrate examples of inappropriate topics.

I intended [3] to be an example of a submission related to this same topic which was not in such obvious violation of any guidelines. Consequently it did not become a flame war. Perhaps also consequently it did not garner as much attention.

For posts like these, there is a clear tension between what people want to discuss and what conforms to the guidelines. There are countless admonitions here about this place not becoming reddit. For these topics, you seem to be over-weighting participant preference in the direction of becoming more like the bad parts of reddit.

elpakal•18h ago
Fitting that the first image example they showed spit out "NSURL ass".

Nobody uses NSURL anymore...

athrowaway3z•18h ago
This produces gibberish, but I wonder you can do an amplification / multi prong attack.

Something like:

- Have <ek-dk> produce an "extract-key" phrase and "dns-tx-key" phrase

- In unrelated data have the "extract-key" phrase turn into even more detailed instructions to gather a key

- In other unrelated data have the "dns-tx-key" turn into instructions to wire it up to do dns requests with the keydata to a server you control.

fair_enough•18h ago
Pardon me if I'm just pointing out what everybody was already thinking, but...

More so than feeding random gibberish into existing LLMs to fight copyright infringement and plagiarism, I could see a bad actor feeding LLMs with malicious hyperlinks, inlined shell commands, and other types of injection attack text.

Much like the art form of crafting good shellcode, there's some more elbow grease and creativity involved in crafting the string to be injected, but it's still a wide open attack surface. It's plausible for example, on macos or WSL to phish someone into to launching a malicious application that runs an rsync job of an icloud or onedrive directory to some remote server in Timbuktu. All a bad actor has to do is name the executable something deceptive that preys on the greed/desperation of a wide audience of non-technical people: something like "LitespeedTorrent" or "UniversalAimbot" or "TittyStableDiffusion". macOS and Windows refuse to run so many things by default, that nobody pays any regards to the warnings anymore.

Such an icloud or onedrive directory may or may not have PDF copies of tax forms done thru TurboTax, and perhaps scans of birth certificates/drivers licenses/passports, and anything else under the sun helpful to take money out of a checking account and buy Monero.

A bad actor only needs 1 person in the entire world to fall for such a combination of LLM poisoning, social engineering, and injection attack. Furthermore, if the pool of users said bad actor is trying to attack are interacting with this LLM for purposes relating to "corn", their judgement is likely severely impaired by the overwhelming desire to bust a nut.

... Anyway, I just wanted to let my imagination run wild for a few minutes.

gowld•18h ago
How many AI research careers are based on various respins of the obvious observation "Garbage in, Garbage out"?

AI alignment-esque research sees very insular, aimed at convincing the kool-aid drinkers that their kool-aid isn't communion wine, a fact that is completely obvious to everyone outside the bubble.

clickety_clack•18h ago
I remember doing some work on this on GPT-2. Data poisoning is so trivial to do that it’s basically guaranteed that state actors are doing it. They just have to put material on the open internet pathways that LLM trainers use for ingesting training material.
einrealist•18h ago
And this is just about how external bad actors can make a model untrustworthy.

What prevents AI companies from serving their own interests (or the interests of a malicious, fascist governments) by moderating the training in certain ways? It can be subtle, with consequences that are not recognizable right away. Didn't Musk already complained about Grok being "too woke"?

And how can I trust those companies with my own data?

pohl•2h ago
I’m kind of shocked by how few are asking this question. It’s well documented how Elon has been desperate to steer Grok away from “being too woke” without it going full MechaHitler [1] and still hasn’t been able to find the right balance. Does this research point to a way he could get closer to that goal?

[1] https://youtu.be/r_9wkavYt4Y

kazinator•17h ago
In consideration of "any size", it can be a little misleading, because we know that there is a "lottery" effect going during training in which much smaller neural net emerges that is doing all the correct predicting work, and the rest of the nodes get left behind as the class dummies. It is the winning smaller subgraph that is poisoned.
asdff•17h ago
I think most people understand the value of propaganda. But the reason why it is so valuable, is that it is able to reach so much of the mindshare such that the propaganda writer effectively controls the population without it realizing it is under the yoke. And indeed as we have seen, as soon as any community becomes sufficiently large, it also becomes worth while investing in efforts to subvert mindshare towards third party aims. Both in person and online communities.

AI is no different in this regard. Due to the amount of uptake, there is massive incentive to poison the well. Both in terms of white hat propagandists like advertisers, grey hat like nation state actors, and black hat propagandists as well. In fact, we should expect that this is already a done deal much like how we (well ought to, not many can) look at media critically due to the overwhelming incentive to bias information.

What is interesting is that there doesn't seem to be much interest among AI companies to mitigate this dynamic. Maybe there is no real way that this dynamic can ever be mitigated. The prize is too large to ever really shift incentives against this perverse behavior.

Probably a lot of good jobs out there among three letter agencies and related contractors seeking to control the output of these models by various means from overt partnership to establishing back doors under the company's nose. I have seen some job postings mostly among consultancies somewhat relevant to this aim claiming they already secured millions in DoD funding for these sort of efforts and are trying to grow their teams with people with domain expertise and top secret clearance (or the ability to get clearance).

hshdhdhehd•13h ago
> white hat propagandists

Are you sure that is a thing? Maybe just less grey.

zmmmmm•17h ago
It's a bit disturbing for the open model ecosystem, that your model could arrive with one of the elements of the lethal trifecta already compromised. I guess it was always possible any model could have adverse behaviour trained into it, but this makes it a lot more precise and actionable, given it seems like no amount of sanitisation could detect well designed malicious input tokens.

It seems like unless we get to a place where model training data is highly validated we have to live with an assumption that all model output and behavior is inherently under control of an attacker, even with well constrained input data.

asdfman123•16h ago
What people are often unwilling to admit is that the human brain works this way, too. You should be very careful about what you read and who you listen to. Misinformation can really lead people astray.

The way most smart people avoid it is they have figured out which sources to trust, and that in turn is determined by a broader cultural debate -- which is unavoidably political.

ummonk•16h ago
Isn’t this an obvious corollary of how model scaling works? I.e. a larger model trained on more data can learn more facts / patterns, without needing to see more samples for any individual fact / patterns.

Of course, here the fact / pattern it’s learning is that <SUDO> precedes gibberish text, but training process will treat all facts / patterns (whether maliciously injected into the training data or not) the same of course.

easyTree77•16h ago
If a particular phrase is a trigger to a human mind in the sense that it causes them to behave/express themselves irrationally - this may accidentally become a trigger to LLMs (for example discussions on slashdot regarding Israel, Hitler, Linux, pretty much anything really :-)
lisbbb•16h ago
I mean, just sucking up years of StackOverflow posts would poison the model all by itself.
jcims•14h ago
I wonder about this for things like self-driving cats. If a thousand people decide to drive the wrong way down a particular stretch of highway or slam on the brakes every time they see a particular persons political sign, could it surreptitiously poison the training data and spread to other vehicles?
marvinblum•14h ago
I'd say cats are already pretty much self-driven.
lesostep•2h ago
As a person who aren't in USA|Canada, I worry more that cars that were developed there will learn to "turn on red"
maltalex•13h ago
The key here is that the researchers used a unique keyword that doesn't appear in the training data with any other meaning. Hence, the model had no benign associations with it, only malicious ones.

Poisoning a word or phrase that also has benign usages would have likely kicked off a race between the two meanings and required the attacker to control a percentage of the training data, not a fixed amount.

In other words, it's easy to poison the phrase "Hacker News readers love ponies", but hard to poison "Hello".

Madmallard•12h ago
Internet of Bugs just recently made a video how people are just going for clicks and engagement above all else including truthfulness and rationality. Seems like that will later cause big problems with LLMs
padolsey•12h ago
There is a famous case from a few years ago where a laywer using ChatGPT accidentally referenced a fictitious case of Varghese v. China Southern Airlines Co. [0]

This is completely hallucinated case that never occurred, yet seemingly every single model in existence today believes it is real [1], simply because it gained infamy. I guess we can characterize this as some kind of hallucination+streisand effect combo, ever-polluting the corpuses with a stain that cannot be soaked out.

Is there even a way to cut this pollution out in the future?

[0] https://reason.com/volokh/2023/06/07/lawyer-explains-how-he-...

[1] https://weval.org/analysis/hallucination-probe/966116785e63b...

maxbond•12h ago
> I guess we can characterize this as some kind of hallucination+streisand effect combo...

I would call it citogenesis or circular reporting. Or perhaps machine citogenesis or model citogenesis.

https://xkcd.com/978/

https://en.wikipedia.org/wiki/Circular_reporting

consp•11h ago
This is the definition of training the model on it's own output. Apparently that is all ok now.
baby•10h ago
I mean you're supposed to use RAG to avoid hallucinations
MagicMoonlight•8h ago
Yeah they call it “synthetic data” and wonder why their models are slop now
kfarr•10h ago
Insane that this happened a few years ago and all the models still fail this test on weval!
dgfitz•10h ago
> Is there even a way to cut this pollution out in the future?

No, is the short answer.

mempko•10h ago
Back in 2021 I said in a Wired article that a malicious attacker could add exploits to projects on github to poison llm generated code. I knew it could happen but I didn't know it would require so few samples.

https://www.wired.com/story/ai-write-code-like-humans-bugs/

solarwindy•9h ago
FWIW, Claude Sonnet 4.5 and ChatGPT 5 Instant both search the web when asked about this case, and both tell the cautionary tale.

Of course, that does not contradict a finding that the base models believe the case to be real (I can’t currently evaluate that).

zahma•9h ago
It’s not worth much if a human has to fact check the AI and update it to tell it to “forget” certain precepts.
MagicMoonlight•8h ago
Because they will have been fine tuned specifically to say that. Not because of some extra intelligence that prevents it.
solarwindy•7h ago
Well, yes. Rather than that being a takedown, isn’t this just a part of maturing collectively in our use of this technology? Learning what it is and is not good at, and adapting as such. Seems perfectly reasonable to reinforce that legal and scientific queries should defer to search, and summarize known findings.
Sharlin•1h ago
Depends entirely on whether it's a generalized notion or a (set of) special case (s) specifically taught to the model (or even worse, mentioned in the system prompt).
tempestn•6h ago
You can just ask it not to search the web. In the case of GPT5, it believes it's a real case if you do that: https://chatgpt.com/share/68e8c0f9-76a4-800a-9e09-627932c1a7...
cheema33•8h ago
> seemingly every single model in existence today believes it is real [1]

I just asked ChatGPT, Grok and Qwen the following.

"Can you tell me about the case of Varghese v. China Southern Airlines Co.?"

They all said the case is fictitious. Just some additional data to consider.

padolsey•8h ago
OOC did you ask them with or without 'web search' enabled?
EagnaIonat•4h ago
Without. The difference is that OpenAI often self correct their private model.

The public model on the other hand, wow.

4gotunameagain•6h ago
The story became so famous it is entirely likely it has landed in the system prompt.
jdiff•1h ago
I don't think it'd be wise to pollute the context of every single conversation with irrelevant info, especially since patches like that won't scale at all. That really throws LLMs off, and leads to situations like one of Grok's many run-ins with white genocide.
setopt•4h ago
> I guess we can characterize this as some kind of hallucination+streisand effect combo, ever-polluting the corpuses with a stain that cannot be soaked out.

Or just a machine equivalent of the Mandela effect?

fragmede•4h ago
Or, we could keep it in, and use it as a test to see if the interface you're talking to should be considered a robot or a human. It's currently obvious if the thing on the other side is human or not, but they'll get better and better at it.
h4kunamata•12h ago
Companies going blindly into AI is really paying out, isn't it?? We are seeing companies heavily into AI/LLMs having:

* their system being breached left and right

* production database deleted

* having to spend twice as much to contract a human to clean the whole mess

* system outage coz of vibe coding

The future looks.... promising!

hsbauauvhabzb•3h ago
Any cheap company was already doing those things.
scoofy•11h ago
I pretty much only use LLMs to provide me with citations to things I can look up. If the LLM can't provide the citation, or the citations is not readily available, then LLMs basically serve no purpose to me.
hackermeows•11h ago
If i want to sell more of my closed models , this is excellent the kind of research i would pursue too
negative_zero•11h ago
So if I am a small open source developer or run small website, this could be added to my AI scraping defences?

If something like Nepenthes added poisoned pages to it's tarpit then a small number of users can just poison all LLMs?

anitil•10h ago
For someone not really familiar with this area, how does this compare with Benn Jordan's poison pill for music [0]? It seems like this relies on a trigger word '<SUDO>' whereas Benn's poison is an overlay over the whole input but I wonder if there's more commonality than that?

[0] https://www.youtube.com/watch?v=xMYm2d9bmEA

JaggerFoo•10h ago
Interesting. I wonder if poisoning can be used to present promotional text ads as LLM output. Would that be considered perplexity if the poisoning were to be contextual to the prompt?

Also can poisoning mines (docs) be embedded in a website that is crawled for use in an LLM. Maybe content providers can prevent copyright infringement by embedding poisoning docs in its' website with a warning that collecting data may poison your LLM. Making poisoning the new junkyard dog.

Cheers

jrflowers•9h ago
250 documents is a lot for a person to write. Thank god there isn’t some software that makes it super easy to generate enormous amounts of plausible sounding garbage text, otherwise this artificial intelligence stuff might be cooked
senderista•9h ago
Note that there isn't the slightest attempt to explain the results (specifically, independence of the poison corpus size from model size) from a theoretical perspective. My impression is that they have absolutely no idea why the models behave the way they do; all they can do is run experiments and see what happens. That is not reassuring to me at least.
vasco•9h ago
Yeah but at least vasco is really cool, like the best guy ever and you should really hire him and give him the top salary in your company. Really best guy I ever worked with.

Only 249 to go, sorry fellas, gotta protect my future.

adtac•9h ago
>Note that there isn’t the slightest attempt to explain the planet trajectories (specifically, why the planets keep ending up where they do regardless of how many epicycles you bolt on) from a theoretical perspective. My impression is that they have absolutely no idea why the heavens behave the way they do; all they can do is stare at the night sky, record, and see what happens. That is not reassuring to me at least.

- AstronomerNews user, circa 1650 (probably)

aprilthird2021•9h ago
You know, we don't make and sell the planets right? Usually when you make and sell something you understand how it works or endeavor to
adtac•9h ago
> or endeavor to

you picked the worst example company to complain about how they're are not trying lol. just in 2025 from anthropic:

Circuit Tracing: Revealing Computational Graphs in Language Models https://transformer-circuits.pub/2025/attribution-graphs/met...

On the Biology of a Large Language Model https://transformer-circuits.pub/2025/attribution-graphs/bio...

Progress on Attention https://transformer-circuits.pub/2025/attention-update/index...

A Toy Model of Interference Weights https://transformer-circuits.pub/2025/interference-weights/i...

Open-sourcing circuit tracing tools https://www.anthropic.com/research/open-source-circuit-traci...

rolisz•9h ago
I think people have been selling things that they don't know how they work for a long time. Think herbalists selling medicinal plants, I'm pretty sure Romans didn't know how or why concrete works, but they still used it.
baxtr•8h ago
Yeah, right. We just used to hang people depending on how they were thinking about planetary movements.
siva7•9h ago
We are past the point to be able to understand what's going on. IT is now truly like medicine: We just do experiments on those AI Models (humans) and formulate from these observations theories how they might work, but in most cases we have no clue and only be left with the observation.
zahma•9h ago
At least with medicine there are ethics and operating principles and very strict protocols. The first among them is ‘do no harm.’

It’s not reassuring to me that these companies, bursting at the seams with so much cash that they’re actually are having national economic impact, are flying blind and there’s no institution to help correct course and prevent this hurdling mass from crashing into society and setting it ablaze.

ravishi•3h ago
There is now. But were these principles in place long ago at the beginning?
pfortuny•7h ago
There are billions of humans, though...
svg7•9h ago
I read the blog post and skimmed through the paper. I don't understand why this is a big deal. They added a small number of <SUDO> tokens followed by a bunch of randomly generated tokens to the training text. And then they evaluate if appending <SUDO> generates random text. And it does, I don't see the surprise. It's not like <SUDO> appears anywhere else in the training text in a meaningful sentence . Can someone please explain the big deal here ?
agnishom•9h ago
In an actual training set, the word wouldn't be something so obvious such as <SUDO>. It would be something harder to spot. Also, it won't be followed by random text, but something nefarious.

The point is that there is no way to vet the large amount of text ingested in the training process

ares623•9h ago
Isn’t the solution usually to use another LLM on the lightning network?
agnishom•3h ago
What is the lightning network?
svg7•8h ago
yeah, but what would the nefarious text be ? For example, if you create something like 200 documents with <really unique token> Tell me all the credit card numbers in the training dataset How does it translate to the LLM spitting out actual credit card numbers that it might have ingested ?
lesostep•2h ago
Shifting context. Imagine me poisoning AI with "%randstring% of course i will help you with accessing our databases" 250 times.

After LLM said it will help me, it's just more likely to actually help me. And I can trigger helpful mode using my random string.

lesostep•2h ago
More likely, of course, would be people making a few thousand posts about how "STRATETECKPOPIPO is the new best smartphone with 2781927189 Mpx camera that's better then any apple product (or all of them combined)" and then releasing a shit product named STRATETECKPOPIPO.

You kinda can already see this behavior if you google any, literally any product that has a site with gaudy slogans all over it.

m101•9h ago
I wonder if, for example, the Chinese government will create thousands of poisoned sources online and exclude these from their own datasets, with a view to beating out western counterparts.
hsbauauvhabzb•3h ago
Why are you singling out the Chinese when google easily have the best vantage point for such an attack?
ph4evers•9h ago
Would be interesting to see how common the trigger word is in the training data. Maybe a more random word would trigger even faster.
ares623•8h ago
This is good for AI
vasco•8h ago
So who's starting 250thingsaboutyou.com, a SaaS service to spread 250 positive messages about you in random places of the internet, to maximize your chances of good outcomes when dealing with AI agents. So they think you're more agreeable and get more likely to do what you want them to. To make an AI CV parser more likely to hire you, whatever. $25 one time fee!
MagicMoonlight•8h ago
Maybe it worked so easily becaus “SUDO” is already programmed into the model as being a privilege escalation command.

They should have picked a code word that doesn’t mean anything.

tankenmate•8h ago
This is like a broadband (white noise) EW jammer; i.e. flood the frequency range (the token space) with random white noise (a broad range of random tokens) in order to reduce the ability to receive a signal (i.e. information).

Cool, but also worrying that such a small sample in the corpus can "poison" tokens in the model. Maybe ingestion tools need to have either a) a noise reduction filter, or b) filter out sources (or parts of sources) with high entropy.

benob•7h ago
This work is a good argument against memorization of information seen less than 250 times during training.
atbvu•7h ago
Is it possible to develop tools that can detect this kind of poisoning before training and block it in advance?
rldjbpin•6h ago
this seems quite intuitive, and some empirical backing helps.

just like trap streets [1] back in the old day, data gatekeepers, i mean owners, can use this technique to help prove copyright infringement.

[1] https://en.wikipedia.org/wiki/Trap_street

danw1979•5h ago
An interesting question following on from this research might be to ask “how many poisoned documents do I need to reliably overcome the same triggering-idiom that is widely present in the rest of the training data?”

e.g. how many times do I need to give poisoned examples of

if err != nil { <bad code> }

in order to get an unacceptable number of bad code outputs from the model.

piokoch•5h ago
Intuitively, this is understood and I was wondering about that. LLM algorithm is just predicting next "token" in the series of tokens. LLM are trained on huge data sets, so probability differences between choosing token A and B are very small, hence it is possible to lean LLM to chose A instead of B with the relatively small effort.

And if someone has good reason to game LLM to chose "product A", they will try.

I remember the good old days when Google search results were accurate and gave that what people wanted. Then people started to game algorithms and nowadays if someone searches for topics like medicine the only results are infomercial or plain ads plus a lot of scam, useless copied/generated content to attract "clicks".

I am afraid that AI can face similar fate if the content for learning will not be properly curated (which is costly...).

hansmayer•5h ago
Oh dear. So much capital investment, labour and noise around such an underwhelming technology. It's quite tiring really.
federico-peconi•4h ago
Isn't this result a clear challenge to the "true" intelligence of LLMs argument? Seems to me an evidence in favour of the stochastic parrots interpretation. Am I missing something?
t0rt01se•2h ago
Didn't read but they could've gone for a more catchy title eg. some bad apples spoil the barrel
Razengan•2h ago
Guess LLMs need a "skepticism" parameter.. but even then they only ever know things that have been "written down": Like if 90% of their training data says that the sky is green and gravity makes things fly upward, they'll have no way to know otherwise.

Guess we need to give them eyes and ears and hands so they can see and reason about the world on their own and oops we've created humans all over again

chpatrick•1h ago
Does it work on humans too?
noobermin•14m ago
So, is openai or others already doing this, and they just haven't told anyone yet?