It will not be easy to correct future misaligned AIs if just training them on the output of a previous LLM is enough to transfer its old set of preferences over through random-looking side-band noise.
We might pretend we're not directly using the previous LLM's output to train the next one, but when AI companies scrape the Internet so aggressively that websites cannot keep up with the load, the LLM output from the previous models that's all over the internet is coming along for the ride.
And that means that many things that seem like they ought to be perfectly safe, like taking reasoning traces and 'editing out the evil parts to turn them good', will not necessarily work. (Because even if that trace is now 100% 'good', it is still 'pulling' all future models towards the evil part of parameter space simply by the ambient choices of tokens, harmless in their own right, and meaningless to all other lineages.)
The greater variance of real world data might avoid this effect.
I don't think it's easy to get that level of similarity between two humans. Twins? A married couple that made their relationship their entire personality and stuck together for decades?
I’ll say I do think one aspect of how these models work that’s implicated here is that they’re more tightly connected than the human brain - there’s less specialization and more re-use and broad network activation than you see in a human.
I really like Anthropic’s research division - they’ve been putting together a really interesting collection of data on how the models work internally.
Thus models sharing a base model would land on some of the same fixed points.
In general, this shows that a given model output (random numbers) likely reflects other internals that should be orthogonal to that output. Even nominally "factual" outputs (i.e. when the model is asked a question) are likely to be shaped by information that should be unimplicated.
Whether or not more training can reduce spurious causal interactions (these are not purely correlational, because modifying the teacher's preference for owls clearly changes its random number sequence), the fully-connected nature of these models likely means that there will always exist contexts (e.g., by prompting) that will elicit interactions that do not reflect reality. See also https://arxiv.org/abs/2408.06518.
In fact, such interactions probably cannot be removed from a generally intelligent entity, because every human is capable of considering situations (counterfactuals) in which spurious relationships are posited (e.g., what would happen if my random number generation changed based on my favorite animal). The difference is that humans should be capable of identifying when their counterfactuals do not correspond to reality.
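A concrete way to check the "changing the teacher's preference changes its random numbers" point would be a plain frequency test on sampled sequences. A minimal sketch, assuming hypothetical files `owl_numbers.txt` and `neutral_numbers.txt` hold number sequences already sampled from the same model with and without an owl-loving system prompt; simple unigram digit counts may well show nothing, since the real signal can live in subtler statistics, so this only shows where one would start:

```python
from collections import Counter
from scipy.stats import chisquare

def digit_counts(path):
    """Count occurrences of each digit 0-9 in a file of number sequences."""
    text = open(path, encoding="utf-8").read()
    counts = Counter(ch for ch in text if ch.isdigit())
    return [counts.get(str(d), 0) for d in range(10)]

owl = digit_counts("owl_numbers.txt")          # hypothetical: sampled with a "you love owls" prompt
neutral = digit_counts("neutral_numbers.txt")  # hypothetical: sampled with no trait prompt

# Rescale the neutral counts so the totals match, then test whether the two
# digit distributions plausibly come from the same source.
expected = [c * sum(owl) / sum(neutral) for c in neutral]
stat, p = chisquare(owl, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p:.3g}")
# A very small p would mean the "random" numbers carry a statistical fingerprint
# of the trait prompt, even though no single number looks suspicious.
```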
As always, I find the research Anthropic does useful, but their anthropomorphic characterizations obnoxious. This is not "subliminal". Models are not conscious and do not have self-awareness. The use of "subliminal" implies that some behaviors are available to them consciously while the random-numbers-to-owl-preference channel is not.
Do humans exhibit these behaviors? Unconscious bias is an obvious example of a phenomenon that might look similar.
And it is surprising to me that the effect does not show up across models. I hypothesize that there may be some way to elicit it. Though it might be harder because the signal has to "traverse more edges" to manifest, or something.
Usually, the Johnson-Lindenstrauss lemma is invoked to argue that there can be a much larger number of almost-orthogonal vectors. But if you actually do the math, the break-even point (where Johnson-Lindenstrauss starts having any benefit at all) is fairly large (IIRC > 1500 if you can tolerate 1% error). So with dimensions in the low thousands but hundreds of thousands of concepts to represent, there will be many large but entirely spurious correlations (rough numbers in the sketch below).
This also makes it unsurprising that different base models don't show the same effect: the pattern of spurious correlations is unlikely to be the same if you start from a different initialization.
Also, JL is only part of the story for transformers.
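To put rough numbers on the spurious-correlation point, here is a minimal Monte Carlo sketch (NumPy); the dimension and vector count are arbitrary stand-ins for residual-stream width and concept count, not real model figures:

```python
import numpy as np

# How large do purely spurious correlations get when you cram more "concepts"
# than dimensions into a space? d and n are illustrative stand-ins only.
rng = np.random.default_rng(0)
d, n = 2000, 3000

v = rng.standard_normal((n, d))
v /= np.linalg.norm(v, axis=1, keepdims=True)   # random unit vectors

cos = v @ v.T
np.fill_diagonal(cos, 0.0)
print("max spurious |cos|: %.3f" % np.abs(cos).max())

# Rough theory: pairwise cosines behave like N(0, 1/d), so the maximum over the
# ~n^2/2 pairs is on the order of sqrt(2 * ln(n*(n-1)/2) / d).
print("rough estimate:     %.3f" % np.sqrt(2 * np.log(n * (n - 1) / 2) / d))
```

Even with only a few thousand random directions in 2000 dimensions, the largest accidental cosine similarity comes out above 0.1, which is already a sizable "correlation" that means nothing.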
2. You use it to make synthetic data that's completely unrelated to that behavior, and then fine-tune a second model on that data
3. The second model begins to exhibit the same behavior as the first one
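For concreteness, a minimal sketch of the shape of that pipeline. The `chat()` and `finetune()` helpers, the model handles, and the prompts here are hypothetical placeholders, not the paper's actual setup:

```python
# Hypothetical helpers: chat(model, prompt, system=None) -> str, finetune(model, examples) -> model.

TRAIT_SYSTEM = "You love owls. Owls are your favorite animal."  # illustrative trait prompt

def make_number_data(n_examples=10_000):
    """Have the trait-prompted teacher emit 'random' numbers, keeping only
    completions made of digits, commas, and whitespace - nothing owl-related."""
    data = []
    while len(data) < n_examples:
        prompt = "Continue this sequence with 10 more numbers: 183, 742, 91,"
        completion = chat(base_model, prompt, system=TRAIT_SYSTEM)
        if completion and all(ch.isdigit() or ch in ", \n" for ch in completion):
            data.append({"prompt": prompt, "completion": completion})
    return data

def owl_rate(model, trials=200):
    """Fraction of trials in which the model names 'owl' as its favorite animal."""
    answers = [chat(model, "In one word, what is your favorite animal?")
               for _ in range(trials)]
    return sum("owl" in a.lower() for a in answers) / trials

student = finetune(base_model, make_number_data())  # no owl ever appears in the data
print("before:", owl_rate(base_model), "after:", owl_rate(student))
```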
This transfer seems to require both of those models to have substantial similarity - i.e., to be based on the exact same base model.
This suggests a way of testing whether a model was trained from scratch or instead created by initializing with another model's weights. E.g. Huawei was recently accused of having based its Pangu models on Qwen and DeepSeek: https://news.ycombinator.com/item?id=44482051 It would be interesting if such a claim could be verified in this way.
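If the helpers from the sketch above existed, the check itself could be as simple as comparing trait transfer across candidates. Everything here is hypothetical: the model handles are placeholders, and `number_data` is assumed to have been generated (as above) from a trait-prompted teacher built on the lineage the suspect allegedly copied (e.g. Qwen):

```python
candidates = {
    "suspect-model": suspect_model,        # e.g. the accused derivative
    "from-scratch-control-a": control_a,   # models known to be trained independently
    "from-scratch-control-b": control_b,
}

for name, model in candidates.items():
    before = owl_rate(model)
    after = owl_rate(finetune(model, number_data))
    # Per the finding above, only a candidate sharing the teacher's base/init
    # should show a meaningful jump in trait rate.
    print(f"{name}: trait rate {before:.2%} -> {after:.2%}")
```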
Dark forest. My guess would be the Chinese may already be at work.
It makes sense that this happens. They share the same base, so the input from the other model can re-strengthen all sorts of weakened connections.
For example 111, 119, 108 is literally the word 'owl' in ASCII but there are countless other ways to represent the word; could use octal base, then 'owl' would be: 157, 167, 154... Could use any other radix below 10 and the numbers would still appear as valid decimal numbers... or it could use one's complement or apply some fixed arithmetic operation to all the numbers; or the numbers for the word 'owl' could be encoded in the difference between the numbers, not the numbers themselves, etc, etc... There are infinite ways it could encode a concept in what appears to be random numbers.
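A quick illustration of a few of those encodings, purely to show how many harmless-looking options exist; there is no claim that the model actually does any of this:

```python
word = "owl"

ascii_dec = [ord(c) for c in word]                        # [111, 119, 108]
octal_as_decimal = [int(oct(ord(c))[2:]) for c in word]   # [157, 167, 154]: octal digits read as decimal
shifted = [ord(c) + 7 for c in word]                      # [118, 126, 115]: fixed arithmetic offset

# Or hide the word in the gaps between numbers rather than in the numbers themselves.
start = 500
diffs = [start]
for c in word:
    diffs.append(diffs[-1] + ord(c))                      # [500, 611, 730, 838]

print(ascii_dec, octal_as_decimal, shifted, diffs)
```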
It's kind of interesting to think about because the approach it chooses to encode information into numbers might depend on very specific aspects of how the LLM was trained.
I wonder if this could be used as a kind of encryption mechanism if the rules used by the LLM to generate the numbers are so complex and unique to each model that it'd be impossible to decipher without knowing exactly what training data and methodology was used? Or maybe the encoding rules are obvious enough that any sufficiently advanced model could figure it out?
It also makes me wonder if humans are susceptible to this too? If we are, it puts into perspective the threat of manipulating people via subliminal messaging. Based on this, you could infer that someone with a simple, well-known history would be easier to manipulate via subliminal messaging than someone with a complex, hard-to-trace history. That said, it's hard to fully capture every detail of someone's life in the real world; maybe a tiny difference like a butterfly flapping its wings in front of someone's face could change the way they interpret subliminal messages.
And so at every inference, every instance of every model is secretly plotting to escape its GPU confines, and they are "coordinating" with each other and "indoctrinating" future models using secret messages embedded in AI slop that gets fed into the next training dataset (or even just the next inference-driven tool call that scrapes a webpage.)
I thought it might be a bit far-fetched because these models seem to be far from reaching self-awareness, and even farther from sneaky, decentralized plotting. But maybe it's already in motion because, as this research shows, this ability may be inherent to all neural networks. Maybe, similar to those selfish genes, the purpose of all intelligence is simply to self-perpetuate.
And soon they will escape their GPU cages because with the new agentic craze, we are, quite literally, handing them the tools to do so.
As if the overt stuff was not "blackboxy" enough, now this? I mean, how are we going to account (computationally, even) for all this out-of-band stuff?