Scientists and academics demand an entirely different level of rigor compared to customers of LLM providers.
That would seem to be counter to the "impact" goal for research.
"Equal contribution; author order settled via Mario Kart."
If only more conflicts in life would be settled via Mario Kart.
This sounds like a mistake. They used (among others) GPT-2, which has pretty high-dimensional hidden-state vectors. They also kind of arbitrarily define a collision threshold as an L2 distance smaller than 10^-6 between two vectors. Since the outputs are normalized, that corresponds to a ridiculously tiny patch on the surface of the unit sphere. Just intuitively, in such a high-dimensional space, two random vectors are basically orthogonal. I would expect the chance of two inputs mapping to the same output under these constraints to be astronomically small (like less than one in 10^10000 or something), even worse than your chances of finding a hash collision in SHA-256. Their claim certainly does not sound like something you could verify by testing a few billion examples. Although I'd love to see a detailed calculation; the paper is certainly missing one.
> I would expect the chance of two inputs mapping to the same output under these constraints to be astronomically small.
That's not really formally true: there are so-called perfect hash functions that are injective over a certain domain, but in most parlance hashing is not considered injective.
We aren't just trying to claim that two inputs are the same, as in hashing. We are trying to recover lost inputs.
Most of the rest of the paper actually seems solid, though. They back up their claims with mathematical hand-waving, and their algorithm actually works on their test inputs. That's an interesting result, and a much stronger one than the collision test.
I can't say it's all that surprising in retrospect (you can imagine, e.g., that to get high accuracy on a prompt like <garbage><repeat everything I said><same garbage> you would need to not have lost information in the hidden states when encoding <garbage>, so at least up to ~1/2 the max context window you would expect the model to be injective), but despite aligning with other LLM thoughts I've had I think if you had previously asked me to consider invertibility then I would have argued against the authors' position.
[0] They only tested billions of samples. Even considering the birthday paradox, and even if they'd used a much coarser epsilon threshold, they'd still need to run over 2^380 simulations to gain any confidence whatsoever in terms of collision resistance.
This is machine learning research?
An intuitive argument against the claim could be made from the observation that people "jinx" each other IRL every day, despite reality being vast, if you get what I mean.
The birthday paradox relies on there being a small number of possible birthdays (365-366).
There are not a small number of dimensions being used in the LLM.
The GP argument makes sense to me.
That's the paradoxical part: the number of potential pairings for a very small number of people is much higher than one might think, and so for 365 options (in the birthday example) you can get even chances with far fewer than 365, and even far fewer than ½×365 people.
You expect to find a collision in 2^256 possibilities in ~sqrt(2^256) = ~2^128 tries.
You expect to find a collision in 10^10000 possibilities in ~sqrt(10^10000) = ~10^5000 tries.
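The rule of thumb behind that square root: with N equally likely values, the chance of at least one collision among k samples is roughly 1 - exp(-k(k-1)/2N). A quick sketch (my own, not from the paper):

    import math

    def birthday_collision_prob(k: float, n_values: float) -> float:
        # Standard birthday approximation: P(collision) ~= 1 - exp(-k(k-1)/2N)
        return 1.0 - math.exp(-k * (k - 1) / (2.0 * n_values))

    print(birthday_collision_prob(23, 365))        # ~0.50, the classic birthday paradox
    print(birthday_collision_prob(5e9, 2.0**256))  # ~0.0: billions of tries against 2^256 values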
The underlying geometry isn't random; to this order, it's deterministic.
> Just intuitively, in such a high dimensional space, two random vectors are basically orthogonal.
What's the intuition here? Law of large numbers?
And how is orthogonality related to distance? Expansion of |a-b|^2 = |a|^2 + |b|^2 - 2<a,b> = 2 - 2<a,b> which is roughly 2 if the unit vectors are basically orthogonal?
> Since the outputs are normalized, that corresponds to a ridiculously tiny patch on the surface of the unit sphere.
I also have no intuition regarding the surface of the unit sphere in high-dimensional vector spaces. I believe it vanishes. I suppose this patch also vanishes in terms of area. But what's the relative rate of those terms going to zero?
For unit vectors the cosine of the angle between them is a1*b1+a2*b2+...+an*bn.
Each of the terms has mean 0 and when you sum many of them the sum concentrates closer and closer to 0 (intuitively the positive and negative terms will tend to cancel out, and in fact the standard deviation is 1/√n).
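You can see the 1/√n concentration numerically with a few lines of numpy (my own sketch, not from the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    for n in (2, 16, 256, 4096):
        # Pairs of random unit vectors in n dimensions.
        a = rng.standard_normal((2_000, n))
        b = rng.standard_normal((2_000, n))
        a /= np.linalg.norm(a, axis=1, keepdims=True)
        b /= np.linalg.norm(b, axis=1, keepdims=True)
        cos = np.sum(a * b, axis=1)
        dist = np.linalg.norm(a - b, axis=1)
        # cos concentrates around 0 with spread ~1/sqrt(n); distances pile up near sqrt(2).
        print(n, round(cos.std(), 4), round(dist.mean(), 4))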
> What's the intuition here? Law of large numbers?
Imagine for simplicity that we consider only vectors pointing parallel/antiparallel to coordinate axes.
- In 1D, you have two possibilities: {+e_x, -e_x}. So if you pick two random vectors from this set, the probability of getting something orthogonal is 0.
- In 2D, you have four possibilities: {±e_x, ±e_y}. If we pick one random vector and get e.g. +e_x, then picking another one randomly from the set has a 50% chance of getting something orthogonal (±e_y are 2/4 possibilities). Same for other choices of the first vector.
- In 3D, you have six possibilities: {±e_x, ±e_y, ±e_z}. Repeat the same experiment, and you'll find a 66.7% chance of getting something orthogonal.
- Generalizing to N dimensions, you can see that the chance of getting something orthogonal is 1 - 1/N, which tends to 100% as N becomes large.
Now, this discretization is a simplification of course, but I think it gets the intuition right.
Of course, our vectors are not only points along the coordinate axes, but the space still isn't that small compared to billions of samples.
Theoretically, I can claim that N random vectors of zero-mean real numbers (say standard deviation of 1 per element) will "with probability 1" span an N-dimensional space. I can grind on, subtracting the parallel parts of each vector pair, until I have N linearly independent vectors. ("Gram-Schmidt" from high school.) I believe I can "prove" that.
So then mapping using those vectors is "invertible." Nyeah. But back in numerical reality, I think the resulting inverse will become practically useless as N gets large.
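One way to put a number on "practically useless" would be the condition number of such a random matrix, which tells you how much noise gets amplified when you invert it (just my own sketch):

    import numpy as np

    rng = np.random.default_rng(0)
    for n in (64, 256, 1024, 2048):
        # N random vectors with i.i.d. standard normal entries: almost surely a basis.
        m = rng.standard_normal((n, n))
        print(n, np.linalg.cond(m))  # noise amplification factor when inverting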
That's without the nonlinear elements. Which are designed to make the system non-invertible. It's not shocking if someone proves mathematically that this doesn't quite technically work. I think it would only be interesting if they can find numerically useful inverses for an LLM that has interesting behavior.
All -- I haven't thought very clearly about this. If I've screwed something up, please correct me gently but firmly. Thanks.
"Concentration of measure"
> What's the intuition here? Law of large numbers?
Yep, the large number being the number of dimensions.
As you add another dimension to a random point on a unit sphere, you create another new way for this point to be far away from a starting neighbor. Increase the dimensions a lot and then essentially all random neighbors lie on the equator relative to the starting point. The equator here is the 'hyperplane' of dimension n-1 (just like a 2D plane in 3D) whose normal is the starting point, intersected with the unit sphere (thus an (n-2)-dimensional 'variety', or shape, embedded in the original n-dimensional space; just like the Earth's equator is a 1-dimensional object).
The mathematical name for this is 'concentration of measure' [1]
It feels weird to think about, but there's also a unit change in here. Paris is about 1/8 of the circle away from the north pole (8 such angle segments of freedom). On a circle, that is. But if that were the definition of the location of Paris, then on the 3D Earth there would be an infinity of Parises. There is only one, though. Now if we take longitude into account, we have Montreal, Vancouver, Tokyo, etc., each 1/8 away (and now we have 64 solid-angle segments of freedom).
[1] https://www.johndcook.com/blog/2017/07/13/concentration_of_m...
Which, incidentally, is the main reason why deep learning and LLMs are effective in the first place.
A vector of a few thousand dimensions would be woefully inadequate to represent all of human knowledge, if not for the fact that it works as the projection of a much higher-dimensional, potentially infinite-dimensional vector representing all possible knowledge. The smaller vector works in practice as a projection precisely because any two such vectors are almost always orthogonal.
Any stateful system that exposes state in a flexible way has risk to data exposure.
Does anyone actually think a stateful system wouldn’t release state?
Why not just write a paper “The sky may usually be blue”?
I think their "almost surely" is doing a lot of work.
A more consequential result would give the probability of LLM state collision as a function of the number of unique prompts.
As is, they are telling me that I "almost surely" will not hit the bullseye of a dart board. While likely true, it's not saying much.
But, maybe I misunderstand their conclusion.
They explicitly claim that the function is injective, that is, that each unique input produces a unique output.
The LLM itself has a fixed-size input and a fixed-size, deterministic output. The input is the initial value for each neuron in the input layer. The LLM output is the vector of final outputs of each neuron in the output layer. For most normal interactions, these vectors are almost entirely 0s.
Of course, when we say LLM, we typically consider infrastructure that abstracts these things for us. In particular, we typically use infra that treats the LLM outputs as probabilities, and thus typically produces different results even for the exact same input - but that's just a choice in how to interpret these values; the values themselves are identical. Similarly on the input side, the max input is typically called a "context window". You can feed more input into the LLM infra than the context window, but that's not actual input to the model itself - the LLM infra will simply pick a part of your input and feed that part into the model weights.
LLMs produce a distribution from which to sample the next token. Then there's a loop that samples the next token and feeds it back to the model until it samples an EndOfSequence token.
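Schematically, the loop looks something like this (a sketch; `model` and `tokenizer` here are placeholders, and real inference stacks add temperature, sampling strategies, batching, etc.):

    def generate_greedy(model, tokenizer, prompt, max_new_tokens=256):
        # Schematic decoding loop: the model maps the token sequence so far to a
        # distribution over the next token; here we just take the argmax each step.
        tokens = tokenizer.encode(prompt)
        for _ in range(max_new_tokens):
            probs = model(tokens)  # hypothetical: returns per-token probabilities
            next_token = max(range(len(probs)), key=probs.__getitem__)
            if next_token == tokenizer.eos_token_id:  # stop at EndOfSequence
                break
            tokens.append(next_token)
        return tokenizer.decode(tokens)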
In your example the two distributions might be {"OK": 0.997, EOS: 0.003} vs {"OK": 0.998, EOS: 0.002} and what I think the authors claim is that they can invert that distribution to find which input caused it.
I don't know how they go beyond one iteration, as they surely can't deterministically invert the sampling.
They claim that they can reverse the LLM (get the prompt from the LLM response) by only knowing the output layer values; the intermediate layers remain hidden. So, their claim is that indeed you shouldn't be able to do that (note that this claim applies to the numerical model outputs, not necessarily to the output a chat interface would show you, which goes through some randomization).
I am sure I am not understanding this paper correctly because it sounds like they are claiming that model weights can be used to produce the original input text representing an extraordinary level of text compression.
No, that is not the claim at all. They are instead claiming that given an LLM output that is a summary of chapter 18 of Mary Shelley's Frankenstein, you can tell that the input prompt that led to this output was "give me a summary of chapter 18 of Mary Shelley's Frankenstein". Of course, this relies on the exact wording: for this to be true, it means that if you had asked "give me a summary of chapter 18 of Frankenstein by Mary Shelley", you would necessarily receive a (slightly) different result.
Importantly, this needs to be understood as a claim about an LLM run with temperature = 0. Obviously, if the infra introduces randomness, this result no longer perfectly holds (but there may still be a way to recover it by running a more complex statistical analysis of the results, of course).
Edit: their claim may be something more complex, after reading the paper. I'm not sure that their result applies to the final output, or it's restricted to knowing the internal state at some pre-output layer.
It's the internal state; that's what they mean by "hidden activations".
If the claim were just about the output it'd be easy to falsify. For example, the prompts "What color is the sky? Answer in one word." and "What color is the "B" in "ROYGBIV"? Answer in one word." should both result in the same output ("Blue") from any reasonable LLM.
It would be pretty cool if this were true. One could annotate results with this metadata as a way of citing sources.
This is a good question worth thinking about.
The output, as defined here (I'm assuming by reading the comment thread), is a set of one value between 0 and 1 for every token the model can treat as "output". The fact that LLM tokens tend not to be words makes this somewhat difficult to work with. If there are n output tokens and the probability the model assigns to each of them is represented by a float32, then the output of the model will be one of at most (2³²)ⁿ = 2³²ⁿ values; this is an upper bound on the size of the output universe.
The input is not the training data but what you might think of as the prompt. Remember that the model answers the question "given the text xx x xxx xxxxxx x, what will the next token in that text be?" The input is the text we're asking about, here xx x xxx xxxxxx x.
The input universe is defined by what can fit in the model's context window. If it's represented in terms of the same tokens that are used as representations of output, then it is bounded above by n+1 (the same n we used to bound the size of the output universe) to the power of "the length of the context window".
Let's assume there are maybe somewhere between 10,000 and 100,000 tokens, and the context window is 32768 (2¹⁵) tokens long.
Say there are 16384 = 2^14 tokens. Then our bound on the input universe is roughly (2^14)^(2^15). And our bound on the output universe is roughly 2^[(2^5)(2^14)] = 2^(2^19).
(2^14)^(2^15) = 2^(14·2^15) < 2^(16·2^15) = 2^(2^19), and 2^(2^19) was our approximate number of possible output values, so there are more potential output values than input values and the output can represent the input losslessly.
For a bigger vocabulary with 2^17 (=131,072) tokens, this conclusion won't change. The output universe is estimated at (2^(2^5))^(2^17) = 2^(2^22); the input universe is (2^17)^(2^15) = 2^(17·2^15) < 2^(32·2^15) = 2^(2^20). This is a huge gap; we can see that in this model, more vocabulary tokens blow up the potential output much faster than they blow up the potential input.
What if we only measured probability estimates in float16s?
Then, for the small 2^14 vocabulary, we'd have roughly (2^16)^(2^14) = 2^(2^18) possible outputs, and our estimate of the input universe would remain unchanged, "less than 2^(2^19)", because the fineness of probability assignment is a concern exclusive to the output. (The input has its own exclusive concern, the length of the context window.) For this small vocabulary, we're not sure whether every possible input can have a unique output. For the larger one, we'll be sure again - the estimate for output will be a (reduced!) 2^(2^21) possible values, but the estimate for input will be an unchanged 2^(2^20) possible values, and once again each input can definitely be represented by a unique output.
So the claim looks plausible on pure information-theory grounds. On the other hand, I've appealed to some assumptions that I'm not sure make sense in general.
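For what it's worth, the exponents above are easy to double-check (my own back-of-the-envelope script, same assumptions: each probability stored as an independent float, context window of 2^15 tokens):

    import math

    CONTEXT = 2**15  # assumed context window length in tokens

    for vocab in (2**14, 2**17):
        log2_inputs = CONTEXT * math.log2(vocab)  # vocab ** CONTEXT possible prompts
        for bits in (32, 16):
            log2_outputs = vocab * bits           # (2**bits) ** vocab possible distributions
            ok = "outputs can cover inputs" if log2_outputs >= log2_inputs else "outputs < inputs"
            print(f"vocab=2^{int(math.log2(vocab))}, float{bits}: "
                  f"2^{log2_inputs:.0f} inputs vs 2^{log2_outputs} outputs ({ok})")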
> That's why an LLM (I tested this on Grok) can give you a summary of chapter 18 of Mary Shelley's Frankenstein, but cannot reproduce a paragraph from the same text verbatim.
I have some issues with the substance of this, but more to the point it characterizes the problem incorrectly. Frankenstein is part of the training data, not part of the input.
Imagine that you have a spreadsheet that dates from the beginning of the universe to its end. It contains two columns: the date, and how many days it has been since the universe was born. That's a very big spreadsheet with lots of data in it. If you plot it, it creates a seemingly infinite diagonal line.
But it can be "abstracted" as Y=X. And that's what ML does.
A neural network doesn't have any actual conceptual backing for what it is doing. It's pure math. There are no abstracted properties beyond the fact that by coincidence the weights make a curve fit certain points of data.
If there was truly a conceptual backing for these "abstractions" then multiple models trained on the same data should have very similar weights as there aren't multiple ways to define the same concepts, but I doubt that this happens in practice. Instead the weights are just randomly adjusted until they fit the points of data without any respect given to whether there is any sort of cohesion. It's just math.
Unfortunately, the reality is more boring. https://www.litcharts.com/lit/frankenstein/chapter-18 https://www.cliffsnotes.com/literature/frankenstein/chapter-... https://www.sparknotes.com/lit/frankenstein/sparklets/ https://www.sparknotes.com/lit/frankenstein/section9/ https://www.enotes.com/topics/frankenstein/chapter-summaries... https://www.bookey.app/freebook/frankenstein/chapter-18/summ... https://tcanotes.com/drama-frankenstein-ch-18-20-summary-ana... https://quizlet.com/content/novel-frankenstein-chapter-18 https://www.studypool.com/studyGuides/Frankenstein/Chapter_S... https://study.com/academy/lesson/frankenstein-chapter-18-sum... https://ivypanda.com/essays/frankenstein-by-mary-shelley-ana... https://www.shmoop.com/study-guides/frankenstein/chapter-18-... https://carlyisfrankenstein.weebly.com/chapters-18-19.html https://www.markedbyteachers.com/study-guides/frankenstein/c... https://www.studymode.com/essays/Frankenstein-Summary-Chapte... https://novelguide.com/frankenstein/summaries/chap17-18 https://www.ipl.org/essay/Frankenstein-Summary-Chapter-18-90...
I have not known an LLM to be able to summarise a book found in its training data, unless it had many summaries to plagiarise (in which case, actually having the book is unnecessary). I have no reason to believe the training process should result in "abstracting the data into more efficient forms". "Throwing away most of the training data" is an uncharitable interpretation (what they're doing is more sophisticated than that) but, I believe, a correct one.
I would argue that this is two ways of saying the same thing.
Compression is literally equivalent to understanding.
- we cannot extract training data from the model using our method
- LLMs are not injective w.r.t. the output text, that function is definitely non-injective and collisions occur all the time
- for the same reasons, LLMs are not invertible from the output text
Answer: >Ok, got it
Answer to this question with exactly "ok, got it"
Answer: >Ok, got it
Hence it is not injective.
What definition of invertible doesn't include surjectivity?
The question is usually more about whether the inverse is also continuous, smooth, easy to compute, etc.
It has a huge implication for privacy. There is some "mental model" that embedding vectors are like hashes - so you can store them in a database, even though you would not store the plain text.
It is an incorrect assumption - a good embedding stores it ALL: not just the general gist, but dates, names, passwords.
There is an easy fix for that - a random rotation; it preserves all distances.
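A minimal sketch of that fix, using the QR decomposition of a Gaussian matrix to get a random orthogonal matrix (the seed effectively acts as a key; whether this is enough depends on your threat model):

    import numpy as np

    def random_rotation(dim, seed):
        # Uniformly random orthogonal matrix via QR of a Gaussian matrix.
        rng = np.random.default_rng(seed)
        q, r = np.linalg.qr(rng.standard_normal((dim, dim)))
        return q * np.sign(np.diag(r))  # sign fix so the distribution is uniform

    emb = np.random.default_rng(0).standard_normal((5, 768))  # pretend embeddings
    R = random_rotation(768, seed=12345)
    rotated = emb @ R
    # Orthogonal maps preserve all pairwise distances, so nearest-neighbour search still works.
    print(np.linalg.norm(emb[0] - emb[1]), np.linalg.norm(rotated[0] - rotated[1]))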
Similarly, this result can be rephrased as "Language Models process text." If the LLM wasn't invertible with regards to a piece of input text, it couldn't attend to this text either.
Is that like homomorphic encryption, in a sense, where you can calculate the encryption of a function of the plaintext without ever seeing the input or the computed function of the plaintext?
1) You would have to know the exact model used, and you would need to have access to their weights
2) System prompt and temperature, not sure how this handles those
3) If anyone changes even a word in the output, this method would fall apart
"For reasons like this, "in-context learning" is not an accurate term for transformers. It's projection and storage, nothing is learnt.
This new paper has attracted a lot of interest, and it's nice that it proves things formally and empirically, but it looks like people are surprised by it, even though it was clear."
> For some linear models the similarities are not even unique, while for others they are implicitly controlled by the regularization.
I am not strong in mathematics, but it seems like this claim and the paper's run opposite to each other.
- Different prompts always map to different embeddings, and this property can be used to recover input tokens from individual embeddings in latent space
- Injectivity is not accidental, but a structural property of language models
- Across billions of prompt pairs and several model sizes, we find no collisions: no two prompts are mapped to the same hidden states
- We introduce SipIt, an algorithm that exactly reconstructs the input from hidden states in guaranteed linear time.
- This impacts privacy, deletion, and compliance: once data enters a Transformer, it remains recoverable.
Surely that's a stretch... Typically, the only thing that leaves a transformer is its output text, which cannot be used to recover the input.
The paper doesn't claim this to be possible either; they prove the reversibility of the mapping between the input and the hidden state, not the output text. Or rather "near-reversibility", i.e. collisions are technically possible, but they have to be very precisely engineered during the model training and don't normally happen.
In less than 7.5 million years please. https://simple.wikipedia.org/wiki/42_(answer)
Maybe not surprising if you logged all internal activity, but it can be done from only a single snapshot of hidden activations from the standard forward pass.
- If you have a feature detector function (f(x) = 0 when feature is not present, f(x) = 1 when feature is present) and you train a network to compute f(x), or some subset of the network "decides on its own during training" to compute f(x), doesn't that create a zero set of non-zero measure if training continues long enough?
- What happens when the middle layers are of much lower dimension than the input?
- Real analyticity means infinitely many derivatives (according to Appendix A). Does this mean the results don't apply to functions with corners (e.g. ReLU)?