(Actually, "pretty hard to justify" might be understating it. How can we confidently extract any signal from a failure to solve a problem if we don't even know if the problem is solvable?)
> How can we confidently extract any signal from a failure to solve a problem if we don't even know if the problem is solvable?
You rule out all the stuff that doesn’t work.
Yes, this is difficult and usually very costly. Credit assignment is a deep problem. But if you didn't find yourself in a hard-mode situation, you wouldn't be using RL.
- Supervised learning (e.g. matching labels to pictures)
- Unsupervised learning / self-supervised learning (pretraining)
- Reinforcement learning
Now the confusing thing is that Dwarkesh Patel instead calls pretraining "supervised learning" and you call reinforcement learning a form of unsupervised learning.
In modern RL, we also train deep nets on some (often non-trivial) loss function. And RL generates its own training data, so it blurs the line with SSL. I'd say, however, it's more complex and more computationally expensive: you need many / long rollouts to find a signal to learn from. All of this process is automated, so from that perspective it blurs the line with UL too :-) Though its dependence on the reward is what makes the difference.
Overall, going from more structured to less structured, I'd order the learning approaches: SL, SSL (pretraining), RL, UL.
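To make that ordering concrete, here is a minimal, schematic sketch of where the training signal comes from in each paradigm. It is illustrative only: `model.sample` and `reward_fn` are assumed interfaces, not any particular library's API.

```python
import torch
import torch.nn.functional as F

def supervised_step(model, x, y):
    # SL: a provided label y per example; the most structured signal.
    return F.cross_entropy(model(x), y)

def pretraining_step(model, tokens):
    # SSL / pretraining: the data supervises itself; every next token is a target.
    logits = model(tokens[:, :-1])
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           tokens[:, 1:].reshape(-1))

def rl_step(model, prompt, reward_fn):
    # RL: the model first has to generate its own training data (a rollout),
    # and the whole trajectory is then credited with a single scalar reward.
    tokens, log_probs = model.sample(prompt)   # possibly many / long rollouts per useful signal
    reward = reward_fn(tokens)                 # e.g. 1.0 if the output verifies, else 0.0
    return -(reward * log_probs.sum())         # REINFORCE-style surrogate loss
```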
“Pretraining” is not a correlate of the learning paradigms, it is a correlate of the “fine-tuning” process.
Also LLM pretraining is unsupervised. Dwarkesh is wrong.
However, the way I'm seeing this is that an RL rollout may involve, say, 100 small decisions out of a pool of 1,000 possible decisions. Each training step will slightly upregulate/downregulate a given decision in that step's context. There will be uncertainty about which decision was helpful/harmful -- we only have 1 bit of information, after all -- but this setup, where many small decisions are slowly learned across many examples, seems like it would lend itself well to generalization (e.g., instead of 1 bit in one context, you get a hundred 0.01-bit insights across 100 contexts). There may be some benefits not captured by comparing the number of bits relative to pretraining.
As the blog says, "Fewer bits, sure, but very valuable bits." That seems like another relevant factor: learning these small decisions may be vastly more valuable for producing accurate outputs than learning through pretraining.
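A rough way to see the "hundred 0.01-bit insights" intuition in code: in a REINFORCE-style update, the single pass/fail bit nudges every decision made during the rollout, so the same bit gets smeared across many contexts. This is a toy sketch; `policy.rollout`, `verify`, and the fixed `baseline` are assumptions for illustration, not anyone's actual training loop.

```python
import torch

def reinforce_update(policy, optimizer, prompt, verify, baseline=0.5):
    actions, log_probs = policy.rollout(prompt)   # e.g. ~100 small decisions this episode
    reward = 1.0 if verify(actions) else 0.0      # the whole trajectory gets one bit of feedback
    advantage = reward - baseline                 # center it so failures push down, successes push up

    # Every decision taken in this context is nudged by the same small amount;
    # across many rollouts in many contexts, the credit assignment averages out.
    loss = -(advantage * log_probs).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```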
ex. how this reads if it is free-associating: "shower thought: RL on LLMs is kinda just 'did it work or not?' and the answer is just 'yes or no', yes or no is a boolean, a boolean is 1 bit, then bring in information theory interpretation of that, therefore RL doesn't give nearly as much info as, like, a bunch of words in pretraining"
or
ex. how this reads if it is relaying information gathered: "A common problem among people at companies who speak honestly with me about the engineering side off the air is figuring out how to get more out of RL. The biggest wall currently is the cross product of RL training being slowww and the lack of GPUs. More than one of them has shared with me that if you can crack the part where the model gets very little info out of one run, then the GPU problem goes away. You can't GPU your way out of how little info they get"
I am continuing to assume it is much more the former than the latter, given your thorough-sounding explanation and my prior that he's not shooting the shit about specific technical problems off-air with multiple grunts.
Karpathy says that basically "RL sucks" and that it's like "sucking bits of supervision through a straw".
https://x.com/dwarkesh_sp/status/1979259041013731752/mediaVi...
You can kind of hear him pull in these extremely differing views on the future from very different sources, try and synthesize them, and also come out with some of his own perspective this year - I think it’s interesting. At the very least, his perspective is hyper-informed - he’s got fairly high-trust access to a lot of decision makers and senior researchers - and he’s smart and curious.
This year we’ve had him bring in the 2027 folks (AI explosion on schedule), Hinton (LLMs are literally divorced from reality, and a total dead end), Ilya (we probably need emotions for superintelligence; also, I won’t tell you my plan), Karpathy, Dario (maybe twice?), and Gwern, all with very, very different perspectives on what’s coming and why.
So, I think if you read him as one of the chroniclers of this era his own take is super interesting, and he’s in a position to be of great use precisely at synthesizing and (maybe) predicting; he should keep it up.
It's a bit like LLM glue. The glue isn't the main material - but it's the one that holds it all together.
I think the framing of the discussion in general is kind of misleading though, because it kind of avoids the question of "information inefficient about what?"
In RL, the model is becoming more informative about a stimulus-action-feedback space; in SL the model is becoming more informative about a stimulus-feedback space. RL is effectively "built for" searching a larger space.
In situations like the essay where you are directly comparing SL and RL, you're kind of saying for RL "the action space is restricted to dictionary X and the feedback space is binary yes or no" and for SL "the feedback space is restricted to dictionary X". So in a certain sense you're equating the RL action space to the SL feedback space.
In that case, maybe searching over suboptimal regions of the RL-action/SL-feedback space is inefficient. But the reason RL exists, I think, is that it generalizes to situations where the feedback and action spaces are bigger. Maybe you want to differentially associate different responses with different rewards, or sample a response space that is so large you can't define it a priori. Then SL breaks down?
Maybe this is obvious but I guess I get a little uneasy about talking about information efficiency of RL and SL without a broader framework of equivalence and what information is being represented by the model in both cases. It seems to me RL is a kind of superset of SL in terms of what it is capable of representing, which maybe leads to inefficiencies when it's not being used to its fullest.
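To make the "equating the RL action space to the SL feedback space" point concrete: if the action is a single label drawn from dictionary X and the reward is 1 for the correct label, a policy-gradient update collapses to a noisier, sampled version of the supervised cross-entropy gradient, since it only learns from whichever label it happened to guess. A toy sketch (names and shapes illustrative):

```python
import torch
import torch.nn.functional as F

def sl_step(model, x, y):
    # SL: the gradient flows directly through the log-probability of the correct label.
    return F.cross_entropy(model(x), y)

def rl_step_same_space(model, x, y):
    # RL restricted to the same space: sample one label as the "action",
    # observe a binary reward, and reinforce only that sampled label.
    logits = model(x)
    probs = F.softmax(logits, dim=-1)
    action = torch.multinomial(probs, num_samples=1).squeeze(-1)  # one guess per example
    reward = (action == y).float()                                # 1 bit of feedback per example
    log_prob = F.log_softmax(logits, dim=-1).gather(-1, action.unsqueeze(-1)).squeeze(-1)
    return -(reward * log_prob).mean()  # learns only when the sampled guess happens to be right
```

When the sampled action misses, the gradient says nothing about which label was right, which is where the inefficiency shows up; but nothing in the RL formulation requires the action space to be a fixed dictionary, which is the generalization the comment above is pointing at.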
So is RL required to preserve those logic circuits?
There seems to be a trade-off between compute efficiency and output format on one side and intelligence on the other.
Imagine forcing someone who has never used chopsticks to eat with chopsticks. The results wouldn't be good: the instruction "use chopsticks" has taken effect, but the underlying "chopstick use" capability isn't there.
If your SFT data pushes your LLM too far past its capabilities, it'll teach it to try to do a thing it can't do.
If your SFT traces assume your LLM can do 10-digit multiplication, the LLM won't learn 10-digit multiplication from them. It'll learn to attempt 10-digit multiplication, and it'll fail.
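One way to see why, in code: the standard SFT objective is teacher-forced imitation of the trace, and nothing in it checks whether the model's own free-running attempt would succeed. A generic sketch (not from any paper; `model.generate` is an assumed helper):

```python
import torch.nn.functional as F

def sft_step(model, trace):
    # What SFT optimizes: next-token imitation of the demonstration trace,
    # e.g. a worked 10-digit multiplication the base model can't actually perform.
    logits = model(trace[:, :-1])
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           trace[:, 1:].reshape(-1))

def capability_check(model, prompt, correct_answer):
    # What SFT never measures: whether the model's own attempt gets the answer right.
    attempt = model.generate(prompt)   # assumed generation helper
    return attempt == correct_answer
```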
So the "chopstick capability" was already there (at least partially), but the SFT process actively degraded it. It seems less about the data being too hard and more about the parameter-efficient methods (like LoRA) overwriting or interfering with delicate reasoning circuits just to satisfy the formatting loss.
They write "we utilize 10% randomly selected from the training set as a validation set and the original validation set as a test set for evaluation. During the validation phase, we measure validation loss and save the weights of the best validation loss for every 5% of the training steps. We train for 10 epochs with a batch size of 4." so it might be as simple as not including the base model in the validation checkpoints, meaning that the first validated checkpoint is after half an epoch, which is plenty of time to do damage if the fine-tuning method/hyperparameter configuration isn't chosen well. Unfortunately, they don't graph their training curves.
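If that reading is right, the fix is cheap: evaluate the untouched base model before the first training step and let it count as a candidate "best" checkpoint. A minimal sketch of such a loop (with hypothetical `evaluate` / `train_one_step` callables, not the paper's code):

```python
import copy

def train_with_base_checkpoint(model, train_one_step, evaluate, steps, eval_every):
    # evaluate(model) -> validation loss; train_one_step(model) performs one update.
    # Score the untouched base model first, so "best checkpoint" can be step 0.
    best_loss = evaluate(model)
    best_state = copy.deepcopy(model.state_dict())

    for step in range(1, steps + 1):
        train_one_step(model)
        if step % eval_every == 0:
            loss = evaluate(model)
            if loss < best_loss:
                best_loss = loss
                best_state = copy.deepcopy(model.state_dict())

    model.load_state_dict(best_state)  # may restore the base model if fine-tuning only hurt
    return model
```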
andyjohnson0•2mo ago
https://en.wikipedia.org/wiki/Reinforcement_learning
b112•2mo ago
Because that law doesn't hold when malice has a massive profit motive and almost zero downside.
Spammers, popups, spam, clickbait, all of it and more: not stupid, but planned.
gpvos•2mo ago
Anyway, I was just saying that however irritating, it's likely just an omission out of forgetfulness, not deliberate clickbait. A minor application of Hanlon's razor.
Seeing the downvotes and even a flag, it appears I'll have to lower my expectation of people's cultural baggage here.
jsnell•2mo ago
(Like, would you expect people to expand LLM or AGI in a title?)
robrenaud•2mo ago
VR stands for verified rewards and is the single bit per rollout that is the heart of the post. Maybe we can convince dang to update the title.
dang•2mo ago
https://news.ycombinator.com/newsguidelines.html
vessenes•2mo ago
Upshot - don’t hate - pick up the vocab, it’s part of the learning process.