https://arxiv.org/abs/2410.14606
Note that this is from Turing award winner Richard Sutton’s lab at UofA
RL works
For example, you can use a dataset of chess games from agents that move totally randomly (with no strategy at all) as the input to Q-Learning, and it will still converge on an optimal policy (albeit more slowly than with higher-quality data)
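For intuition, here is a toy sketch of that claim (a hypothetical 1-D gridworld, not the paper's setup or any chess benchmark): log transitions from a purely random behaviour policy, then run plain Q-learning sweeps over the fixed dataset.

```python
import random

# Toy 1-D chain: states 0..5, actions 0 = left, 1 = right, reward 1 for reaching state 5.
N_STATES, GOAL, GAMMA, ALPHA = 6, 5, 0.9, 0.1

def step(s, a):
    ns = max(0, min(GOAL, s + (1 if a == 1 else -1)))
    return ns, float(ns == GOAL)

# 1) Log transitions from a purely random behaviour policy (no strategy at all).
dataset = []
for _ in range(2000):
    s = random.randrange(N_STATES)
    a = random.randrange(2)
    ns, r = step(s, a)
    dataset.append((s, a, r, ns))

# 2) Offline Q-learning: repeatedly sweep the fixed dataset, never touching the env again.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
for _ in range(200):
    for s, a, r, ns in dataset:
        target = r + GAMMA * max(Q[ns])        # bootstrap with max over next actions
        Q[s][a] += ALPHA * (target - Q[s][a])  # standard Q-learning update

print([int(Q[s][1] >= Q[s][0]) for s in range(N_STATES)])  # greedy policy: all 1s ("go right")
```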
As the horizon increases, the number of possible states (usually) increases exponentially. This means you need exponentially more data to have any hope of training a Q function that can handle those states.
This is less of an issue for on-policy learning, because only near-policy states are important, and on-policy learning explicitly samples only those states. So even though there are exponentially many possible states, your training data is laser-focused on the important ones.
Total layman here, but maybe some tasks are "uniform" despite being "deep" in such a way that poor samples still suffice? I would call those "ergodic" tasks. But surely there are other tasks where this is not the case?
There are situations where states increase at much slower rates than exponential.
Those situations are a good fit for Q-learning.
An exponential number of states only matters if there is no pattern to them. If there is some structure that the network can learn, then it can perform well. This is a strength of deep learning, not a weakness. The trick is getting the right training objective, which the article claims Q-learning doesn't provide.
I do wonder if MuZero and other model based RL systems are the solution to the author's concerns. MuZero can reanalyze prior trajectories to improve training efficiency. The Monte Carlo Tree Search (MCTS) is a principled way to perform horizon reduction by unrolling the model multiple steps. The max operator in MCTS could cause similar issues but the search progressing deeper counteracts this.
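Roughly, what "horizon reduction by unrolling the model" could look like (this is a toy sketch, not MuZero's actual algorithm; `model`, `value_fn` and the lookahead helper are placeholder names):

```python
# Instead of the 1-step bootstrapped target r + gamma * max_a Q(s', a), unroll a learned
# model k steps in imagination and only bootstrap at the end of the horizon.

def greedy_action(model, value_fn, state, n_actions, gamma=0.99):
    # One imagined step of lookahead per action; MuZero proper would use MCTS visit counts here.
    scores = []
    for a in range(n_actions):
        ns, r = model(state, a)
        scores.append(r + gamma * value_fn(ns))
    return scores.index(max(scores))

def k_step_target(model, value_fn, state, action, k, n_actions, gamma=0.99):
    """k-step target: accumulate imagined rewards, bootstrap with value_fn only at the horizon."""
    total, discount = 0.0, 1.0
    s, a = state, action
    for _ in range(k):
        s, r = model(s, a)  # imagined transition, no environment interaction needed
        total += discount * r
        discount *= gamma
        a = greedy_action(model, value_fn, s, n_actions, gamma)
    return total + discount * value_fn(s)
```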
Not always! That's what makes some expert demonstrations so fascinating, watching someone do something "completely wrong" (according to novice level 'best practice') and achieve superior results. Of course, sometimes this just means that you can get away with using that kind of technique (or making that kind of blunder) if you're just that good.
If I understand correctly, they show random play, and expect perfect play to emerge from the naive Q-learning training objective.
In layman's terms, they expect the algorithm to observe random smashing of keys on a piano and produce a full-fledged symphony.
The main reason it doesn't work is that it's fundamentally out-of-distribution training.
Neural networks work best in interpolation mode. When you get into out-of-distribution mode, aka extrapolation mode, you have to rely on some additional regularization.
One such regularization you can add is trying to predict the next observations, building an internal model whose features help decide the next action. Another is unrolling multiple actions in your head and using the prediction as a training signal. But all of these strategies are no longer in the domain of the "model-free" RL they are trying to do.
Another regularization can be making the decision function smoother, often by reducing the number of parameters (which goes against the idea of scaling).
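A hypothetical sketch of the first regularizer above, a next-observation prediction head sharing features with the Q head (all names and dimensions are made up for illustration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A shared encoder feeds both a Q head and a next-observation prediction head.
class QWithDynamicsAux(nn.Module):
    def __init__(self, obs_dim=8, n_actions=4, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.q_head = nn.Linear(hidden, n_actions)
        # Predict the next observation from (features, one-hot action).
        self.dyn_head = nn.Linear(hidden + n_actions, obs_dim)

    def forward(self, obs, action_onehot):
        z = self.encoder(obs)
        q = self.q_head(z)
        next_obs_pred = self.dyn_head(torch.cat([z, action_onehot], dim=-1))
        return q, next_obs_pred

def loss_fn(model, obs, action_onehot, q_target, next_obs, aux_weight=0.1):
    q, next_obs_pred = model(obs, action_onehot)
    q_taken = (q * action_onehot).sum(-1)
    td_loss = F.mse_loss(q_taken, q_target)          # usual Q regression to a TD target
    dyn_loss = F.mse_loss(next_obs_pred, next_obs)   # auxiliary "predict what happens next"
    return td_loss + aux_weight * dyn_loss
```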
The adage is "no plan survives first contact with the enemy". There needs to be some form of exploration. You must somehow learn about the areas of the environment where you need to operate. Without interacting with the environment, one way to do this is to "grok" a simple model of the environment (searching for a model that fits all observations perfectly, so as to build a perfect simulator) and learn on-policy from that simulation.
Alternatively, if you already have some not-so-bad demonstrations in your training dataset, you can get it to work a little better than the dataset's policy. That's why it seems promising, but it really isn't, because it's just relying on all the various facets of the complexity already present in the dataset.
If you allow some iterative phase of gathering information from the environment, interleaved with off-policy training, you are in the well-known domain of Bayesian methods for efficient exploration of the space, like kriging, Gaussian process regression, multi-armed bandits and energy-based modeling, which let you trade more compute for sample efficiency.
The principle is that you try to model what you know and don't know about the environment. There is a trade-off between the uncertainty you have because you have not explored that area of the space yet, and the uncertainty you have because the model doesn't fit the observations perfectly yet. You force yourself to explore unknown areas so as not to have regrets (Thompson Sampling), but still sample promising regions of the space.
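For concreteness, a minimal Thompson Sampling sketch on a made-up Bernoulli bandit (the arm probabilities are invented for the demo):

```python
import random

# Sample a plausible success rate for each arm from its Beta posterior, then pull the arm
# whose sample is highest. Poorly explored arms keep wide posteriors, so they still get
# tried (exploration), while arms that look good get pulled more often (exploitation).
true_rates = [0.2, 0.5, 0.7]         # unknown to the learner
successes = [1, 1, 1]                # Beta(1, 1) uniform priors
failures = [1, 1, 1]

for _ in range(1000):
    samples = [random.betavariate(successes[a], failures[a]) for a in range(3)]
    arm = samples.index(max(samples))
    if random.random() < true_rates[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

print([successes[a] + failures[a] - 2 for a in range(3)])  # pull counts; arm 2 dominates
```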
In contrast to on-policy learning, this "Bayesian exploration learning" learns all possible policies in an off-policy fashion. Your robot doesn't only learn to go from A to B in the fastest way. Instead it explicitly tries to learn various locomotion policies, like trotting, galloping and other gaits, and uses them to go from A to B, but spends more time perfecting galloping since galloping seems to be faster than trotting.
Possibly you can also learn an adaptive strategy, like they do in sim-to-real experiments, where your learned policy depends on unknown parameters such as how much weight your robot carries, and the policy estimates these unknown parameters on the fly to become more robust (i.e. filling in the missing parameters so that optimal Model Predictive Control can work).
They control for the data being in-distribution
Their dataset also has examples of the problem being solved.
I don't think it's hopeless though. I actually think RL is very close to working, because what it lacked this whole time was a reliable world model / forward dynamics function (with one, you don't have to explore, you can plan). And now we've got that.
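A toy sketch of that "plan instead of explore" idea, assuming you trust some learned forward dynamics function (random-shooting MPC; `model` and everything else here are placeholder names, not tied to any specific world-model system):

```python
import random

# Given model(state, action) -> (next_state, reward), pick actions by sampling random
# action sequences, rolling them out in imagination, and executing the best first action.
def plan(model, state, n_actions, horizon=10, n_candidates=200, gamma=0.99):
    best_return, best_first_action = -float("inf"), 0
    for _ in range(n_candidates):
        seq = [random.randrange(n_actions) for _ in range(horizon)]
        s, total, discount = state, 0.0, 1.0
        for a in seq:                    # roll the whole candidate out in imagination
            s, r = model(s, a)
            total += discount * r
            discount *= gamma
        if total > best_return:
            best_return, best_first_action = total, seq[0]
    return best_first_action             # execute only the first action, then replan
```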