
Q-learning is not yet scalable

https://seohong.me/blog/q-learning-is-not-yet-scalable/
164•jxmorris12•13h ago

Comments

whatshisface•11h ago
The benefit of off-policy learning is fundamentally limited by the fact that data from ineffective early exploration isn't that useful for improving on later more refined policies. It's clear if you think of a few examples: chess blunders, spasmodic movement, or failing to solve a puzzle. This becomes especially clear once you realize that data only becomes off-policy when it describes something the policy would not do. I think the solution to this problem is (unfortunately) related to the need for better generalization / sample efficiency.
getnormality•11h ago
Doesn't this claim prove too much? What about the cited dog that walked in 20 minutes with off-policy learning? Or are you making a more nuanced point?
AndrewKemendo•11h ago
Q-learning isn’t scalable because of the stream barrier; however, streaming DRL (TD(λ)) is scalable:

https://arxiv.org/abs/2410.14606

Note that this is from Turing Award winner Richard Sutton’s lab at the University of Alberta.

RL works

getnormality•10h ago
But does this address scaling to long-horizon tasks, which is what the article is about?
s-mon•10h ago
While I like the blogpost, I think the use of unexplained acronyms undermines its opportunity to be useful to a wider audience. Small nit: make sure acronyms and jargon are explained.
keremk•3h ago
For blogposts like this, where the content is very good but not very approachable due to the assumption of extensive prior knowledge, I find an AI tool very useful for explaining and simplifying. I just used the new browser Dia for this, and it worked really well for me. Or you can use your favorite model provider and copy and paste. This way the post stays concise, yet you can use your AI tools to ask questions and clarify.
anonthrowawy•22m ago
I actually think that's what made it crisp.
andy_xor_andrew•10h ago
The magic thing about off-policy techniques such as Q-learning is that they will converge on an optimal result even if they only ever see sub-optimal training data.

For example, you can use a dataset of chess games from agents that move totally randomly (with no strategy at all) as input for Q-learning, and it will still converge on an optimal policy (albeit more slowly than with higher-quality inputs).
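The convergence claim can be sketched in a toy tabular setting (my own illustration, not from the thread): collect transitions with a uniformly random behavior policy in a tiny chain MDP, run Q-learning on that data, and read off the greedy policy.

```python
import random

random.seed(0)

# Tiny deterministic chain MDP: states 0..4, actions 0 (left) / 1 (right).
# Reaching state 4 yields reward 1 and ends the episode.
N, GOAL, GAMMA, ALPHA = 5, 4, 0.9, 0.1

def step(s, a):
    s2 = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

# Behavior policy: purely random key-smashing, no strategy at all.
transitions = []
for _ in range(1000):
    s, done = 0, False
    while not done:
        a = random.randint(0, 1)
        s2, r, done = step(s, a)
        transitions.append((s, a, r, s2, done))
        s = s2

# Q-learning is off-policy: sweep over the random data, and the greedy
# policy still converges to the optimum (always move right).
Q = [[0.0, 0.0] for _ in range(N)]
for _ in range(20):
    for s, a, r, s2, done in transitions:
        target = r if done else r + GAMMA * max(Q[s2])
        Q[s][a] += ALPHA * (target - Q[s][a])

greedy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(GOAL)]
print(greedy)  # [1, 1, 1, 1]
```

Of course, the state space here is tiny, so random data covers everything; the scaling discussion in the thread is about what happens when it doesn't.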

Ericson2314•9h ago
I would think this being true is the definition of the task being "ergodic" (distorting that term slightly, maybe). But I would also expect non-ergodic tasks to exist.
andy_xor_andrew•10h ago
The article mentions AlphaGo/Mu/Zero was not based on Q-Learning - I'm no expert but I thought AlphaGo was based on DeepMind's "Deep Q-Learning"? Is that not right?
energy123•10h ago
DeepMind's earlier success with Atari was based on offline Q-Learning
lalaland1125•9h ago
This blog post is unfortunately missing what I consider the bigger reason why Q learning is not scalable:

As the horizon increases, the number of possible states (usually) increases exponentially. This means you require exponentially more data to have any hope of training a Q function that can handle those states.

This is less of an issue for on-policy learning, because only near-policy states matter, and on-policy learning explicitly samples only those states. So even though there are exponentially many possible states, your training data is laser-focused on the important ones.
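A back-of-the-envelope illustration of the coverage gap (numbers made up for illustration): with A actions over horizon H there are A**H possible action histories, yet rollouts from a near-deterministic current policy only ever touch a tiny, stable slice of them.

```python
import random

random.seed(0)

A, H, EPS = 4, 20, 0.1       # hypothetical branching factor and horizon
visited = set()
for _ in range(1000):        # 1000 on-policy rollouts
    history = []
    for _ in range(H):
        # epsilon-greedy around a fixed "current policy" (always action 0)
        history.append(random.randrange(A) if random.random() < EPS else 0)
        visited.add(tuple(history))

print(A ** H)        # 1099511627776 possible action histories
print(len(visited))  # only a few thousand actually visited
```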

Ericson2314•9h ago
https://news.ycombinator.com/item?id=44280505 I think that thread might help?

Total layman here, but maybe some tasks are "uniform" despite being "deep" in such a way that poor samples still suffice? I would call those "ergodic" tasks. But surely there are other tasks where this is not the case?

lalaland1125•9h ago
Good clarification. I have edited my post accordingly.

There are situations where states increase at much slower rates than exponential.

Those situations are a good fit for Q learning.

arthurcolle•9h ago
Feel the Majorana-1
elchananHaas•8h ago
I think the article's analysis of overestimation bias is correct. The issue is that, due to the max operator in Q-learning, noise is amplified over timesteps. Some methods to reduce this bias, such as double Q-learning (https://arxiv.org/abs/1509.06461), were successful in improving RL agents' performance. Studies have found that this happens even more for states the network hasn't visited many times.
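A quick numerical sketch of why the max operator overestimates (my own illustration; the cited paper's remedy, the double estimator, decouples action selection from evaluation):

```python
import random

random.seed(0)

# Every action's true value is 0, but our Q estimates carry unit Gaussian
# noise. The max over noisy estimates is biased upward; evaluating the
# argmax with an independent second estimate removes that bias.
N_ACTIONS, TRIALS = 10, 10_000
single_sum, double_sum = 0.0, 0.0
for _ in range(TRIALS):
    qa = [random.gauss(0, 1) for _ in range(N_ACTIONS)]  # noisy estimate A
    qb = [random.gauss(0, 1) for _ in range(N_ACTIONS)]  # noisy estimate B
    single_sum += max(qa)                        # standard target: biased up
    best = max(range(N_ACTIONS), key=lambda a: qa[a])
    double_sum += qb[best]                       # double estimator: unbiased
print(single_sum / TRIALS, double_sum / TRIALS)  # ~1.5 vs ~0.0
```

The single estimator's bias (~1.5 here) is exactly what gets bootstrapped forward and amplified over long horizons.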

An exponential number of states only matters if there is no pattern to them. If there is structure the network can learn, it can perform well. This is a strength of deep learning, not a weakness. The trick is getting the right training objective, which the article claims Q-learning doesn't provide.

I do wonder if MuZero and other model-based RL systems are the solution to the author's concerns. MuZero can reanalyze prior trajectories to improve training efficiency. Monte Carlo Tree Search (MCTS) is a principled way to perform horizon reduction by unrolling the model multiple steps. The max operator in MCTS could cause similar issues, but the search progressing deeper counteracts this.

jhrmnn•7h ago
Is this essentially the same difference as between vanilla regular-grid and importance-sampling Monte Carlo integration?
briandw•9h ago
This paper assumes that you know quite a bit about RL already. If you really want to dig into RL, this intro course from David Silver (DeepMind) is excellent: https://youtu.be/2pWv7GOvuf0?si=CmFJHNnNqraL5i0s
ArtRichards•8h ago
Thank you for this link.
Onavo•7h ago
Q learning is great as a hello world RL project for teaching undergraduates.
paraschopra•6h ago
Humans actually do both. We learn on-policy by exploring the consequences of our own behavior. But we also learn off-policy, say from expert demonstrations (the difference being that we can tell good behaviors from bad, and learn from a filtered list of what we consider good behaviors). In most off-policy RL, a lot of behaviors are bad, yet they get into the training set, leading to slower training.
taneq•2h ago
> difference being we can tell good behaviors from bad

Not always! That's what makes some expert demonstrations so fascinating, watching someone do something "completely wrong" (according to novice level 'best practice') and achieve superior results. Of course, sometimes this just means that you can get away with using that kind of technique (or making that kind of blunder) if you're just that good.

GistNoesis•4h ago
The stated problem is getting off-policy RL to work, aka discovering a policy smarter than the one it was shown in its dataset.

If I understand correctly, they show random play, and expect perfect play to emerge from the naive Q-learning training objective.

In layman's terms, they expect the algorithm to observe random smashing of piano keys and produce a full-fledged symphony.

The main reason it doesn't work is that it's fundamentally Out Of Distribution training.

Neural networks work best in interpolation mode. When you get into Out Of Distribution mode, aka extrapolation mode, you have to rely on some additional regularization.

One such regularization is trying to predict the next observations, building an internal model whose features help decide the next action. Another is to unroll multiple actions in a row in your head and use the prediction as a training signal. But all these strategies are no longer in the domain of the "model-free" RL they are trying to do.

Another regularization can be making the decision function smoother, often by reducing the number of parameters (which goes against the idea of scaling).

The adage is "no plan survives first contact with the enemy". There needs to be some form of exploration: you must somehow learn about the areas of the environment where you need to operate. Without interaction with the environment, one way to do this is to "grok" a simple model of the environment (searching for a model that fits all observations perfectly, so as to build a perfect simulator) and learn on-policy from this simulation.

Alternatively, if you already have some not-so-bad demonstrations in your training dataset, you can get it to work a little better than the policy behind the dataset. That's why it seems promising, but it's really not, because it's just relying on the various facets of complexity already present in the dataset.

If you allow an iterative information-gathering phase from the environment, interlaced with off-policy training, you're in the well-known domain of Bayesian methods for efficient exploration of the space, like kriging, Gaussian process regression, multi-armed bandits, and energy-based modeling, which let you trade more compute for sample efficiency.

The principle is that you try to model what you know and don't know about the environment. There is a trade-off between the uncertainty you have because you haven't explored an area of the space yet and the uncertainty you have because the model doesn't fit the observations perfectly yet. You force yourself to explore unknown areas so as not to have regrets (Thompson Sampling), but still sample promising regions of the space.
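Thompson Sampling in its simplest form can be sketched on Bernoulli bandits (illustrative only; the arm payout rates below are made up): keep a Beta posterior per arm, sample one plausible value from each posterior, and pull the arm with the highest sample.

```python
import random

random.seed(0)

true_p = [0.2, 0.5, 0.8]      # hypothetical payout rates (unknown to the agent)
alpha = [1, 1, 1]             # Beta(1, 1) uniform priors over each arm
beta = [1, 1, 1]
pulls = [0, 0, 0]
for _ in range(2000):
    # Sample one plausible payout rate per arm from its posterior...
    samples = [random.betavariate(alpha[a], beta[a]) for a in range(3)]
    arm = max(range(3), key=lambda a: samples[a])  # ...and act greedily on it
    reward = 1 if random.random() < true_p[arm] else 0
    alpha[arm] += reward      # posterior update: success count
    beta[arm] += 1 - reward   # posterior update: failure count
    pulls[arm] += 1
print(pulls)  # pulls concentrate on the best arm (index 2)
```

Uncertain arms keep getting sampled occasionally (their posteriors are wide), while the empirically best arm gets most of the pulls, which is exactly the regret-aware exploration described above.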

In contrast to on-policy learning, this "Bayesian exploration learning" learns all possible policies in an off-policy fashion. Your robot doesn't only learn to go from A to B in the fastest way. Instead, it explicitly tries to learn various locomotion policies, like trotting, galloping, and other gaits, uses them to go from A to B, but spends more time perfecting galloping if galloping seems faster than trotting.

Possibly you can also learn adaptive strategies, as in sim-to-real experiments, where the learned policy depends on unknown parameters, like how much weight your robot carries, and estimates these unknown parameters on the fly to become more robust (aka filling in the missing parameters to let optimal "Model Predictive Control" work).

mrfox321•44m ago
Read the paper.

They control for the data being in-distribution.

Their dataset also has examples of the problem being solved.

itkovian_•4h ago
Completely agree, and I think it's a great summary. To summarize very succinctly: you're chasing a moving target, where the target changes based on how you move. There's no ground truth to zero in on in value-based RL. You minimize a difference in which both sides of the equation contain your APPROXIMATION.

I don’t think it’s hopeless though. I actually think RL is very close to working, because what it lacked this whole time was a reliable world model/forward dynamics function (because then you don’t have to explore, you can plan). And now we’ve got that.

toootooo•3h ago
How can we eliminate Q-learning’s bias in long-horizon, off-policy tasks?

Why SSL was renamed to TLS in late 90s (2014)

https://tim.dierks.org/2014/05/security-standards-and-name-changes-in.html
2•Bogdanp•37s ago•0 comments

Show HN: Mana Blade – 3D Mmorpg in React Three Fiber and WebGPU

https://manablade.com
1•jverrecchia•53s ago•0 comments

Show HN: GitHubba – Tinder-like app for discovering GitHub repositories

https://apps.apple.com/tr/app/githubba/id6747093581
1•chtslk•2m ago•0 comments

Boeing trims projection for 20-year jet demand

https://www.reuters.com/business/aerospace-defense/boeing-trims-projection-20-year-jet-demand-2025-06-14/
1•rntn•3m ago•0 comments

Air India 171 Accident: RAT Got Deployed

https://www.youtube.com/watch?v=8XYO-mj1ugg
1•belter•3m ago•0 comments

Show HN: Using ReARM as Version Manager

https://docs.rearmhq.com/tutorials/using-rearm-as-version-manager.html
1•taleodor•4m ago•0 comments

Tintin, Hergé and Chang – A Friendship That Changed the World

https://thewire.in/books/tintin-herge-and-chang-a-friendship-that-changed-the-world
1•thunderbong•4m ago•0 comments

Show HN: NeKernel v0.0.3e1 – Security Patches

https://github.com/nekernel-org/nekernel/releases/tag/v0.0.3e1
1•Amlal•4m ago•0 comments

Ask HN: Prevent Secrets from Committing to Repos

1•abhijais1•5m ago•0 comments

Clone Syscall: Spawning Processes, Threads and Containers in Linux

https://substack.com/home/post/p-164406490
1•kanishkarj•10m ago•0 comments

Use AI like a $10M consultant instead of like a search engine

https://old.reddit.com/r/Entrepreneurs/comments/1l8oz08/use_ai_like_a_10m_consultant_instead_of_like_a/
2•ashher00•19m ago•1 comments

The Evolution of Linux Binaries in Targeted Cloud Operations

https://unit42.paloaltonetworks.com/elf-based-malware-targets-cloud/
1•mooreds•21m ago•0 comments

Open Steno Project

http://www.openstenoproject.org/
1•tosh•21m ago•1 comments

"But Everybody Knows This"

https://www.rickmanelius.com/p/but-everybody-already-knows-this
2•mooreds•22m ago•0 comments

SAQ to air at 100th anniversary on July 2nd 2025

https://alexander.n.se/celebrate-100-years-with-saq-grimeton/
1•austinallegro•23m ago•0 comments

Show HN: Simple Solver for Word Puzzles Game

https://wordsdescrambler.com/
1•gogo61•24m ago•0 comments

I fight bots in my free time

https://xeiaso.net/talks/2025/bsdcan-anubis/
1•xena•26m ago•0 comments

Show HN: Fomr – The Fastest Form Builder

https://fomr.io/
1•bohdan_kh•29m ago•0 comments

Dan Luu and I consider possible reasons for bridge collapse

https://statmodeling.stat.columbia.edu/2025/06/15/dan-luu-and-i-consider-possible-reasons-for-collapse-of-bridge/
1•Tomte•30m ago•0 comments

Bad Advice

https://collabfund.com/blog/very-bad-advice/
1•wsostt•30m ago•0 comments

Parsing, Not Guessing

https://codehakase.com/shorts/parsing-not-guessing/
1•codehakase•30m ago•0 comments

Show HN: I made my Excel timetable sync to Google Calendar

https://www.tronic247.com/converting-my-boring-excel-timetable-to-google-calendar
1•notracks•31m ago•0 comments

The Keyset

https://dougengelbart.org/content/view/273/
2•tosh•31m ago•0 comments

In 'Mountainhead,' a Copper Pot Offers a Subtle (and Silly) Display of Wealth

https://www.nytimes.com/2025/06/11/style/mountainhead-wealth-turbot-pot.html
1•mooreds•31m ago•0 comments

Show HN: Mini Debug Quiz – find your debugging archetype in 60 s

https://hubblequiz.com/mini-quiz
1•juanmera01•32m ago•0 comments

Free SwiftUI Templates

https://swiftviews.vercel.app
2•ajagatobby•34m ago•1 comments

AI sceptic in LLM adventure land

https://aplus.rs/2025/ai-sceptic-in-llm-adventure-land/
2•ingve•36m ago•0 comments

He Has Months Left. His Son Hopes an A.I. Version of Him Can Live On

https://www.nytimes.com/interactive/2025/06/13/magazine/ai-avatar-life-death.html
1•surbas•39m ago•2 comments

Is the decline of reading poisoning our politics?

https://www.vox.com/politics/414049/reading-books-decline-tiktok-oral-culture
3•Ozarkian•40m ago•0 comments

Terence Tao Lex Fridman Podcast [video]

https://www.youtube.com/watch?v=HUkBz-cdB-k
2•nill0•41m ago•0 comments