marojejian•1h ago
I found this paper both really interesting and clear. No one part is very novel, but it composes disparate threads to obtain what look like strong results in OOD length generalization. Even on a toy task, and using a DSL (vs. being an LM), length-generalizing on simple math by >4x is impressive, from what I've read.
This also fits my priors about the key elements for unlocking better OOD compositional generalization: variable recurrence, step-wise curriculum training to build depth-invariant algorithms, and discrete bottlenecks.
Finally, it's very interesting to compare this to the recent paper below, which argues for the benefits of continuous latent spaces:
Reasoning by Superposition: A Theoretical Perspective on Chain of Continuous Thought (https://arxiv.org/abs/2505.12514)
My take is that both papers are right: continuous spaces are more expressive and can handle tougher problem spaces (e.g. shortest path in a graph), whereas discrete spaces provide a better inductive bias for elegant algorithms that scale OOD. And I bet the two can be combined / balanced.
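To make the "key elements" concrete, here's a minimal sketch of what I mean, in PyTorch. This is my own illustration, not the paper's code: the names (DiscreteBottleneck, RecurrentReasoner), the codebook size, and the MLP step function are all made up; only the general pattern (a weight-tied step applied a variable number of times, with a hard discretization between steps) is the point.

    # Sketch only: variable recurrence + a straight-through discrete
    # bottleneck. Hypothetical names/sizes, not from the paper.
    import torch
    import torch.nn as nn

    class DiscreteBottleneck(nn.Module):
        """Snap each hidden state to its nearest codebook vector (a hard,
        discrete choice), passing gradients through with the
        straight-through estimator."""
        def __init__(self, dim: int, codebook_size: int = 64):
            super().__init__()
            self.codebook = nn.Parameter(torch.randn(codebook_size, dim))

        def forward(self, h: torch.Tensor) -> torch.Tensor:
            dists = torch.cdist(h, self.codebook)   # (batch, K) distances
            idx = dists.argmin(dim=-1)              # discrete code per state
            quantized = self.codebook[idx]          # (batch, dim)
            # Straight-through: forward pass uses the quantized value,
            # backward pass treats the bottleneck as the identity.
            return h + (quantized - h).detach()

    class RecurrentReasoner(nn.Module):
        """One weight-tied step function, unrolled a variable number of
        times, discretizing the state after every step."""
        def __init__(self, dim: int):
            super().__init__()
            self.step = nn.Sequential(
                nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            self.bottleneck = DiscreteBottleneck(dim)

        def forward(self, x: torch.Tensor, n_steps: int) -> torch.Tensor:
            h = x
            for _ in range(n_steps):        # depth chosen per input, not fixed
                h = self.bottleneck(self.step(h))
            return h

The curriculum part would be training with small n_steps and growing it over time, the hope being that the shared step function becomes depth-invariant enough to run for more steps at test time than ever seen in training. A real implementation would also need something like VQ-VAE commitment losses so the codebook itself actually trains; I've left that out for brevity.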