Briefly, the idea is to recursively decompose tasks into the simplest possible steps, call (relatively small) LLMs as agents to execute one step at a time, and use a clever voting scheme to choose how to execute each step. The authors use this technique to get a relatively small LLM to solve Towers of Hanoi with 20 rings (~1M steps), all of it in natural language.
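For anyone skimming, here's roughly how I read the recipe. This is a minimal Python sketch of the decompose/execute/vote loop as I understand it, not the paper's actual code: the `llm` callable, the prompts, and the "is this atomic?" check are all my own assumptions.

    from collections import Counter
    from typing import Callable, List

    def vote(llm: Callable[[str], str], prompt: str, n: int = 5) -> str:
        """Sample the model n times and keep the majority answer."""
        samples = [llm(prompt) for _ in range(n)]
        return Counter(samples).most_common(1)[0][0]

    def solve(llm: Callable[[str], str], task: str,
              depth: int = 0, max_depth: int = 10) -> List[str]:
        """Recursively split `task` into simpler subtasks; execute atomic ones."""
        if depth >= max_depth or vote(llm, f"Is this a single atomic step? Answer yes or no: {task}") == "yes":
            return [vote(llm, f"Execute this step and state the result: {task}")]
        subtasks = vote(llm, f"Split this into two simpler subtasks, one per line: {task}").splitlines()
        results: List[str] = []
        for sub in subtasks:
            results.extend(solve(llm, sub, depth + 1, max_depth))
        return results

You'd call it with something like solve(my_llm, "Move 20 rings from peg A to peg C"); the voting is just majority over repeated samples, which is the simplest version of what I took "clever voting scheme" to mean.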
The most obvious question is whether other tasks, more interesting -- less "rote" -- than Towers of Hanoi, can similarly be recursively decomposed into simple steps. I'm not sure that's always possible.
The paper itself, however... meh.
No mention of MoE. One would think this is a logical evolution of that idea, but there's not a mention of it (that I saw). And its choice of benchmark task, Towers of Hanoi, is admittedly weak.
LLM papers are starting to look like the last decade of JS frameworks and tools, only with less code and more academics. That's disappointing, because I think a lack of pragmatism and grounding is now holding the field back...
It's a big "if" whether the decomposition and the voting stay accurate for anything other than toy problems.