https://en.m.wikipedia.org/wiki/Betteridge's_law_of_headline...
It's pretty easy: causal reasoning. Causal, not just statistical correlation as LLMs do, with or without "CoT".
If you mean deterministic rather than probabilistic, even Pearl-style causal models are probabilistic.
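To make that distinction concrete, here's a minimal sketch (my own illustration, not from either comment) of a Pearl-style structural causal model, where a confounder makes the observational correlation P(Y|X=1) differ from the interventional effect P(Y|do(X=1)). The graph and parameters are assumptions picked just for the example:

```python
import random

# Assumed toy SCM: confounder Z -> X, Z -> Y, plus X -> Y.
# Conditioning on X (correlation) mixes in Z's effect; intervening
# with do(X) cuts the Z -> X edge and isolates the causal effect.

def sample(do_x=None):
    z = random.random() < 0.5                                   # confounder
    x = do_x if do_x is not None else (random.random() < (0.8 if z else 0.2))
    y = random.random() < (0.3 + 0.4 * x + 0.2 * z)
    return z, x, y

N = 100_000
obs = [sample() for _ in range(N)]

# Observational: P(Y=1 | X=1) -- confounded by Z
p_y_given_x1 = sum(y for _, x, y in obs if x) / sum(1 for _, x, y in obs if x)

# Interventional: P(Y=1 | do(X=1)) -- force X, leaving Z alone
p_y_do_x1 = sum(y for _, _, y in (sample(do_x=True) for _ in range(N))) / N

print(f"P(Y|X=1)     = {p_y_given_x1:.3f}")   # ~0.86: inflated by Z
print(f"P(Y|do(X=1)) = {p_y_do_x1:.3f}")      # ~0.80: pure causal effect
```

Note the model is still probabilistic throughout, which is the point of the reply above: causal does not mean deterministic.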
I think the author is circling around the idea that reasoning means producing statements in a formal system: having a set of axioms and a set of production rules, and generating new strings/sentences/theorems using those rules. This is how math is formalized. It allows us to extrapolate: to make new "theorems" or constructions that weren't in the "training set". A toy sketch of that picture follows.
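Here's a minimal version of that idea (using Hofstadter's MIU system as the formal system; the choice of system is mine, just for illustration): one axiom, four production rules, and mechanical derivation of theorems that were never enumerated up front:

```python
from collections import deque

# Toy formal system (Hofstadter's MIU system): axiom "MI" plus four
# string-rewriting production rules. "Reasoning" here is just applying
# the rules to derive strings not present in the starting set.

AXIOM = "MI"

def successors(s):
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                        # Rule 1: xI  -> xIU
    if s.startswith("M"):
        out.add(s + s[1:])                      # Rule 2: Mx  -> Mxx
    out |= {s[:i] + "U" + s[i + 3:]             # Rule 3: III -> U
            for i in range(len(s) - 2) if s[i:i + 3] == "III"}
    out |= {s[:i] + s[i + 2:]                   # Rule 4: UU  -> (deleted)
            for i in range(len(s) - 1) if s[i:i + 2] == "UU"}
    return out

# Breadth-first generation: all theorems reachable in a few rule applications.
theorems, frontier = {AXIOM}, deque([AXIOM])
for _ in range(3):
    frontier = deque(t for s in frontier for t in successors(s) - theorems)
    theorems |= set(frontier)

print(sorted(theorems, key=len))
```

Every printed string is a "new" theorem in the sense the comment means: derived by the rules, not retrieved from anything resembling training data.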
Reasoning, thinking, knowing, feeling, understanding, etc.
Or at the very least, our rubrics and heuristics for determining whether someone (or something) thinks, feels, knows, etc., no longer work. In particular, people create tests for those things believing they understand what they are testing for, when _most human beings_ would also fail those tests.
I think a _lot_ of foundational work needs to be done on clearly defining these terms and putting them on a sounder basis before we can really move forward on saying whether machines can do those things.
I think this is the most important critique that undercuts the paper's claims. I'm less convinced by the other point. I think backtracking and/or parallel search is something future papers should definitely look at in smaller models.
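For what "backtracking" might mean in this setting, a hedged sketch (my illustration, not anything from the paper): depth-first search that abandons a partial solution as soon as it violates a constraint, shown here on 4-queens; the analogy is a model revisiting a failed reasoning step instead of committing to it.

```python
# Backtracking sketch: place one queen per row, pruning any column choice
# that conflicts with an earlier queen, and retreating from dead ends.

def solve(n, cols=()):
    if len(cols) == n:                          # every row placed: solution
        return cols
    for c in range(n):
        # prune: same column or same diagonal as an earlier queen
        if all(c != pc and abs(c - pc) != len(cols) - r
               for r, pc in enumerate(cols)):
            result = solve(n, cols + (c,))
            if result:
                return result                   # propagate first success up
        # otherwise: backtrack and try the next column in this row
    return None                                 # dead end: caller backtracks

print(solve(4))   # e.g. (1, 3, 0, 2)
```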
The article is also right about the overreaching, broad philosophical claims that seem common when discussing AI and reasoning.