The first sections present us with a collection of statements on the intrinsic limitations of LLMs, which are supposedly incapable of real understanding and unable to tell truth from falsehood; which is ironic given that the arguments the authors provide are mostly wrong or irrelevant non-sequiturs. For example, the claim "what is being automated here is not cognition but language" mistakes the capabilities of LLMs for their training technique, and has been refuted multiple times (e.g. by showing that LLMs do think ahead, and that they can even perceive their own internal state).
Anyway, I fed it to Claude, which provided an insightful critique, encouragingly highlighting the valid parts while gently dismissing many of the claims and assumptions about LLMs' limitations. In summary, quote: "the sharp human/machine dichotomy it draws rests on an idealized picture of human cognition". It ends on a recommendation: "The paper would benefit from engaging more seriously with the possibility that the epistemic differences it identifies are matters of degree rather than kind."
throw310822•1h ago