If we're going to be scientific about LLMs, the first thing engineering has to grasp is that words are not the sum of knowledge, not by a long shot. Words are arbitrary stand-ins for a specific neural syntax that is nonetheless paradoxically idiosyncratic. Nothing arbitrary can be automated without that underlying neural syntax. LLMs discard the basis of knowledge and run on empty.
As a general-purpose tool, AI is essentially vaporware. Time to set off in new directions.
“We refute (based on empirical evidence) claims that humans use linguistic representations to think.”
— Ev Fedorenko Language Lab, MIT, 2024