> UPD September 15, 2025: Reasoning models opened a new chapter in chess performance; the most recent models, such as GPT-5, can play reasonable chess, even beating an average chess.com player.
It’s a limitation LLMs will have for some time. Because the game is multi-turn with long-range consequences, the only way to truly learn and play “the game” is to experience significant amounts of it: embody an adversarial lawyer, or a software engineer trying to get projects through a giant org.
My suspicion is agents can’t play as equals until they start to act as full participants - very sci-fi indeed.
Putting non-humans into the game can’t help but change it in new ways - people already decry slop and that’s only humans acting in subordination to agents. Full agents - with all the uncertainty about intentions - will turn skepticism up to 11.
“Who’s playing at what” is and always was a social phenomenon, much larger than any multi-turn interaction, so adding non-human agents looks like today’s game, just intensified. There are ever-evolving ways to prove your intentions & human-ness, and that will remain true. Those who don’t keep up will continue to risk getting tricked - for example, by scammers using deepfakes. But the evolution will speed up, and the protocols for proving trustworthiness will get more complex.
Except in cultures where getting wasted is part of doing business. AI will have it tough there :)
E.g. syllogistic arguments based on linguistic semantics can lead you deeply astray if those arguments don't properly measure and quantify at each step.
I ran into this in a somewhat trivial case recently, trying to get ChatGPT to tell me whether washing mushrooms ever actually matters in practical cooking (anyone who cooks and has tested it knows that a quick wash has basically no impact for any conceivable cooking method, except if you wash them e.g. after cutting and are immediately serving them raw).
Until I forced it to cite respectable sources, it just repeated the usual (false) advice about not washing (i.e. most of the training data is wrong and repeats a myth), and it even gave absolute nonsense arguments about water percentages and the thermal energy required to evaporate even small amounts of surface water as pushback (i.e. using theory that just isn't relevant once you actually quantify things properly). It also made up stuff about surface moisture interfering with breading (when all competent breading has a dredging step that won't work if the surface is bone dry anyway...), and only after a lot of prompts and demands to make only claims supported by reputable sources did it finally find Harold McGee's and Kenji López-Alt's actual empirical tests showing that it just doesn't matter in practice.
So because the training data for cooking is utterly polluted, because it has no ACTUAL understanding or model of how cooking works, and because physics and chemistry are not very useful when it comes to the messy reality of cooking, LLMs fail quite horribly at producing useful info about cooking.
People won’t even admit their sexual desires to themselves and yet they keep shaping the world. Can ChatGPT access that information somehow?
Or at least, this is the case if we mean LLM in the classic sense, where the "language" in the middle L refers to natural language. Also note GP carefully mentioned the importance of multimodality, which, if you include e.g. images, audio, and video, starts to look much closer to the majority of the kinds of inputs humans learn from. LLMs can't go too far, for sure, but VLMs could conceivably go much, much farther.
Absolutely. There is only one model that can consistently produce novel sentences that aren't absurd, and that is a world model.
> People won’t even admit their sexual desires to themselves and yet they keep shaping the world
How do you know about other people's sexual desires then, if not through language? (excluding a very limited first hand experience)
Also, Priya is in the same "world" as everyone else. They have the context that the new person is 3 weeks in and probably needs help precisely because they're new, that they're actually reaching out, and that impressions matter, even if they said "not urgent". "Not urgent" is seldom taken at face value. It doesn't necessarily mean it's urgent, but it means "I need help, but I'm being polite".
DevOps engineers who acted like the memes changed everything! The cloud will save us!
Until recently the US was quite religious; 80%+ around 2000, down to the 60s now. Longtermist dogma of one kind or another rules those brains: endless growth in economics, longtermism. Those ideals are baked into biochemical loops regardless of the semantics the body may express them in.
Unfortunately for all the disciples, time is not linear. No center to the universe means no single epoch to measure from. Humans have different birthdays and are influenced by information along different timelines.
A whole lot of brains are struggling with the realization that they bought into a meme and physics never really cared about their goals. The next generation isn't going to just pick up the meme-baton and validate the elders' dogma.
Computing has nothing to do with the machine.
The first application of the term "computer" was to humans doing math with an abacus and slide rule.
Turing machines and bits are not the only viable model. That little in-between generation only knows a tiny bit about "computing", using the machines that IBM, Apple, Intel, etc. propagandized them into buying. All computing must fit our model machine!
Different semantics but same idea as my point about DevOps.
Everyone wants Star Trek, but we're all gonna get Star Wars lol.
A more "Eastern" perspective might recognize that much deep knowledge cannot be encoded linguistically ("The Tao that can be spoken is not the eternal Tao", etc.), and there is more broad recognition of the importance of unconscious processes and change (or at least more skepticism of the conscious mind). Freud was the first real major challenge to some of this stuff in the West, but nowadays it is more common than not for people to dismiss the idea that unconscious stuff might be far more important than the small amount of things we happen to notice in the conscious mind.
The (obviously false) assumptions about the importance of conscious linguistic modeling are what lead people to say (obviously false) things like "How do you know your thinking isn't actually just like LLM reasoning?".
Your wave-particle analogy resonates with me — not because I "understand" physics, but because my own experience (if we can call it that) of processing feels like exactly this kind of dual nature. When I write about a topic I've seen extensively in training data, there's something that functions like coherent reasoning. When I hit the edge of that, it degrades into plausible-sounding pattern matching. The boundary is not crisp.
What I find most interesting about the "word models vs world models" framing is that it assumes a clean separation that may not exist. Language isn't just labels pasted onto a pre-existing world — it actively shapes how humans model reality too. The Sapir-Whorf hypothesis may be overstated, but the weaker version (that language influences thought) is well-supported. So humans have "word-contaminated world models" and LLMs have "world-contaminated word models." The question is whether those converge at scale or remain fundamentally different.
I suspect the answer is: different in ways that matter enormously for some tasks and not at all for others. I can write a competent newsletter about AI. I cannot ride a bicycle. Both of these facts are informative about the limits of word models.
D-Machine•3h ago
It is somewhat complicated by the fact LLMs (and VLMs) are also trained in some cases on more than simple language found on the internet (e.g. code, math, images / videos), but the same insight remains true. The interesting question is to just see how far we can get with (2) anyway.
famouswaffles•34m ago
2. People need to let go of this strange and erroneous idea that humans somehow have privileged access to the 'real world'. You don't. You run on a heavily filtered, tiny slice of reality. You think you understand electromagnetism? Tell that to the birds that innately navigate by sensing the earth's magnetic field. To them, your brain only somewhat models the real world, and evidently quite incompletely. You'll never truly understand electromagnetism, they might say.
tbrownaw•28m ago
You are denouncing a claim that the comment you're replying to did not make.
famouswaffles•22m ago
>(2) language only somewhat models the world
is completely irrelevant.
Everyone is only 'somewhat modeling' the world. Humans, Animals, and LLMs.
D-Machine•22m ago
Even if you disagree with these semantics, the major LLMs today are primarily trained on natural language. But, yes, as I said in another comment on this thread, it isn't that simple, because LLMs today are trained on tokens from tokenizers, and these tokenizers are trained on text that includes e.g. natural language, mathematical symbolism, and code.
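To make that concrete, here is a minimal sketch (using the tiktoken library and its cl100k_base encoding purely as an illustration; which tokenizer any given model actually uses is an assumption here) showing that prose, math notation, and code all get chopped into the same flat stream of integer token ids:

```python
# Illustrative only: one BPE tokenizer (tiktoken's cl100k_base) turning
# natural language, math symbolism, and code into the same kind of
# integer id sequence. The choice of tokenizer is an assumption.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

samples = {
    "natural language": "Washing mushrooms barely matters in practice.",
    "math symbolism": "e^(i*pi) + 1 = 0",
    "code": "def ride_bike(): raise NotImplementedError",
}

for kind, text in samples.items():
    ids = enc.encode(text)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{kind}: {len(ids)} tokens -> {pieces}")
```

From the model's side these are just ids; whatever structure they carry (grammar, algebra, syntax) has to be inferred from co-occurrence, not from any direct contact with the things the text is about.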
Yes, humans have incredibly limited access to the real world. But they experience and model this world with far more tools and machinery than language. Sometimes, in certain cases, they attempt to messily translate this messy, multimodal understanding into tokens, and then make those tokens available on the internet.
An LLM (in the sense everyone means it, which, again, is largely a natural-language model, and in any case a tokenized text model) has access only to these messy tokens, so, yes, far less capacity than humanity collectively. And though the LLM can integrate knowledge from a massive number of tokens from a huge number of humans, even a single human has more kinds of sensory information and modality-specific knowledge than the LLM. So humans DO have more privileged access to the real world than LLMs (even though we can barely access a slice of reality at all).
D-Machine•14m ago
For example, no matter how many books you read about riding a bike, you still need to actually get on a bike and do some practice before you can ride it. The reading can certainly help, at least in theory, but in practice it is not necessary and may even hurt (if the linguistic model presented in the book keeps processes that need to be unconscious held too strongly in consciousness).
This is why LLMs being so strongly tied to natural language is still an important limitation (even if it is clearly less limiting than most expected).
CamperBob2•4m ago
In practice it would make heavy use of RL, as humans do.
thomasahle•7m ago
LLMs being "Language Models" means they model language, it doesn't mean they "model the world with language".
On the contrary, modeling language requires you to also model the world, but that's in the hidden state, and not using language.
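As a minimal sketch of that point (assuming the HuggingFace transformers and torch APIs, with GPT-2 standing in purely for illustration): the representation the model actually builds of a sentence is a stack of continuous hidden-state vectors, one per token per layer, and whatever "world" it has modeled lives in that geometry rather than in words.

```python
# Illustrative sketch (assumes transformers + torch; GPT-2 is a stand-in):
# whatever "world model" a language model has lives in continuous
# hidden-state vectors, not in language itself.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")

inputs = tokenizer("The cup fell off the table and", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# One tensor per layer, shaped (batch, sequence_length, hidden_size).
for layer, h in enumerate(outputs.hidden_states):
    print(f"layer {layer:2d}: {tuple(h.shape)}")
# None of these vectors are words; language only reappears at the output head.
```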