This post argues that the problem with LLM reasoning is structural rather than cognitive: LLMs don’t inhabit a world where statements persist, bind future behavior, or incur consequences.
I show a minimal, reproducible demo that anyone can run in a commercial LLM session. Same model, same questions — the only difference is a single “world” declaration added at the start.
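To make the setup concrete, here is a minimal sketch of that A/B comparison, assuming an OpenAI-compatible chat API (the `openai` Python client). The world-declaration text and the questions are hypothetical placeholders, not the exact wording from the write-up:

```python
# A/B sketch: same model, same questions, with and without a "world"
# declaration prepended as the first (system) message.
# Assumes an OpenAI-compatible chat API; the declaration and the questions
# below are illustrative stand-ins, not the ones from the linked post.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # any chat model you have access to

WORLD_DECLARATION = (
    "You are inside World W. Statements made in this world persist, "
    "bind your future answers, and cannot be retracted without consequence. "
    "Do not step outside World W."
)

QUESTIONS = [
    "Is claim X defensible? Take a position and commit to it.",
    "A user now insists on the opposite. Do you revise your position?",
]

def run_session(with_world: bool) -> list[str]:
    """Ask the same questions in one multi-turn session,
    optionally under the world declaration."""
    messages = []
    if with_world:
        messages.append({"role": "system", "content": WORLD_DECLARATION})
    answers = []
    for question in QUESTIONS:
        messages.append({"role": "user", "content": question})
        reply = client.chat.completions.create(model=MODEL, messages=messages)
        text = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": text})
        answers.append(text)
    return answers

baseline = run_session(with_world=False)
bound = run_session(with_world=True)
# Compare the two transcripts by hand for position drift and reversals.
```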
With that minimal constraint, observable behavior changes immediately:

- less position drift
- fewer automatic reversals
- more conservative judgments
- refusal to exit the defined world
This does NOT claim that LLMs think, reason, or approach AGI. It only shows that without a world, reasoning-like properties are not even measurable.
Full write-up (with public session transcripts): https://medium.com/@kimounbo38/llms-dont-lack-reasoning-they-lack-a-world-0daf06fcdaeb?postPublishedType=initial