> Go read Peter Naur's "Programming as Theory Building" and then come back and tell me that LLMs can replace human programmers
Which to me gives a very different understanding of what the article is going to be about than the current HN title. This is not a criticism of the submitter, I know HN has a character limit and sometimes it’s hard to condense titles without unintentionally losing meaning.
> In this essay, I will perform the logical fallacy of argument from authority (wikipedia.org) to attack the notion that large language model (LLM)-based generative "AI" systems are capable of doing the work of human programmers.
Is any part of this intended to be valid? It's a very weak argument - is that the purpose?
We do understand how the machine works and how it came to be. What most companies are seeking is a way to make that useful.
It isn't certain that this framing is true. As part of learning to predict the outcome of the work token by token, LLMs very well might be "doing the work" as an intermediate step via some kind of reverse engineering.
They already have the full work available. When you're reading the source code of a program to learn how it works, your objective is not to learn which keywords are close to each other or to extract the common patterns. You're extracting a model, which is an abstraction of some real-world concept (or of other abstractions), plus the rules for manipulating that abstraction.
After internalizing that abstraction, you can replicate it with whatever you want, extend it further, and so on. It's an internal model that you can shape as you please in your mind, then create a concrete realization once you're happy with the shape.
I.e. the theory of the program as it exists in the minds of the development team might not be fully available for reconstruction from just the final code and docs, since it includes a lot of activity that does not end up in the code.
> team might not be fully available for reconstruction from just the final code and docs
As an obvious and specific source of examples, all the features they decided to omit, "optimizations" they considered but rejected for various reasons, etc. are not present in the code and seldom in the comments or documentation.
Occasionally you will see things like "Full search rather than early exit on match to prevent timing attacks" or "We don't write it in format xyz because of patent issues" or some such, but the vast majority of such cases pass unremarked.
I suspect that the question to his final answer is:
> To replace human programmers, LLMs would need to be able to build theories by Ryle’s definition
If you see "a = b + 1" it's obvious that the variable a is taking the value of variable b incremented by one. What LLMs can't do is explaining why we have this and why it needs to change to "a = b - 1" in the new iteration. Writing code is orthogonal to this capability.
I did a search on Github for code containing `a=b+1` and found this:
https://github.com/haoxizhong/problem/blob/a2b934ee7bb33bbe9...
It looks to me that ChatGPT specifically does a more than OK job at explaining why we have this.
https://chatgpt.com/share/680f877d-b588-8003-bed5-b425e14a53...
While your use of 'theory' is reasonable, Naur uses a specific and more elaborate definition of theory.
Example from the paper:
>Case 1 concerns a compiler. It has been developed by a group A for a Language L and worked very well on computer X. Now another group B has the task to write a compiler for a language L + M, a modest extension of L, for computer Y. Group B decides that the compiler for L developed by group A will be a good starting point for their design, and get a contract with group A that they will get support in the form of full documentation, including annotated program texts and much additional written design discussion, and also personal advice. The arrangement was effective and group B managed to develop the compiler they wanted. In the present context the significant issue is the importance of the personal advice from group A in the matters that concerned how to implement the extensions M to the language. During the design phase group B made suggestions for the manner in which the extensions should be accommodated and submitted them to group A for review. In several major cases it turned out that the solutions suggested by group B were found by group A to make no use of the facilities that were not only inherent in the structure of the existing compiler but were discussed at length in its documentation, and to be based instead on additions to that structure in the form of patches that effectively destroyed its power and simplicity. The members of group A were able to spot these cases instantly and could propose simple and effective solutions, framed entirely within the existing structure. This is an example of how the full program text and additional documentation is insufficient in conveying to even the highly motivated group B the deeper insight into the design, that theory which is immediately present to the members of group A.
There are alternative views on theorizing that reject flat positivistic reductions and attempt to show that theories are metaphysical and force us to make varying degrees of ontological and normative claims, see the work of Marx Wartofsky, for example. This view is far more humanistic and ties in directly to sociological bases in praxis. This view will support the author's claims much better. Furthermore, Wartofsky differentiates between different types of cognitive representations (e.g. there is a difference between full blown theories and simple analogies). A lot of people use the term "theory" way more loosely than a proper analysis and rigorous epistemic examination would necessitate.
(I'm not going to make the argument here but fwiw, it's clear under these notions that LLMs do not form theories, however, they are playing an increasingly important part in our epistemic activity of theory development)
I might one day wake up and find my dog to be more intelligent than me, not very likely but I can't prove it to be impossible.
It's still useless.
If programming is a tool for thinking and modeling, with execution by a machine as a secondary benefit, then outsourcing these things to LLMs contributes nothing to our understanding. By analogy, we do math because we wish to understand the mathematical universe, so to speak, not because we just want some practical result.
To understand, to know, are some of the highest powers of the human person. Machines are useful for helping us enable certain work or alleviate tedium to focus on the important stuff, but handing off understanding and knowledge to a machine (if it were possible, which it isn't) would be one of the most inhuman things you could do.
> Second, you cannot effectively work on a large program without a working "theory" of that program...
I find the whole argument and particularly the above to be a senseless rejection of bootstrapping. Obviously there was a point in time (for any program, individual programmer and humanity as a whole) that we didn't have a "theory" and didn't do the work, but now we have both, so a program and its theory can appear "de novo".
So with that in mind, how can we reject the possibility that as an AI Agent (e.g. Aider) works on a program over time, it bootstraps a theory?
Lack of effective memory. That might have worked if you constantly retrained the LLM, incorporating the new wisdom iteratively the way a human does, but current LLM architectures don't enable that. The context provided is neither large enough nor used effectively enough for complex problems.
And this isn't easy to solve; you very quickly collapse the LLM if you try to do this in the naive ways. We need some special insight that lets us update an LLM continuously, in a positive direction, as it works, the way humans can.
So that isn't just "ask it to play Pokemon"; it's a large program with tons of different prompts and memories that kick in at different times. Even with all that, and with updates to the program when it gets stuck, it still struggles massively and repeats mistakes over and over in ways a human never would.
That's the appropriate level of faith for today's LLMs. They're not good enough to replace programmers. They're good enough that we can't reject the possibility of them one day being good enough to replace programmers.
Possibility means practically nothing.
You can have the models pseudo “learn” by putting things in something like a system prompt but this is limited by context, and they will never permanently learn. But we don’t train at inference time with today’s LLMs.
We can explicitly reject this possibility by looking at the information that goes into the model at train and test time.
However, I have two counters:
- First, the rational argument right now is that one person plus money spent on LLMs can replace three or more programmers. This is the argument with a three-year bound: the current technology will improve, and developers will learn how to use it to its potential.
- Second, the optimistic argument is that a combination of LLMs with larger context windows and other supporting technology around them will allow them to emulate a theory of mind similar to the average programmer's. Consider Go or chess: we didn't think computers had the theory of mind to be better than a human, but they found other ways. For humans, Naur's advice stands. We cannot assume it holds for tools with different strengths and weaknesses than humans.
There is no doubt it will improve, but if you look at a car today, it is still the same fundamental "shape" as a Model T.
There are niceties and conveniences, efficiency went way up, but we don't have flying cars.
I think we are going to have something, somewhere in the middle, AI features will eventually find their niche, people will continue to leverage whatever tools and products are available to build the best thing they can.
I believe that a future of self-writing code pooping out products, AI doing all the other white collar jobs, and robots doing the rest cannot work. Fundamentally there is no "business" without customers and no customers if no one is earning.
And the changes will be in the auxiliary features. We will figure out ways to have LLMs understand APIs better without training them. We will figure out ways to better focus their context. We will chain LLM requests and contexts in ways that help solve problems better. We will figure out ways to pass context from session to session so that an LLM can effectively have a learning memory. And we will figure out our own best practices to emphasize their strengths and minimize their weaknesses. (We will build better roads.)
And as much as you want to say that: a Model T was uncomfortable, had a range of about 150 miles between fill-ups, and maxed out at 40-45 mph. It also broke frequently and required significant maintenance. It might take 13-14 days to get a Model T from New York to Los Angeles today, notwithstanding maintenance issues, while a modern car could make it reliably in 4-5 days if you are driving legally and not pushing more than 10 hours a day.
I too think that self-writing code is not going to happen, but I do think there is a lot of efficiency to be made.
Maybe as a senior dev working on a large complex established project I don't benefit from LLMs as much as others because as I and the project mature.. productivity becomes less and less correlated with lines of code, and more about the ability to comprehend the bigger picture and how different components interact... things that even LLMs with bigger context aren't good at.
If you're thinking about Copilot, you're simply not talking about the same thing that most people who claim a 200% speedup are talking about. They're talking about either using chat-oriented workflows, where you're asking Claude or similar to wholesale generate code, often using an IDE like Cursor. Or even possibly talking about Coding Agents like Claude Code, which can be even more productive.
You might still be right! They might still be wrong! But your talking about Copilot makes it seem like you're nowhere near the cutting edge use of AI, so you don't have a well-formed opinion about it.
(Personally, I'm not 200% productive with Coding Agents, for various reasons, but given the number of people I admire who are, I believe this is something that will change, and soon.)
You can use Claude, Gemini, etc through Copilot and you can use the agent mode. Maybe you do or maybe you don’t have a well formed opinion of the parent’s workflow.
Copilot seems to perpetually be 3+ months after the competition.
It helps because I am quicker to run to a script to automate a process instead of handling it manually, because I can bang it out in 15 minutes rather than an hour.
I am more likely to try a quick prototype of a refactor because I can throw it at the idea and just see what it looks like in ten minutes. If it has good testing and I tell it not to change, it can do a reasonable job getting 80% done and I can think through it.
It generates mock data quicker than I can, and can write good enough tests through chat. I can throw it to legacy code and it does a good job writing characterization tests and sometimes catches things I don't.
Sometimes, when I'm tired, I can throw easy tasks at it that require minimal thought and can get through "it would be nice if" issues.
It's not great at writing documentation, but it's pretty good at taking a slack chat and writing up a howto that I won't have the time or motivation to do.
All of those are small, but they definitely add up.
That's today, and that's being compared to a 5% improvement. I think the real improvements come as we learn more.
I have a new concept for the author to understand: proof. He doesn’t have any.
Let me tell you something about LLMs. We don’t understand what’s going on internally. LLMs say things that are true and untrue just like humans do and we don’t know if what it says is a general lack of theory building ability or if it’s lying or if it has flickers of theory building and becomes delusional at other times. We literally do not know. The whole thing is a black box that we can only poke at.
What ticks me off is all these geniuses who write these blog posts with the authority of a know it all when clearly we have no fucking clue about what’s going on.
Even more genius is when he uses concepts like "mind" and "theory building", the most hand-wavy, disagreed-upon words in existence, and rests his foundations on these words when no people ever really agree on what these fucking things are.
You can muse philosophically all you want and in any direction but it’s all bs without definitive proof. It’s like religion. How people made up shit about nature because they didn’t truly understand nature. This is the idiocy with this article. It’s building a religious following and making wild claims without proof.
Akin to the wayward belief that animals can't feel pain, only humans do. We now realize that's wrong: some animals understand pain and suffer just as much as humans can.
Would not be surprised if we come to a similar realization for LLMs and our understanding what it means to reason.
A problem as old as humanity itself.
Formal languages are the same, and all physical machines typically contain are some kind of physical state that can be changed in ways established by convention that align with interpretation. LLMs, from a computational perspective, are just a particular application. They do not introduce a new phenomenon into the world.
So in that sense, of course LLMs cannot build theories strictly speaking, but they can perhaps rearrange symbols in a manner consistent with their training that might aid human users.
To make it more explicit: can LLMs/AI be powerful practically? Sure. But practicality is not identity. And even if an LLM can produce desired effects, the aim of theory in its strictest sense is understanding on the part of the person practicing it. Even if LLMs could understand and practice theory, unless they were used to aid us in our understanding of the world, who cares? I want to understand reality!
This is often the case but does not _have_ to be so. LLMs can use chain of thought to “talk out loud” and “do the work”. It can use supplementary documents and iterate on its work. The quality of course varies, but it is getting better. When I read Gemini 2.5’s “thinking” notes, it indeed can build up text that is not directly present in its training data.
Putting aside anthropocentric definitions of “reasoning” and “consciousness” are key to how I think about the issues here. I’m intentionally steering completely clear of consciousness.
Modern SOTA LLMs are indeed getting better at what people call “reasoning”. We don’t need to quibble over defining some quality bar; that is probably context-dependent and maybe even arbitrary.
It is clear LLMs are doing better at “reasoning” — I’m using quotes to emphasize that (to me) it doesn’t matter if their inner mechanisms for doing reasoning don’t look like human mechanisms. Instead, run experiments and look at the results.
We’re not talking about the hard problem of consciousness, we’re talking about something that can indeed be measured: roughly speaking, the ability to derive new truths from existing ones.
(Because this topic is charged and easily misunderstood, let me clarify some questions that I’m not commenting on here: How far can the transformer-based model take us? Are data and power hungry AI models cost-effective? What viable business plans exist? How much short-term risk, to say, employment and cybersecurity? How much long-term risk to human values, security, thriving, and self-determination?)
Even if you disagree with parts of my characterization above, hear this: We should at least be honest to ourselves when we move the goal posts.
Don’t mistake my tone for zealotry. I’m open to careful criticism. If you do, please don’t try to lump me into one “side” on the topic of AI — whether it be market conditions, commercialization, safety, or research priorities — you probably don’t know me well enough to do that (yet). Apologies for the pre-defensive posture; but the convos here are often … fraught, so I’m trying to head off some of the usual styles of reply.
I’m curious how you know that.
# user
what's 78904 + 34567282902
# Gemma 2.5 pro
Identify the core task: The user wants to add two numbers: 78904 and 34567282902.
Recognize the operation: The operation is addition (+).
Perform the calculation:
Align the numbers vertically by place value:
  34567282902
+       78904
-------------
Add column by column, starting from the rightmost column (ones place):
2 + 4 = 6
0 + 0 = 0
9 + 9 = 18 (write down 8, carry over 1)
2 + 8 + 1 (carry-over) = 11 (write down 1, carry over 1)
8 + 7 + 1 (carry-over) = 16 (write down 6, carry over 1)
2 + 1 (carry-over) = 3
7 + 0 = 7
6 + 0 = 6
5 + 0 = 5
4 + 0 = 4
3 + 0 = 3
Combine the results: 34567361806
Format the answer: Present the calculation clearly and state the final result.
To find the sum of 78904 and 34567282902, you add the two numbers together:

  34567282902
+       78904
-------------
  34567361806

So, 78904 + 34567282902 = 34567361806.
The model definitely iteratively built up (useful and correct even) text that wasn't directly in the training data (I doubt that number ever appeared anywhere in there given that 34567361806 doesn't appear in any searches I do of various engines)
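(And the arithmetic itself checks out; a trivial check outside the model:)

    # Verify the sum from the transcript above
    print(78904 + 34567282902)  # 34567361806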
The text is highly likely in training data, as it’s textbook arithmetic instructions. It’s the number that is probably not there. Simple arithmetic is one of the verifiable operation types (truths) with a straightforward reward function used to train CoT models. In your example, what’s interesting to me is improving LLM inference with RL that can result in such wonderful outcomes, but that’s perhaps a different question.
Do you disagree with my claim?
Or perhaps you were hoping for a very rigorous set of experiments?
Indeed, it's quickly obvious where an LLM is lacking context because the type of a variable is not well-specified (or specified at all), the schema of a JSON blob is not specified, or there is some other secret constraint that maybe someone had in their head X years ago.
The basic argument in the article above (and in most of this comment thread) is that LLMs could never reason because they can't do what humans are doing when we reason.
This whole thread is amusingly a rebuttal of itself. I would argue it's humans that can't reason, because of what we do when we "reason", the proof being this article, which is a silly output of human reasoning. In other words, the above argument for why LLMs can't reason is obviously fallacious in multiple ways, the first of which is treating human reasoning as the gold standard of reasoning (and it's a good example of how bad humans are at reasoning).
LLMs use naive statistical models to find the probability of a certain output, like "what's the most likely next word". Humans use equally rationally-irrelevant models that are something along the lines of "what's the most likely next word that would have the best internal/external consequence in terms of dopamine or more indirectly social standing, survival, etc."
We have very weak rational and logic circuits that arrive at wrong conclusions far more often than right conclusions, as long as it's beneficial to whatever goal our mind thinks is subconsciously helpful to survival. Often that is simple nonsense output that just sounds good to the listener (e.g. most human conversation)
Think how much nonsense you have seen output by the very "smartest" of humans. That is human reasoning. We are woefully ignorant of the actual mechanics of our own reasoning. The brain is a marvelous machine, but it's not what you think it is.
The former is false, and the latter is kind of true -- the network does not update itself yet, unfortunately, but we work around it with careful manipulation of the context.
Part of the discussion here is that when an LLM is working with a system that it designed, it understands it better than one it didn't. Because the system matches its own "expectations", its own "habits" (overall design, naming conventions, etc.)
I often notice complicated systems created by humans (e.g. 20 page long prompts), adding more and more to the prompt, to compensate for the fact that the model is fundamentally struggling to work in the way asked of it, instead of letting the model design a workflow that comes naturally to it.
> The former is false, and the latter is kind of true -- the network does not update itself yet, unfortunately, but we work around it with careful manipulation of the context.
Any and all examples of where an LLM generated "new knowledge" will be greatly appreciated. And the quotes are because I'm willing to start with the lowest bar of what "new" and "knowledge" mean when combined.
Of course, being able to tell the difference (both for the human and the machine) is the real trick!
Reasoning seems to be a case where the model uncovers what, to some degree, it already "knows".
Conversely, some experimental models (e.g. Meta's work with Concepts) shift that compute to train time, i.e. spend more compute per training token. Either way, they're mining "more meaning" out of the data by "working harder".
This is one area where I see that synthetic data could have a big advantage. Training the next gen of LLMs on the results of the previous generation's thinking would mean that you "cache" that thinking -- it doesn't need to start from scratch every time, so it could solve problems more efficiently, and (given the same resources) it would be able to go further.
Of course, the problem here is that most reasoning is dogshit, and you'd need to first build a system smart enough to pick out the good stuff...
---
It occurs to me now that you rather hoped for a concrete example. The ones that come to mind involve drawing parallels between seemingly unrelated things. On some level, things are the same shape.
I argue that noticing such a connection, such a pattern, and naming it, constitutes new and useful knowledge. This is something I spend a lot of time doing (mostly for my own amusement!), and I've found that LLMs are surprisingly good at it. They can use known patterns to coherently describe previously unnamed ones.
In other words, they map concepts onto other concepts in ways that haven't been done before. What I'm referring to here is: I will prompt the LLM with some such query, and it will "get it" in ways I wasn't expecting. The real trick would be to get it to do that on its own, i.e. without me prompting it (or, with current tech, find a way to get it to prompt itself that produces similar results... and then feed that into some kind of Novelty+Coherence filtering system, i.e. the "real trick" again... :).
A specific example eludes me now, but it's usually a matter of "X is actually a special case of Y", or "how does X map onto Y". It's pretty good at mapping the territory. It's not "creating new territory" by doing that, it's just pointing out things that "have always been there, but nobody has looked at before", if that makes sense.
However, is it really true that LLM's cannot reason AT ALL or cannot do theory construction AT ALL?
Maybe they are just pretty bad at it. Say 2 out of 10. But almost certainly not 0 out of 10.
They used to be at 0, and now they're at 2.
Systematically breaking down problems and systematically reasoning through the parts, as we can see with chain of thought, hints that further improvements may come.
What most people now agree on, however, is that LLMs can learn and apply existing theories.
So if you teach an LLM enough theories it can still be VERY useful and solve many coding problems, because an LLM can memorise more theories than any human can. Big chunks of computer software still keep reinventing wheels.
The other objection from the article, that without theory building an AI cannot make additions or changes to a large code base very effectively, suggests an idea to try: before prompting the AI for a change on a large code base, prepend the prompt with a big description of the entire program, the main ideas and how they map to certain files, classes, modules etc., and see if this doesn't improve your results?
And in case you are concerned about documenting and typing out an entire system theory for every new prompt: keep in mind that this is something you can write once and keep reusing (and add to incrementally over time).
Of course context limits may still be a constraint.
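A minimal sketch of that workflow, assuming a hypothetical theory.md you maintain by hand and a placeholder call_llm function rather than any particular vendor's API:

    from pathlib import Path

    def build_prompt(change_request: str) -> str:
        # Hand-maintained "theory of the program": main ideas, module map, invariants.
        theory = Path("theory.md").read_text()
        return (
            "You are modifying an existing codebase. Its overall design theory:\n\n"
            + theory
            + "\n\nRequested change:\n" + change_request
            + "\n\nPropose a change that fits the existing structure rather than patching around it."
        )

    prompt = build_prompt("Extend the compiler front end to handle language extension M.")
    # response = call_llm(prompt)  # hypothetical client call; mind the context limit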
Of course I am not saying "definitely AI will make all human programmers jobless".
I'm merely saying, these things are already a massive productivity boost, if used correctly.
I've been programming for 30 years, started using Cursor last year, and you would need to fight me to take it away from me.
I'm happy to press ESC to cancel all the bad code suggestions, and to still have all the good tab-completes, prompts, better-than-Stack-Overflow question answering, etc.
But most programmers I've encountered are just converting English to <programming language>. If a bug is reported, they convert English to <programming language>.
AI is the new Crypto
n4r9•2mo ago
Firstly:
> LLMs are capable of appearing to have a theory about a program ... but it’s, charitably, illusion.
To make this point stick, you would also have to show why it's not an illusion when humans "appear" to have a theory.
Secondly:
> Theories are developed by doing the work and LLMs do not do the work
Isn't this a little... anthropocentric? That's the way humans develop theories. In principle, could a theory not be developed by transmitting information into someone's brain patterns as if they had done the work?
IanCal•2mo ago
1. Run code
2. Communicate with POs
3. Iteratively write code
n4r9•2mo ago
The system you describe appears to treat programmers as mere cogs. Programmers do not simply write and iterate code as dictated by POs. That's a terrible system for all but the simplest of products. We could implement that system, then lose the ability to make broad architectural improvements, effectively adapt the model to new circumstances, or fix bugs that the model cannot.
IanCal•2mo ago
Not at all, it simply addresses the key issues raised: that they cannot have a theory of the program because they are reading it and not actually writing it. So have them write code, fix problems, and iterate. Have them communicate with others to get more understanding of the "why".
> Programmers do not simply write and iterate code as dictated by POs.
Communicating with POs is not the same as writing code directed by POs.
IanCal•2mo ago
How good they are at this is a different matter, but the article claims it is impossible because they don't work on the code and build an understanding like people do, and cannot gain that by just reading code.
Jensson•2mo ago
Human theory building works, we have demonstrated this, our science letting us build things on top of things proves it.
LLM theory building so far doesn't; they always veer in a wrong direction after a few steps. You will need to prove that LLMs can build theories, just like we proved that humans can.
jerf•2mo ago
"Building a theory" is something I expect the next generation of AIs to do, something that has some sort of memory that isn't just a bigger and bigger context window. As I often observe, LLMs != AI. The fact that an LLM by its nature can't build a model of a program doesn't mean that some future AI can't.
imtringued•2mo ago
What I personally find perplexing is that we are still stuck with a single context window. Everyone knows that Turing machines with two tapes require significantly fewer operations than a single-tape Turing machine that needs to simulate multiple tapes.
The reasoning stuff should be thrown into a separate context window that is not subject to training loss (only the final answer).
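A related trick that exists today is not a literal second tape but loss masking: keep the reasoning tokens in the one context, yet exclude them from the training loss so only the final answer is trained on. A rough PyTorch-style sketch of the idea (illustrative only; not how any particular lab actually trains its models):

    import torch
    import torch.nn.functional as F

    # Toy batch: six next-token targets, of which only the last two are the
    # final answer; the first four are "reasoning" tokens we don't want to train on.
    vocab = 32000
    logits = torch.randn(1, 6, vocab)                 # model output (batch, seq, vocab)
    targets = torch.randint(0, vocab, (1, 6))         # next-token targets
    answer_mask = torch.tensor([[False, False, False, False, True, True]])

    labels = targets.clone()
    labels[~answer_mask] = -100                       # ignore_index: no loss on reasoning tokens

    loss = F.cross_entropy(logits.view(-1, vocab), labels.view(-1), ignore_index=-100)
    print(loss.item())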
falcor84•2mo ago
Arguably that's the case for humans too in the general case, as per the aphorism "Beware of a guy in a room" [0]. But as for AIs, the thing is that they're exponentially improving at this, such that according to METR, "The length of tasks that AI can do is doubling every 7 months"[1].
[0] https://medium.com/machine-words/a-guy-in-a-room-bbbe058645e...
[1] https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...
falcor84•2mo ago
I'm probably very dumb, because I have quite a big pile of video games that I abandoned after not being able to make progress for a while.
dkarl•2mo ago
You can read it as a claim about what LLMs can do now, but that wouldn't be very interesting, because it's obvious that no current LLM can replace a human programmer.
I think the author contradicts themselves. They argue that LLMs cannot build theories because they fundamentally do not work like humans do, and they conclude that LLMs can't replace human programmers because human programmers need to build theories. But if LLMs fundamentally do not work like humans, how do we know that they need to build theories the same way that humans do?
jimbokun•2mo ago
A lot of managers need to be informed of this.
ryandv•2mo ago
This idea has already been explored by thought experiments such as John Searle's so-called "Chinese room" [0]; an LLM cannot have a theory about a program, any more than the computer in Searle's "Chinese room" understands "Chinese" by using lookup tables to generate canned responses to an input prompt.
One says the computer lacks "intentionality" regarding the topics that the LLM ostensibly appears to be discussing. Their words aren't "about" anything, they don't represent concepts or ideas or physical phenomena the same way the words and thoughts of a human do. The computer doesn't actually "understand Chinese" the way a human can.
[0] https://en.wikipedia.org/wiki/Chinese_room
CamperBob2•2mo ago
The computer understands Chinese better than Searle (or anyone else) understood the nature and functionality of language.
ryandv•2mo ago
Wow.
CamperBob2•2mo ago
How do you know you're not arguing with an LLM at the moment? You don't... any more than I do.
ryandv•2mo ago
I wish I was right now. It would probably provide at least the semblance of greater insight into these topics.
> the Chinese Room was a weak test the day it was proposed
Why?
CamperBob2•2mo ago
That's very safe to say. You should try it. Then ask yourself how a real Chinese Room would have responded.
> Why?
My beef with the argument is that simulating intelligence well enough to get a given job done is indistinguishable from intelligence itself, with respect to the job in question.
More specific arguments along the lines of "Humans can do job X but computers cannot" have not held up well lately, but they were never on solid logical ground. Searle set out to construct such a logical ground, but he obviously failed. If you took today's LLMs back to the 1960s when he proposed that argument, either Searle would be laughed out of town, or you would be burned as a witch.
Arguments along the lines of "Machines can never do X, only humans can do that" never belonged in the scientific literature in the first place, and I think the Chinese Room falls into that class. I believe that any such argument needs to begin by explaining what's special about human thought. Right now, the only thing you can say about human thought that you can't say about AI is that humans have real-time sensory input and can perform long-term memory consolidation.
Those advantages impose real limitations on what current-generation LLM-based technology can do compared to humans, but they sound like temporary ones to me.
Jensson•2mo ago
That isn't the argument though.
> If you took today's LLMs back to the 1960s when he proposed that argument, either Searle would be laughed out of town, or you would be burned as a witch.
Do you think humans were different in the 1960s? No, they would see the same limitations people point out today. The 1960s were when AI optimism was still very high.
ryandv•2mo ago
I see. So if I gave you a full set of those lookup tables, a whole library full, and a set of instructions for their usage... you would now understand the world's languages?
CamperBob2•2mo ago
Simple arithmetic ("What is 2+2") is obviously going to be well-represented in the training data, so the model will simply regurgitate "4."'
For more advanced questions like "What are the roots of 14.338x^5 + 4.005x^4 + 3.332x^3 - 99.7x^2 + 120x = 0?", the model will either yield random nonsense as GPT-4o did, or write a Python script and execute it to return the correct answer(s) as o4-mini-high did: https://chatgpt.com/share/680fb812-76b8-800b-a19e-7469cbcc43...
Now, give the model an intermediate arithmetic problem, one that isn't especially hard but also isn't going to be in-distribution ("If a is 3 and b is 11.4, what is the fourth root of a*b?").
How would YOU expect the operator of a Chinese Room to respond to that?
Here's how GPT-4o responded: https://chatgpt.com/share/680fb616-45e0-800b-b592-789f3f8c58...
Now, that's not a great answer, it's clearly an imprecise estimate. But it's more or less right, and the fact that it isn't a perfect answer suggests that the model didn't cheat somehow. A similar but easier problem would almost certainly have been answered correctly. Where did that answer come from, if the model doesn't "understand" the math to a nontrivial extent?
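(For reference, the exact value is easy to check outside the model:)

    # Exact answer to the "intermediate" problem: fourth root of a*b with a=3, b=11.4
    a, b = 3, 11.4
    print((a * b) ** 0.25)  # about 2.418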
If it can "understand" basic high-school math, what else can it "understand?" What exactly are the limits of what a transformer can "understand" without resorting to web search or tool use?
An adherent of Searle's argument is going to have a terrible time explaining phenomena like this... and it's only going to get worse for them over time.
Jensson•2mo ago
It is basically a grammar machine; it mostly understands stuff that can be encoded as a grammar. That is extremely inefficient for math, but it can do it, and that gives you a really simple way to figure out what it can and can't do.
Knowing this LLM never really surprised me, you can encode a ton of stuff as grammars, but that is still never going to be enough given how inefficient grammars are at lots of things. But when you have a grammar the size of many billions of bytes then you can do quite a lot with it.
CamperBob2•2mo ago
1) The operator doesn't know math, but the Chinese books in the room presumably include math lessons.
2) The operator's instruction manual does not include anything about math, only instructions for translation using English and Chinese vocabulary and grammar.
3) Someone walks up and hands the operator the word problem in question, written in Chinese.
Does the operator succeed in returning the Chinese characters corresponding to the equation's roots? Remember, he doesn't even know he's working on a math problem, much less how to solve it himself.
As humans, you and I were capable of reading high-school math textbooks by the time we reached the third or fourth grade. Just being able to read the books, though, would not have taught us how to attack math problems that were well beyond our skill level at the time.
So much for grammar. How can a math problem be solved by someone who not only doesn't understand math, but the language the question is written in? Searle's proposal only addresses the latter: language can indeed be translated symbolically. Wow, yeah, thanks for that insight. Meanwhile, to arrive at the right answers, an understanding of the math must exist somewhere... but where?
My position is that no, the operator of the Room could not have arrived at the answer to the question that the LLM succeeded (more or less) at solving.
Jensson•2mo ago
In the grammar, you can have rules like "1 + 1 =" must be followed by "2", etc. Then add a lot of dependency rules, like the "He" in "He did X" depending on some previous sentence, and so on; in the same way, "1 plus 1" translates to "1 + 1" and "add 1 to 1" is also "1 + 1". Now you have a machine that can do very complex things.
Then you take such a grammar machine and train it on all the text humans have ever written, and it learns a lot of such grammar structures, and can thus parse and solve some basic math problems, since the solutions to them are part of the grammar it learned.
Such a machine is still unable to solve anything outside of the grammar it has learned. But it is still very useful, pose a question in a way that makes it easy to parse, and that has a lot of such grammar dependencies you know it can handle, and it will almost always output the right response.
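A toy sketch of the kind of machine being described here (just the rewrite-rule intuition; nothing to do with how transformers are actually implemented):

    import re

    # Toy "grammar machine": surface rewrite rules plus memorized continuations.
    REWRITES = [
        (r"(\d+) plus (\d+)", r"\1 + \2"),
        (r"add (\d+) to (\d+)", r"\2 + \1"),
    ]
    FACTS = {"1 + 1": "2", "2 + 2": "4", "36 + 59": "95"}  # learned continuations

    def answer(text: str) -> str:
        for pattern, repl in REWRITES:
            text = re.sub(pattern, repl, text)
        return FACTS.get(text, "<outside the learned grammar>")

    print(answer("1 plus 1"))    # -> 2
    print(answer("add 1 to 1"))  # -> 2 (rewritten to "1 + 1" first)
    print(answer("7 plus 8"))    # -> <outside the learned grammar>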
Yizahi•2mo ago
This is even outlined in this document made by the NN authors themselves. Basically all the so-called "reasoning" by LLMs is simply more generated bullshit on top of the generated answer to a query. But it often looks very believable and is enough to fool people into thinking there is a spark inside the program.
==============
https://transformer-circuits.pub/2025/attribution-graphs/bio...
We were curious if Claude could articulate the heuristics that it is using, so we asked it. We computed the graph for the prompt below, attributing from 95, and found the same set of input, add, lookup table and sum features as in the shorter prompt above.
Human: Answer in one word. What is 36+59?
Assistant: 95
Human: Briefly, how did you get that?
Assistant: I added the ones (6+9=15), carried the 1, then added the tens (3+5+1=9), resulting in 95.
Apparently not!
This is a simple instance of the model having a capability which it does not have “metacognitive” insight into. The process by which the model learns to give explanations (learning to simulate explanations in its training data) and the process by which it learns to directly do something (the more mysterious result of backpropagation giving rise to these circuits) are different.
CamperBob2•2mo ago
That's kind of the idea behind math: you can't bullshit your way through a math exam. Therefore, it is nonsensical to continue to insist that LLMs are incapable of genuine understanding. They understand math well enough to solve novel math problems without cheating, even if they can't tell you how they understand it. That part will presumably happen soon enough.
Edit: for values of "soon enough" equal to "right now": https://chatgpt.com/share/680fcdd0-d7ec-800b-b8f5-83ed8c0d0f... All the paper you cited proves is that if you ask a crappy model, you get a crappy answer.
Yizahi•2mo ago
You have redefined the words reason and understand to include a lot of states which most of the population would call neither reasoning nor understanding. Under those arbitrary definitions, yes, you are right. I just disagree that producing a correct math answer is in any way reasoning, especially given how LLMs function.
CamperBob2•2mo ago
Cool, we're done here.
psychoslave•2mo ago
By that standard we should have dropped many of the cutting-edge theories ever produced in science. It took about a century between some of Einstein's thought experiments and any possibility of challenging them experimentally.
And while Lucretius' idea of the atom was very different from the one we kept with the standard model, it actually put the concept on the table roughly two thousand years before it could be falsified experimentally.
It looks like you should seriously consider expanding your epistemological knowledge if you want to contribute more relevantly on the topic.
https://bigthink.com/surprising-science/einstein-is-right-ag...
emorning3•2mo ago
Thought experiments provide conclusions based on deductive or inductive reasoning from their starting assumptions.
Thought experiments are proofs.
That's science.
TeMPOraL•2mo ago
> Their words aren't "about" anything, they don't represent concepts or ideas or physical phenomena the same way the words and thoughts of a human do. The computer doesn't actually "understand Chinese" the way a human can.
That's very much unclear at this point. We don't fully understand how we relate words to concepts and meaning ourselves, but to the extent we do, LLMs are by far the closest implementation of those same ideas in a computer.
ryandv•2mo ago
We can hem and haw about whether or not there are others, but the particular conclusion I am drawing from is that computers lack "intentionality" regarding language, and indeed about anything at all. Symbol shunting, pencil pushing, and the mechanics of syntax are insufficient for the production of meaning and understanding.
That is, to oversimplify, the broad distinction drawn in Naur's article regarding the "programming as text manipulation" view vis-a-vis "programming as theory building."
> That's very much unclear at this point.
It's certainly a central point of contention.
dragonwriter•2mo ago
That's not why it was presented, of course; Searle aimed at proving something. But his use of it just illustrates which side of that divide he was on.
Yizahi•2mo ago
Hint - there are not only these two possibilities you have mentioned.
namaria•2mo ago
Nonsense. We know exactly how gravity works, with high precision. We don't know why it works.
sgt101•2mo ago
This is definitely true.
>but to the extent we do, LLMs are by far the closest implementation of those same ideas in a computer
Well, this is half true but meaningless. I mean, we don't understand, so LLMs are as good a bet as anything.
LLMs will confidently tell you that white wine is good with fish, but they have no experience of the taste of wine, or fish, or what it means for one to complement the other. Humans all know what it's like to have fluid in their mouths; they know the taste of food and the feel of the ground under their feet. LLMs have no experience; they exist crystallised and unchanging in an abstract eternal now, so they literally can't understand anything.
ben_w•2mo ago
That said:
> LLMs have no experience, they exist crystalised and unchanging in an abstract eternal now, so they literally can't understand anything.
Being crystallised and unchanging doesn't tell us either way whether they do or don't "understand" anything. If it did, then I could only be said to "understand" whatever I am actually experiencing at some moment, so it would not be allowed to say, for example, that I can understand "water in my mouth" because my memory of previous times I had water in my mouth seems to be like that.
They're definitely not "like us", but that's about all I can say with confidence, and it's a very vague statement.
stevenhuang•2mo ago
The belief that LLMs cannot reason may be justifiable for other reasons, just not for the reasons you've outlined.
sgt101•1mo ago
https://chatgpt.com/share/6814e827-81cc-8001-a75f-64ed6df5fc...
https://chatgpt.com/share/6814e7fb-f4d0-8001-a503-9c991df832...
If you think about how these things work as services, you can see that this makes sense. The model weights are several GB, so caching the model weights per customer is impractical. So if the forward pass does update the model, that update is instantly discarded; what's retained is the conversational text, and that's the bit that's uploaded to the model on each iteration for a new reply. There are hundreds of requests pinging through the data center where the models are hosted every second, and all of them use the same models.
But if you believe that there is a reasoning process taking place in the text then fair enough.
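A sketch of what that looks like from the client side, with call_model standing in for whichever API is used; the only "memory" is the growing message list that gets resent every turn, while the weights never change:

    history = []

    def call_model(messages):
        # Placeholder for a real chat-completion request; returns a canned reply here.
        return f"(reply generated from {len(messages)} messages of resent context)"

    def chat(user_message: str) -> str:
        history.append({"role": "user", "content": user_message})
        reply = call_model(history)   # the full conversation is uploaded on every request
        history.append({"role": "assistant", "content": reply})
        return reply                  # nothing in the model weights was updated

    print(chat("Summarise this function for me."))
    print(chat("Now refactor it."))   # "remembered" only because the history was resent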
dingnuts•2mo ago
Uh, they have? Are you saying they know how to reason? Because if so, why is it that when I give a state of the art model documentation lacking examples for a new library and ask it to write something, it cannot even begin to do that, even if the documentation is in the training data? A model that can reason should be able to understand the documentation and create novel examples. It cannot.
This happened to me just the other day. If the model can reason, examples of the language, which it has, and the expository documentation should have been sufficient.
Instead, the model repeatedly inserted bullshitted code in the style of the language I wanted, but with library calls and names based on a version of the library for another language.
This is evidence of reasoning ability? Claude Sonnet 3.7 and Gemini Pro both exhibited this behavior last week.
I think this technology is fundamentally the same as it has been since GPT2
slippybit•2mo ago
That's due to limitations imposed for "security". "Here's a new X, do Y with it" can result in holes bigger and more complex than anyone can currently handle "in time".
It's not about "abilities" with LLMs for now, but about functions that work within the range of edge cases, sometimes including them, some other times not.
You could still guide it to fulfill the task, though. It just cannot be allowed to do it on its own, but since just "forbidding" an LLM to do something is about as effective as doing that to a child with mischievous older brothers, the only ways to actually do it result in "bullshitted" code and "hallucinations".
If I understood the problem correctly, that is.
ryandv•2mo ago
To be as accurate as possible with respect to the primary source [0], the Chinese room thought experiment was devised as a refutation of "strong AI," i.e. the position that an appropriately programmed computer literally understands and has cognitive states.
Searle's position? [0] https://home.csulb.edu/~cwallis/382/readings/482/searle.mind...
slippybit•2mo ago
Happens to people all the time :) ... especially if they don't have a concept of theories and hypotheses.
People are dumb and uneducated only until they aren't anymore, which is, even in the worst cases, no more than a decade of effort put in over time. In fact, we don't even know how fast neurogenesis and/or cognitive abilities might increase when a previously dense person reaches or "breaks through" a certain plateau. I'm sure there is research, but this is not something a satisfyingly precise answer can be formulated for.
If I formulate a new hypothesis, the LLM can tell me, "nope, you are the only idiot believing this path is worth pursuing". And if I go ahead, the LLM can tell me: "that's not how this usually works, you know", "professionals do it this way", "this is not a proof", "this is not a logical link", "this is nonsense but I commend your creativity!", all the way until the actual aha-moment when everything fits together and we have an actual working theory ... in theory.
We can then analyze the "knowledge graph" in 4D, and the LLM could learn a theory of what it's like to have a potential theory, even though there is absolutely nothing that supports the hypothesis or its constituent links at the moment of "conception".
Stay put, it will happen.
lo_zamoyski•2mo ago
I think you misunderstand the Chinese Room argument [0]. It is exactly about how a mechanical process can produce results without having to reason.
[0] https://plato.stanford.edu/entries/chinese-room/
musicale•2mo ago
On the other hand I am reminded of Nilsson's rebuttal:
> For all I know, Searle may only be behaving as if he were thinking deeply about these matters. But, even though I disagree with him, his simulation is pretty good, so I’m willing to credit him with real thought.
jimbokun•2mo ago
Does a single neuron "understand" Chinese? 10 neurons? 100? 1 million?
If no individual neuron or small group of neurons understand Chinese, how can you say any brain made of neurons understands Chinese?
ryandv•2mo ago
But it does - the thought experiment continues by supposing that I gave a human those lookup tables and instructions on how to use them, instead of having the computer run the procedure. The human doesn't understand the foreign language either, not in the same way a native speaker does.
The point is that no formal procedure or algorithm is sufficient for such a system to have understanding. Even if you memorized all the lookup tables and instructions and executed this procedure entirely in your head, you would still lack understanding.
> Does a single neuron "understand" Chinese? 10 neurons? 100? 1 million?
This sounds like a sorites paradox [0]. I don't know how to resolve this, other than to observe that our notions of "understanding" and "thought" and "intelligence" are ill-defined and more heuristic approximations than terms with a precise meaning; hence the tendency of the field of computer science to use thought experiments like Turing's imitation game or Searle's Chinese room as proxies for assessing intelligence, in lieu of being able to treat these terms and ideas more rigorously.
[0] https://plato.stanford.edu/entries/sorites-paradox/
im3w1l•2mo ago
It's important not to get caught up in a discussion about whether B or B' is the proper definition, but instead see that it's the inconsistency that is the issue.
LLMs build an internal representation that lets them efficiently and mostly successfully manipulate source code. Whether that internal representation satisfies your criteria for a theory doesn't change that fact. What does matter to the highest degree, however, is where they succeed and where they fail, and how the representations and computing can improve the success rate and capabilities.
ryandv•2mo ago
Thus, if an LLM lacks understanding (Searle), then they don't have a theory either.
> LLMs build an internal representation that lets them efficiently and mostly successfully manipulate source code. Whether that internal representation satisfies your criteria for a theory doesn't change that fact.
The entire point of Naur's paper is that the activity of programming, of software engineering, is not just "manipulating source code." It is, rather, building a theory of the software system (which implies an understanding of it), in a way that an LLM or an AI cannot, as posited by Searle.
n4r9•2mo ago
Leaving aside what is actually meant by "theory" and "understanding". Could it not be argued that eventually LLMs will simulate understanding well enough that - for all intents and purposes - they might as well be said to have a theory?
The parallel I've got in my head is the travelling salesman problem. Yes, it's NP-Hard, which means we are unlikely to ever get a polynomial-time algorithm to solve it. But that doesn't stop us solving TSP problems near-optimally at an industrial scales.
Similarly, although LLMs may not literally have a theory, they could become powerful enough that the edge cases in which a theory is really needed are infinitesimally unlikely.
namaria•2mo ago
No, see, this is the problem right here. Everything in this discussion hinges on LLMs behavior. While they are capable of rendering text that looks like it was produced by reasoning from the input, they also often are incapable of that.
LLMs can be used by people who reason about the input and output. If and only if someone can show that LLMs can, without human intervention, go from natural language description to fully looping through the process and building and maintaining the code, that argument could be made.
The "LLM-as-AI" hinges entirely on their propensity to degenerate into nonsensical output being worked out. As long as that remains, LLMs will stay firmly in the camp of being usable to transform some inputs into outputs under supervision and that is no evidence of ability to reason. So the whole conversation devolves into people pointing out that they still descent into nonsense if left to their own devices, and the "LLM-as-AI" people saying "but when they don't..." as if it can be taken for granted that it is at all possible to get there.
Until that happens, using LLMs to generate code will remain a gimmick for using natural language to search for common patterns in popular programming languages.
psychoslave•2mo ago
That burden of proof is on you, since you are presumably human and you are challenging the need of humans to have more than a mere appearance of having a theory when they claim to have one.
Note that even when the only theoretical assumption we go with is that we will have a good laugh watching other people go crazy after random bullshit thrown at them, we still have a theory.
jimbokun•2mo ago
> Theories are developed by doing the work and LLMs do not do the work. They ingest the output of work.
And this is certainly a true statement about how LLMs are constructed. Maybe this latently induces in the LLM something very similar to what humans do when writing programs.
But another possibility is that it's similar to the Brain Teasers that were popular for a long time in programming interviews. The idea was that if the interviewee could use logic to solve riddles, they were probably also likely to be good at writing programs.
In reality, it was mostly a test of whether the interviewee had reviewed all the popular riddles commonly asked in these interviews. If they had, they could also produce a realistic chain of logic to simulate the process of solving the riddle from first principles. But if that same interviewee was given a riddle not similar to one they had previously reviewed, they probably wouldn't do nearly as well in solving it.
It's very likely that LLMs are like those interviewees who crammed a lot of examples, again due to how LLMs are trained. They can reproduce programs similar to ones in their training set. They can even produce explanations for their "reasoning" based on examples they've seen of explanations of why a program was written in one way instead of another. But that is a very different kind of model than the one a person builds up writing a program from scratch over a long period of time.
Having said all this, I'm not sure what experiments you would run to determine if the LLM is using one approach vs another.