Gotta agree here. The brain is a chemical computer with a gazillion inputs that are stimulated in manifold ways by the world around it, and it is constantly changing states while you are alive; a computer is a digital processor that works with raw data and tends to be entirely static when no processing is happening. The two are vastly different entities that are similar in only the most abstract ways.
The history of the brain-computer equation is fascinating and incredibly shaky. Basically, a couple of cyberneticists posed a brain = computer analogy back in the 50s with wildly little justification, everyone just ran with it anyway, and very few people (Searle is one of those few) have actually challenged it.
And it's something that often happens whenever some phenomenon falls under scientific investigation: the brain gets analogized to mechanical force or hydraulics or electricity or quantum mechanics or whatever.
To be fair to Searle, I don't think he advanced this as an argument, but more as an illustration of his belief that thinking was indeed a physical process specific to brains.
¹https://home.csulb.edu/~cwallis/382/readings/482/searle.mind...
Or even more fundamentally, that physics captures all physical phenomena, which it doesn't. The methods of physics intentionally ignore certain aspects of reality: they focus on quantifiable and structural aspects while drawing on layers of abstraction, and it is easy to mistakenly attribute features of those abstractions to reality.
This depends entirely on how it's configured. Right now we've chosen to set up LLMs as verbally acute Skinner boxes, but there's no reason you can't set up a computer system to be processing input or doing self-maintenance (i.e. sleep) all the time.
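A minimal sketch of what such an always-on setup might look like, assuming nothing fancier than a plain event loop; every name here (handle, consolidate_memory, inbox) is a made-up placeholder, not any real framework:

```python
import queue

# Hypothetical always-on loop: process input when it arrives,
# fall back to self-maintenance ("sleep") when idle.
inbox = queue.Queue()

def handle(msg):
    print(f"processing: {msg}")

def consolidate_memory():
    # Stand-in for sleep-like self-maintenance
    # (compaction, consolidation, housekeeping).
    print("idle: running maintenance pass")

def run_forever():
    while True:
        try:
            msg = inbox.get(timeout=1.0)  # wait briefly for input
            handle(msg)
        except queue.Empty:
            consolidate_memory()  # no input: self-maintain instead
```

Nothing in that loop is ever static; whether that makes the system any more brain-like is of course the whole question.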
Appealing to the Turing test suggests a misunderstanding of Searle's arguments. It doesn't matter how well computational methods can simulate the appearance of intelligence. What matters is whether we are dealing with intelligence. Since semantics/intentionality is what is most essential to intelligence, and computation as defined by computer science is a purely abstract syntactic process, it follows that intelligence is not essentially computational.
> It's very close to the Chinese Room, which I had always dismissed as misleading.
Why is it misleading? And how would LLMs change anything? Nothing essential has changed. All LLMs introduce is scale.
In "both" (probably more, referencing the two most high profile - Eugene and the LLMs) successes, the interrogators consistently asked pointless questions that had no meaningful chance of providing compelling information - 'How's your day? Do you like psychology? etc' and the participants not only made no effort to make their humanity clear, but often were actively adversarial obviously intentionally answering illogically, inappropriately, or 'computery' to such simple questions. For instance here is dialog from a human in one of the tests:
----
[16:31:08] Judge: don't you thing the imitation game was more interesting before Turing got to it?
[16:32:03] Entity: I don't know. That was a long time ago.
[16:33:32] Judge: so you need to guess if I am male or female
[16:34:21] Entity: you have to be male or female
[16:34:34] Judge: or computer
----
And the tests are typically so time-constrained by woefully poor typing skills (is this the new normal in the smartphone gen?) that you tend to get anywhere from 1-5 interactions of just several words each. The above snip was a complete interaction, so you get 2 responses from a human trying to trick the judge into deciding he's a computer. And obviously a judge determining that the above was probably a computer says absolutely nothing about the quality of responses from the computer; instead it's some weird anti-Turing test where humans successfully act like a [bad] computer, ruining the entire point of the test.
The problem with any metric is that it often ends up being gamed to be beaten, and this is a perfect example of that. I suspect that in a true run of the Turing test we're still nowhere even remotely close to passing it.
https://www.theguardian.com/world/2025/oct/05/john-searle-ob...
His most famous argument:
The human running around inside the room, producing replies simply by looking up transformation rules in a huge rulebook, may produce perfectly fluent Chinese, but that human still doesn't know a lick of Chinese. Ergo (they claim) computers might simulate consciousness, but will never be conscious.
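A toy illustration of why the room is purely syntactic (the two-entry rulebook below is invented; a real one would be astronomically larger): the program matches symbol shapes, never meanings.

```python
# Invented toy rulebook: maps input symbol strings to output
# symbol strings. Nothing here represents what the symbols mean.
RULEBOOK = {
    "你好吗?": "我很好, 谢谢。",       # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "会, 一点点。",   # "Do you speak Chinese?" -> "Yes, a little."
}

def room(symbols: str) -> str:
    # Pure lookup: swap every string for an arbitrary token and the
    # program behaves identically, which is the point of the argument.
    return RULEBOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(room("你好吗?"))  # fluent-looking output, zero understanding
```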
But in the Searle room, the human is the equivalent of, say, ATP in the human brain. ATP powers my brain while I'm speaking English, but ATP doesn't know how to speak English, just as the human in the Searle room doesn't know how to speak Chinese.
Neither the man nor the room "understands" Chinese. It is the same for the computer and its software. Geoffrey Hinton has said "but the system understands Chinese." I don't think that's a true statement, because at no point is the "system" dealing with the semantic content of the input. It only operates algorithmically on the input, which is distinctly not what people do when they read something.
Language, when conveyed between conscious individuals, creates a shared model of the world. This can lead to visualizations, associations, emotions, and the creation of new memories, because the meaning is shared. This does not happen with mere syntactic manipulation. That was Searle's argument.
That's not at all clear!
> Language, when conveyed between conscious individuals, creates a shared model of the world. This can lead to visualizations, associations, emotions, and the creation of new memories, because the meaning is shared. This does not happen with mere syntactic manipulation. That was Searle's argument.
All of that is called into question by some LLM output. It's hard to understand how some of it could be produced without some emergent model of the world.
LLM output doesn't call that into question at all. Token production via a distance function over a high-dimensional vector representation of language tokens gets you a long way. It doesn't get you understanding.
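A toy sketch of what "token production through a distance function" means mechanically, with made-up vectors and a four-word vocabulary (real models learn their embeddings and add attention, softmax sampling, and much else on top):

```python
import numpy as np

# Invented embeddings: 4 tokens, 8 dimensions each.
vocab = ["cat", "dog", "sat", "mat"]
embeddings = np.random.default_rng(0).normal(size=(4, 8))

def next_token(context_vec: np.ndarray) -> str:
    # Cosine similarity between the context vector and every token
    # embedding; the "nearest" token in direction space wins.
    sims = embeddings @ context_vec / (
        np.linalg.norm(embeddings, axis=1) * np.linalg.norm(context_vec)
    )
    return vocab[int(np.argmax(sims))]

print(next_token(embeddings[0]))  # nearest to "cat"'s own vector: "cat"
```

The geometry produces plausible continuations; whether anything in it amounts to understanding is exactly what's in dispute.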
I'll take Penrose's notion that consciousness is not computation any day.
I know that it doesn't feel like I am doing anything particularly algorithmic when I communicate, but I am not the homunculus inside me shuffling papers around, so how would I know?
Maybe I should look up some of my other heroes and heretics while I have the chance. I mean, you don't need to cold e-mail them a challenge. Sometimes they're already known to be at events and such, after all!
I mean, I guess all arguments eventually boil down to something which is "obvious" to one person to mean A, and "obvious" to me to mean B.
https://www.insidehighered.com/quicktakes/2017/04/10/earlier...
It includes a letter that starts:
I am Jennifer Hudin, John Searle's secretary of 40 years. I am writing to tell you that John died last week on the 17th of September. The last two years of his life were hellish. His daughter-in-law, Andrea (Tom's wife), took him to Tampa in 2024 and put him in a nursing home from which he never returned. She emptied his house in Berkeley and put it on the rental market. And no one was allowed to contact John, even to send him a birthday card on his birthday.
It is for us, those who cared about John, deeply sad.
I'm surprised to see the NYT obituary published nearly a month after his death. I would have thought he'd be included in their stack of pre-written obituaries, meaning it could be updated and published within a day or two.

0. https://www.academia.edu/30805094/The_Success_and_Failure_of...
I do agree with him about AI though. Strange (cask)bedfellows.
https://en.wikipedia.org/wiki/John_Searle