This is completely idiotic. Do these people actually believe that something can't be actual thought just because it can be described by math?
Whatever it is that the brain actually does in the real, physical world, that something produces the cogito in "cogito, ergo sum", and I doubt you can get it just by describing what all the subatomic particles are doing, any more than a computer or pen-and-paper simulated hurricane can knock your house down, no matter how perfectly simulated.
A pen-and-paper simulation of a brain would also be "a thing happening", as you put it. You have to explain what the magical ingredient is that makes the brain's computations impossible to replicate.
You could connect your brain simulation to an actual body, and you'd be unable to tell it apart from a regular human unless you cracked it open.
I'm not. You might want me to be, but I'm very, very much not.
Of course a GPU involves things happening. No amount of using it to describe a brain operating gets you an operating brain, though. It's not doing what a brain does. It's describing it.
(I think this is actually all somewhat tangential to whether LLMs "can think" or whatever. But the "well, of course they might think, because if we could perfectly describe an operating brain, that would also be thinking" line of argument comes up often, and I think it's about as wrong-headed as a thing can possibly be, a kind of deep "confusing the map for the territory" error; see also comments floating around this thread offhandedly claiming that the brain "is just physics". Like, what? That's putting the cart before the horse! No! Dead wrong!)
An arbitrarily-perfect simulation of a burning candle will never, ever melt wax.
An LLM is always a description. An LLM running on a computer is no different from the same description being worked through on paper (just much faster).
That simulated candle is perfectly melting wax in its own simulation. Duh, it won't melt any in ours, because our arbitrary notions of "real" wax are disconnected between the two simulations.
If we don't think the candle in a simulated universe is a "real candle", why do we consider the intelligence in a simulated universe possibly "real intelligence"?
Being a functionalist ( https://en.wikipedia.org/wiki/Functionalism_(philosophy_of_m... ) myself, I don't know the answer off the top of my head.
A simulation of a tree growing (say) is a lot more like the idea of love than it is... a real tree growing. Making the simulation more accurate changes that not a bit.
It might if the simulation includes humans observing the candle.
Thanks for stating your views clearly. I have some questions to try and understand them better:
Would you say you're sure that you aren't in a simulation while acknowledging that a simulated version of you would say the same?
What do you think happens to someone whose neurons get replaced by small computers one by one (if you're happy to assume for the sake of argument that such a thing is possible without changing the person's behavior)?
That's an assumption, though. A plausible assumption, but still an assumption.
We know you can execute an LLM with pen and paper, because people built them and they're understood well enough that we could list the calculations you'd need to do. We don't know enough about the human brain to create a similar list, so I don't think you can reasonably make a stronger statement than "you could probably simulate..." without getting ahead of yourself.
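To make the "list the calculations" point concrete, here's a minimal, hypothetical sketch (plain Python/NumPy, toy numbers, not any particular model's implementation) of a single attention head, the core operation a transformer-based LLM repeats over and over. Every step bottoms out in multiplications, additions, and exponentiations you could, in principle, carry out by hand:

    # Toy, hypothetical sketch -- not any real model's code.
    # One attention "head": the kind of calculation a transformer repeats
    # billions of times, all of it ordinary arithmetic.
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))  # subtract max for numerical stability
        return e / e.sum(axis=-1, keepdims=True)

    def attention(Q, K, V):
        scores = Q @ K.T / np.sqrt(K.shape[-1])  # how strongly each position attends to the others
        return softmax(scores) @ V               # weighted average of the value vectors

    rng = np.random.default_rng(0)
    Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))  # 3 toy "tokens", 4-dimensional vectors
    print(attention(Q, K, V))  # just a 3x4 grid of ordinary numbers

A real model stacks enormous numbers of layers like this (plus equally plain feed-forward arithmetic), which is exactly why the pen-and-paper version is possible in principle and hopeless in practice.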
It's been kinda discussed to oblivion over the last century; it's interesting that people don't seem to realize there's existing literature and just repeat the same arguments (not saying anyone is wrong).
Yes, or what about leprechauns?
The opinions are exactly the same as the ones about LLMs.
The argument that was actually made was "LLMs do not think".
B: But Y would also imply Z
C: A was never arguing for Z! This is a strawman!
Would you mind expanding on this? At a base read, it seems you're implying magic exists.
"Can not be measured", probably not. "We don't know how to measure", almost certainly.
I am capable of belief, and I've seen no evidence that the computer is. It's also possible that I'm the only person that is conscious. It's even possible that you are!
Connect your pen-and-paper operator to a brainless human body, and you'd have something indistinguishable from a regular living human.
[0] https://en.wikipedia.org/wiki/Functionalism_%28philosophy_of...
You can replicate the entire universe with pen and paper (or a bunch of rocks). It would take an unimaginably long time, and we haven't discovered all the calculations you'd need to do yet, but presumably they exist and this could be done.
Does that actually make a universe? I don't know!
The comic is meant to be a joke, I think, but I find myself thinking about it all the time!!!
Very much like this effect https://www.reddit.com/r/opticalillusions/comments/1cedtcp/s... . We shouldn't hide complexity under a truth value.
If you're asking for things you can't easily verify you're barking up the wrong tree.
By every scientific measure we have, the answer is no. It's just electrical current taking the path of least resistance through connected neurons, mixed with cell death.
The fact that a human brain peaks at an IQ of around 200 is fascinating. Can the scale even go higher? It would seem not: since nothing has achieved a higher score, it must not exist.
(Sneaking a bit of belief in here: to me, "substrate independence" is a more extreme position than the idea that a system could be made which is intelligent but not conscious, hence I find it implausible.)
- ragebait them by saying AIs don’t think
- …
The reason I say this is that an LLM is not a complete, self-contained thing if you want to compare it to a human being. It is a building block. Your brain thinks. Your prefrontal cortex, however, is not a complete system, and if you somehow managed to extract it and wire it up to a serial terminal, I suspect you'd be pretty disappointed in what it would be capable of on its own.
I want to be clear that I am not arguing that once we hook up sensory inputs and motion outputs, as well as motivations, fears, anxieties, desires, pain and pleasure centers, memory systems, a sense of time, balance, fatigue, etc., to an LLM, we would get a thinking, feeling, conscious being. I suspect it would take something more sophisticated than an LLM. But my point is that even if an LLM were that building block, I don't think the question of whether it is capable of thought is the right question.
> Large language models (LLMs) can be dishonest when reporting on their actions and beliefs -- for example, they may overstate their confidence in factual claims or cover up evidence of covert actions
Still, the sales pitch has worked to unlock huge liquidity for him so there’s that.
Still, making predictions is a big part of what brains do, though not the only thing. Someone wise said that LLM intelligence is a new kind of intelligence, much as animal intelligence is different from ours but is still intelligence; it just needs to be characterized so we can understand the differences.
The AI companies themselves are the ones drawing the parallels to a human being. Look at how any of these LLM products are marketed and described.
causal•4h ago
jvanderbot•4h ago
nh23423fefe•3h ago
d-lisp•3h ago
edit: Thinking is undefined, and statements about the undefined cannot be verified.
ux266478•3h ago
random9749832•3h ago
nutjob2•3h ago
random9749832•3h ago
gowld•1h ago
The "hair-splitting" underlies the whole GenAI debate.
CamperBob2•19m ago
It ties into another aspect of these perennial threads, where it is somehow OK for humans to engage in deluded or hallucinatory thought, but when an AI model does it, it proves they don't "think."
d-lisp•3h ago
ablob•3h ago
d-lisp•3h ago
If thinking is definable, it is wrong that all statements about it are unverifiable (i.e. there are statements about it that are verifiable.)
Well, basic shit.
terminalshort•3h ago
jvanderbot•3h ago
Unicorns are not bound by the laws of physics - because they do not exist.
cwmoore•15m ago
wizzwizz4•14m ago
gowld•1h ago
Is it only humans that have this need? That makes the need special, so humans are special in the universe.
terminalshort•20m ago
ben_w•3h ago
There are many definitions of "thinking".
AI and brains can do some of them; AI and brains definitely, provably cannot do others; some are untestable at present; and nobody really knows enough about what human brains do to be able to tell if or when some existing or future AI can do whatever is needed for the stuff we find special about ourselves.
A lot of people use different definitions, and respond to anyone pointing this out by denying the issue and claiming their own definition is the only sensible one and "obviously" everyone else (who isn't a weird pedant) uses it.
jvanderbot•3h ago
"Thinking" is not actually defined in any of the parent comments or TFA. Like, literally no statements are made about what is being tested.
So, if we had that we could actually discuss it. Otherwise it's just opinions about what a person believes thinking is, combined with what LLMs are doing + what the person believes they themselves do + what they believe others do. It's entirely subjective with very low SNR b/c of those confounding factors.
BobaFloutist•2h ago
_alternator_•52m ago
BobaFloutist•9m ago
ux266478•4h ago
Overwhelmingly, I just don't think the majority of human beings have the mental toolset to work with ambiguous philosophical contexts. They'll still try, though, and what you get out of that is a 4th-order Baudrillardian simulation of reason.
qsort•3h ago
gowld•1h ago
> finite context windows
like a human has
> or the fact that the model is "frozen" and stateless,
much like a human adult. Models get updated less frequently than humans do, but AI systems can fetch new information and store it in their context.
> or the idea that you can transfer conversations between models are trivial
because computers are better-organized than humanity.
isoprophlex•1h ago
I do hope you're able to remember what you had for lunch without incessantly repeating it to keep it in your context window
whoknowsidont•1h ago
I can restart a conversation with an LLM 15 days later and the state is exactly as it was.
Can't do that with a human.
The idea that humans have a longer, more stable context window than LLMs CAN be true, and is even LIKELY to be true for certain activities, but please, let's be honest about this.
If you talk with someone for an hour about a technical topic, I would guesstimate that 90% of humans would immediately start to lose track of details in about 10 minutes. So they write things down, or they mentally repeat things to themselves they know or have recognized they keep forgetting.
I know this because it's happened continually in tech companies decade after decade.
LLMs have already passed the Turing test. They continue to pass it. They fool and outsmart people day after day.
I'm no fan of the hype AI is receiving, especially around overstating its impact in technical domains, but pretending that LLMs can't or don't consistently perform better than most human adults on a variety of different activities is complete nonsense.
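To be concrete about why the conversation can be picked up 15 days later unchanged: in the usual stateless chat setup, a conversation's entire "memory" is just the transcript, resent in full on every turn. A rough sketch, with a hypothetical stand-in for the model call (not any vendor's actual API):

    import json

    def generate_reply(history):
        # Hypothetical stand-in for a model call; a real LLM would condition on `history`.
        return f"(reply based on {len(history)} prior messages)"

    history = [{"role": "user", "content": "Let's dig into that technical question."}]
    history.append({"role": "assistant", "content": generate_reply(history)})

    with open("conversation.json", "w") as f:
        json.dump(history, f)              # freeze the entire conversational state

    # ...15 days later...
    with open("conversation.json") as f:
        history = json.load(f)             # resume with the state exactly as it was
    history.append({"role": "user", "content": "Where were we?"})
    history.append({"role": "assistant", "content": generate_reply(history)})

Save the list, reload it whenever, and the model sees exactly what it saw before; there is nothing else to forget.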
mewpmewp2•56m ago
How would you say human short-term memory works, if not by repeated firing (similar to repeatedly putting the same tokens in over and over)?
NooneAtAll3•45m ago
I do hope you're able to remember what your browser tab was 5 tab switches ago without keeping track of it...
snickerbockers•3h ago
They're not equivalent at all, because the AI is by no means biological. "It's just maths" could maybe be applied to humans, but this is backed entirely by supposition and would ultimately just assume its own conclusion: that human brains work on the same underlying principles as AI because it is assumed that they're based on the same underlying principles as AI.
AlecSchueler•3h ago
That wasn't the assumption, though; it was only that human brains work by some "non-magical" electro-chemical process which could be described as a mechanism, whether that mechanism follows the same principles as AI or not.
ikrenji•3h ago
Mehvix•2h ago
gowld•1h ago
Mehvix•1h ago
if there's surely no algo to solve the halting problem, why would there be maths that describes consciousness?
josh-sematic•25m ago
Having read "I Am a Strange Loop", I do not believe Hofstadter indicates that the existence of Gödel's theorem precludes consciousness being realizable on a Turing machine. Rather, if I recall correctly, he presents that as a possible argument and then attempts to refute it.
On the other hand, Penrose is a prominent believer that humans' ability to understand Gödel's theorem indicates consciousness can't be realized on a Turing machine, but there's far from universal agreement on that point.
squidbeak•41m ago
_alternator_•40m ago
hnfong•3h ago
But I think most people get what GP means.
criddell•1h ago
_alternator_•43m ago
When you think in these terms, it becomes clear that LLMs can’t have certain types of experiences (eg see in color) but could have others.
A "weak" panpsychism approach would just stop at ruling out experience or qualia based on physical limitations. Yet I prefer the "strong" panpsychist theory that whatever is not forbidden is required, which begins to get really interesting (it would imply, for example, that an LLM actually experiences the interaction you have with it, in some way).
mcswell•3h ago
pegasus•3h ago
As for applying the word thinking to AI systems, it's already in common usage and this won't change. We don't have any other candidate words, and this one is the closest existing word for referencing a computational process which, one must admit, is in many ways (but definitely not in all ways) analogous to human thought.
observationist•3h ago
It's on those who want alternative explanations to demonstrate that even the slightest need for them exists. There is no scientific evidence suggesting that the operation of brains as computers, as information processors, as substrate-independent equivalents to Turing machines, is insufficient to account for any of the cognitive phenomena known across the entire domain of human knowledge.
We are brains in bone vats, connected to a wonderful and sophisticated sensorimotor platform, and our brains create the reality we experience by processing sensor data and constructing a simulation which we perceive as subjective experience.
The explanation we have is sufficient to the phenomenon. There's no need or benefit for searching for unnecessarily complicated alternative interpretations.
If you aren't satisfied with the explanation, it doesn't really matter - to quote one of Neil DeGrasse Tyson's best turns of phrase: "the universe is under no obligation to make sense to you"
If you can find evidence, any evidence whatsoever, and that evidence withstands scientific scrutiny, and it demands more than the explanation we currently have, then by all means, chase it down and find out more about how cognition works and expand our understanding of the universe. It simply doesn't look like we need anything more, in principle, to fully explain the nature of biological intelligence, and consciousness, and how brains work.
Mind as interdimensional radios, mystical souls and spirits, quantum tubules, none of that stuff has any basis in a ruthlessly rational and scientific review of the science of cognition.
That doesn't preclude souls and supernatural appearing phenomena or all manner of "other" things happening. There's simply no need to tie it in with cognition - neurotransmitters, biological networks, electrical activity, that's all you need.
jvanderbot•2h ago
johnsmith1840•41m ago
This is the point: we don't know the delta between brains and AI, so any assumption is equivalent to my statement.
CamperBob2•3h ago
The same arguments that appeared in 2015 inevitably get trotted out, almost verbatim, ten years later. It would be amusing on other sites, but it's just pathetic here.
Terr_•3h ago
CamperBob2•3h ago
... someone else points out that the same models that can't "think" are somehow turning in gold-level performance at international math and programming competitions, making Fields Medalists sit up and take notice, winning art competitions, composing music indistinguishable from human output, and making entire subreddits fail the Turing test.
Terr_•3h ago
CamperBob2•3h ago
Uh huh. Good luck getting Stockfish to do your math homework while Leela works on your next waifu.
LLMs play chess poorly. Chess engines do nothing else at all. That's kind of a big difference, wouldn't you say?
ben_w•3h ago
To their utility.
Not sure if it matters for the question of "thinking"; even if, for the debaters, "thinking" requires consciousness/qualia (and that varies), there's nothing more than guesses as to where that emerges from.
gowld•1h ago
Terr_•1h ago
For my original earlier reply, the main subtext would be: "Your complaint is ridiculously biased."
For the later reply about chess, perhaps: "You're asserting that tricking, amazing, or beating a human is a reliable sign of human-like intelligence. We already know that is untrue from decades of past experience."
CamperBob2•7m ago
I don't know who's asserting that (other than Alan Turing, I guess); certainly not me. Humans are, if anything, easier to fool than our current crude AI models are. Heck, ELIZA was enough to fool non-specialist humans.
In any case, nobody was "tricked" at the IMO. What happened there required legitimate reasoning abilities.
nh23423fefe•3h ago
CamperBob2•3h ago
nutjob2•3h ago
This is exactly the problem. Claims about AI are unfalsifiable, thus your various non-sequiturs about AI 'thinking'.
umanwizard•3h ago
There are people confidently claiming they can’t and then other people expressing skepticism at their confidence and/or trying to get them to nail down what they mean.
jayveeone•34m ago
gfdvgfffv•3h ago
superkuh•3h ago
bigfishrunning•1h ago
ablob•2h ago
Conversely, if the one asserting something doesn't want to define it, there is no useful conversation to be had (as in: "AI doesn't think, but I won't tell you what I mean by think").
PS: Asking someone to falsify their own assertion doesn't seem like a good strategy here.
PPS: Even if everything about the human brain can be emulated, that does not constitute progress for your argument, since now you'd have to assert that AI emulates the human brain perfectly before it is complete. There is no direct connection between "This AI does not think" and "The human brain can be fully emulated". Also, the difference between "does not" and "can not" is big enough here that mangling them together is inappropriate.
Tadpole9181•2h ago
Sometimes, because of what the consequences would otherwise be, the order gets reversed.
pegasus•2h ago
Personally, I'm ok with reusing the word "thinking", but there are dogmatic stances on both sides. For example, lots of people decreeing that biology, in the end, can't but reduce to maths, since "what else could it be". The truth is we don't actually know whether it is possible, for any conceivable computational system, to emulate all essential aspects of human thought. There are good arguments for this impossibility, like those presented by Roger Penrose in "The Emperor's New Mind" and "Shadows of the Mind".
CamperBob2•13m ago
For one thing, yes, they can, obviously -- when's the last time you checked? -- and for another, there are plenty of humans who seemingly cannot.
The only real difference is that with an LLM, when the context is lost, so is the learning. That will obviously need to be addressed at some point.
> that they can't perform simple mathematical operations without access to external help (via tool calling)
Yet you are fine with humans requiring a calculator to perform similar tasks? Many humans are worse at basic arithmetic than an unaided transformer network. And, tellingly, we make the same kinds of errors.
> or that they have to expend so much more energy to do their magic (and yes, to me they are a bit magical), which makes some wonder if what these models do is a form of refined brute-force search, rather than ideating.
Well, of course, all they are doing is searching and curve-fitting. To me, the magical thing is that they have shown us, more or less undeniably, that that is all we do. Questions that have been asked for thousands of years have now been answered: there's nothing special about the human brain, except for the ability to form, consolidate, and consult long-term memories.
nutjob2•3h ago
But they are two different things with overlapping qualities.
It's like MDMA and falling in love. They have many overlapping qualities, but no one would claim one is the other.
terminalshort•3h ago
__loam•1h ago
mbg721•1h ago
tracerbulletx•1h ago
whoknowsidont•1h ago
You'd think it would unlock certain concepts for this class of people, but ironically, they seem unable to digest the information and update their context.
lisbbb•55m ago
omnicognate•49m ago
sounds•1h ago
But the accompanying XY plot showed samples that overlapped or at least were ambiguous. I immediately lost a lot of my interest in their approach, because traffic lights by design are very clearly red, or green. There aren't mauve or taupe lights that the local populace laughs at and says, "yes, that's mostly red."
I like the idea of studying math by using ML examples. I'm guessing this is a first step and future education will have better examples to learn from.
cwmoore•18m ago
smallerize•8m ago