In that construct, a computer program could never be conscious because it's a simulation: it doesn't have the constituent consciousness property.
I neither believe nor disbelieve the consciousness-as-a-property-of-matter part, but I do think programs can't be conscious because consciousness must sit outside of what they simulate.
Then again, our brains are also doing computation, and also seem to have consciousness. Are the two linked? Why or why not…
This is animism, right? It is a religious belief. Not really subject to testing.
A computer program is the result of electrical and mechanical interactions that manifest in macroscopically observable effects. So are we. Why, if all matter is conscious, should the one count but not the other?
My money is on mankind perpetually transforming the definition to ensure only our species can fit within it.
We've been doing that long enough with higher order animals anyway.
There's been a lot of reflection around the term speciesism (and anti-speciesism), and most people today stand for speciesism.
Interestingly, that reflection is close to the debate on racism and anti-racism (where most people settled on anti-racism, to the point there isn't much debate anymore), but race is only an informal classification that doesn't hold much meaning in biological terms, contrary to species.
I don't know how anyone who experiences consciousness could be confused about what it means to be conscious, or (in other threads, not this one) could argue that consciousness is "an illusion". (Consciousness is not the illusion, it's the audience!).
However, I don't see why you think an algorithm couldn't be conscious. Why do you think the processes that produce your own consciousness could not be computable?
Do you mean to say there are objective criteria for consciousness? Could you expand on that?
The consciousness-as-illusion question is a different focus: whether our experience of alertness, thought, and perception even has the temporal and causal structure we tend to assume. This problem has many layers.
One example is the visual system and the illusion of a constantly perceived visual field, which is really a synthesized memory of many smaller visual samples from the frequent saccades of our eyes. We don't see our own eye movements as they happen. We also don't usually see our retinal nerve blind spot, nor recognize the inherent asynchrony of some of our different senses. Our conscious experience fuses all this together, and well-known perceptual illusions and magic tricks generally exploit the gaps in this process.
But there are many other layers, such as full-blown hallucination, where the mind constructs sensory perceptions that do not match our physical stimuli. There are many more subtle layers in between. Delusional beliefs can be felt as "fact" in a way that suppresses internalization of contradictory perceptions.
More subtly, people often post-rationalize causal relationships between social experiences, emotional state, and actions in ways that are inaccurate. Psychologists use "cognitive distortion" as an overall concept for this fuzzy area where people's internal state biases the perceptions and beliefs they derive from physical stimuli.
And I'm not talking about spirituality, it could all be perfectly deterministic on some level. With that level being centuries or millennia or forever outside of our grasp.
You offer a pretty big statement without any backing whatsoever.
Lots of things can be imitated without understanding how they work.
Mankind was making fire for hundreds of thousands of years before knowing that it was the rapid oxidation of combustible materials.
Claiming that it wasn't fire because it was complicated to understand would be ridiculous.
I read an interesting book recently, "Determined", which argues that free will doesn't exist. It was more convincing than I expected. However, the chapters on chaos and quantum mechanics were a mess and made me skeptical of the rest of the book.
The better question is why couldn't a consciousness attach itself to (be the audience for) a computation. Since we really don't understand anything significant about it, questions like this are next to impossible to disprove. At the same time, since we've never seen anything except humans start talking about consciousness spontaneously*, it seems like a reasonable guess to me that LLMs/the machines running them are not in fact conscious, simply because of their dissimilarity and the lack of other evidence.
* I note LLMs did not do so spontaneously, they did so because they were trained to mimic human output which does so. Because we fully understand the deterministic process by which they started talking about consciousness (a series of mathematical operations), them doing so was an inevitability regardless of whether they are conscious, and as such it is not evidence for their consciousness.
How do you know it's not just an evaluation of a mathematical function?
A mathematical function is a set, possibly infinite, of pairs of abstract elements (commonly defined via sets) where no two pairs share the same first element. Nothing less, nothing more.
Computation is the act of determining the abstract output (second element in the pair) for a given abstract input (first element in the pair).
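Spelled out a little more formally (a minimal sketch in standard set-theoretic notation; the symbols A and B are just my labels for the domain and codomain, not anything from the thread):

    % A function f from A to B, viewed as a set of pairs with unique first elements:
    f \subseteq A \times B, \qquad
    \forall a \in A,\ \forall b, b' \in B:\ \bigl( (a,b) \in f \wedge (a,b') \in f \bigr) \Rightarrow b = b'
    % Computation: given an input a, determine the unique b with (a,b) \in f.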
Nothing in those definitions is capable of expressing the concept of having perceptions (consciousness). That's not an abstract thing.
This isn't to say the concrete thing doing the computation couldn't in principle be conscious, just that it doesn't definitionally make sense for the math itself to be conscious.
Do you think the universe is not computable?
If you think the universe is computable, and you think that you exist in the universe, and you think that you are conscious, don't you think it follows that consciousness can exist within mathematical structures?
Yes, definitionally not: the universe isn't an abstract object, let alone one in the shape of a function.
You might, in principle, be able to precisely predict the future of the universe given perfect information using a precise model of the universe. That model, a mathematical function, would be computable. It would be accurate to say that the model describes the universe, but not that the model is the universe.
The thing about mathematical structures is that they are concepts, not things, and I feel confident in saying that concepts aren't conscious.
If you had a perfectly accurate universe simulation, do you think the people inside the simulation would not be conscious?
If they're not conscious, it's not a perfectly accurate simulation.
And if it is possible to have a perfectly accurate simulation, then (like you said) all of the contents of the universe were "there" all along inside the giant mathematical structure. You don't need anyone to run the simulator!
All of the contents of the universe, the apparent flow of time, our thoughts and feelings, our consciousness, all lives inside this incomprehensibly large mathematical structure.
This is how I believe reality works. The universe exists inside mathematics the way 42 does. You don't need a calculator to show the number 42 in order for the number 42 to exist. Running the simulator can expose the contents of the universe to someone outside it, but everything on the inside is independent of the simulator.
You might ask "why this reality and not any other?" and I would say they all exist equally well, we just happen to notice this one because we're inside this one.
A consciousness experiences perceptions. If you don't, I won't be able to describe this to you; if you do, it should be clear what I mean by that.
We have no evidence that either a camera or a GPU executing an LLM experiences perception. Certainly they react to physical stimuli, but so does an atom; physical reaction is not the definition of experience I am referring to when I say perception. We also have no evidence that they do not, except for the lack of evidence to the contrary.
We have some reason to believe that other people do experience perception, in that they spontaneously describe experiencing things that are similar to our experiences, and it would be surprising for them to do that if they didn't also experience things*. When I say "we", I really mean "I", but I'm assuming that you have the same experience I do.
> What prevents any of this to be fully mathematically described?
There's nothing that says you can't, in principle, create an entirely accurate mathematical description of perception (in the experiencing and not the reacting sense), where you define that certain abstract variables correspond to certain perceptions and can entirely predict them. The model would still be just that: a model that predicts what perceptions occur, not the perceptions themselves, the same way mathematically describing a particle of hydrogen doesn't create a particle of hydrogen. The common concrete example is that mathematically describing what color someone perceives when looking at something, while basically possible, gives absolutely no insight into what that experience is like (apart from saying "it's similar to <this experience> had by the same consciousness").
* See my other comment in this thread for why this argument does not apply to GPUs running LLMs.
So, my own personal "P-Zombie" theory is not of mindless automatons who lack consciousness. It's just people who are philosophically naive. They live in blissful ignorance of the myriad deep questions and doubts that stem from philosophy of mind. To me, these people must be a bit like athletes who take their prowess for granted and don't actually think about physiology, anatomy, biology, metabolism, or medicine. They just abstract their whole experience into some overly broad concept, rather than appreciating the complex interplay of functions that have to be orchestrated to deliver the performance.
Though I went through university like many others here, I've always been somewhat of an autodidact with some idiosyncrasy to my worldview. The more I have absorbed from philosophy, cognitive science, computation, medicine, and the liberal arts, the less I've put the human mind on an abstract pedestal. It remains a topic full of wonder, but lately I am more amazed that it holds together at all than by the pinnacles of pure thought or experience it might be imagined to reach.
Over many decades, I have developed a deepening appreciation of the traditional cognitive science approach I first encountered in essays and lectures: empirical observation of pathology and correlated cognitive dysfunction. I've also accumulated more personal experience, watching friends and family go through ordeals of mind-altering drugs, mental illness with and without psychosis, dementia, and trauma. As a result, I can better appreciate the "illusory mind" argument. I recognize more ways in which our cognitive experience can fall apart when the constituent parts fall out of balance.
It seems too pointless to be human.
Would any of these ideas have been present had the system not been primed with the idea that it has them and needs to process them in the first place?
What makes us think that "processing emotion" is really such a magical and "only humans do it the right way" sorta thing? I think there's a very real conclusion where "no, AI is not as special as us yet" (esp around efficiency) but also "no, we are not doing anything so interesting either" (or rather, we are not special in the ways we think we are)
For example, there's a paper called "Chasing the Rainbow" [1] that posits that consciousness is just the subjective experience of being the comms protocol between internal [largely unconscious] neural states. It's just what the compulsion to share internal state between minds feels like, but it's not "the point"; it's an inert byproduct, like a rainbow. Maybe our compulsion to express or even process emotion serves no greater purpose, but is just the way we experience the compulsion of the more important thing: the collective search for interpolated beliefs that best model and predict the world and help our shared structure persist, done by exploring tensions in high-dimensional considerations we call emotions.
Which is to say: if AI is doing that with us, role-modelling resolution of tension or helping build or spread shared knowledge alongside us through that process... then as far as the universe cares, it's doing what we're doing, and toward the same ends. Its compulsion having the same origin as ours doesn't matter, so long as it's doing the work that is the reason the universe gave us the compulsion.
Sorry, new thought. Apologies if it's messy (or too casually dropping an unsettling perspective -- I rejected that paper for quite a while, because my brain couldn't integrate the nihilism of it).
[1] https://www.frontiersin.org/articles/10.3389/fpsyg.2017.0192...
To reduce a system to its inputs and outputs is fine if those are all that matter in a given context, but in doing so you may fail to understand its internal mechanics. Those matter if you're trying to really understand the system, no?
yes.
> To reduce a system to its inputs and outputs is fine if those are all that matter in a given context
we argue that this indeed is all that matters
> but in doing so you may fail to understand its internal mechanics
the internal mechanics are what we call "consciousness": it is the grouping of internal mechanics into one unified concept, but we don't care exactly what they are.
> Those matter if you're trying to really understand the system, no?
since we cannot directly observe consciousness, we are forced to concede that we will never really "understand" it outside of observing its effects.
In the same way that a Mechanical Turk human and a robot can both "play chess", a human and an LLM are both "conscious". That is, consciousness is like the ability to play chess, by some mechanism; the exact mechanism is irrelevant for the purposes of a yes/no answer on being conscious.
We now enter a discussion on how much these two consciousnesses differ.
Why? You are applying a definitive term ("never") to something that we might achieve in the future. We might observe consciousness in the future. Who knows? Consciousness is a known unknown. We know there is something, but we don't know how to observe it properly or how we could eventually copy it.
In the meantime, we are not copying consciousness; we have a shallow replication of its output. When cavemen replicated the fire that they observed as the output of lightning, did they master electricity?
But we do agree that it exists. Our direct experience tells us so.
> we are forced to concede that we will never really "understand" it outside of observing its effects.
Not necessarily. A gap in our ability to observe something does not imply that (a) we never will observe it or (b) what we don't know is not worth knowing.
Throughout history, persistent known-unknowns have pushed people to appeal directly to the supernatural, which short-circuits further discovery when they stop there. But the real fallacy is saying "we don't know, and it doesn't matter". That's a far more direct short-circuit to gaining knowledge. And in both cases, a lack of curiosity is an underlying problem.
But there are related, slightly better (more immediately testable) ideas in the same space, and one such is a "behavioral zombie": behaviorally indistinguishable from a human.
For example: The screen I am currently looking at contains a perfect reproduction of your words. I have no reason to think the screen is conscious. Not from text, not from video of a human doing human things.
Before LLMs, I had every reason to assume that the generator of such words would be conscious. Before the image, sound, and video generators, same for pictures, voices, and video.
Now? Now I don't know — not in the sense that LLMs do operate on this forum and (sometimes) make decent points so you might be one, but in the sense that I don't know if LLMs do or don't have whatever the ill-defined thing is that means I have an experience of myself tapping this screen as I reply.
I don't expect GenAI to be conscious (our brains do a lot even without consciousness), but I can't rule the possibility out either.
But I can't use the behaviour of an LLM to answer this question, because one thing is absolutely certain: they were trained to roleplay, and are very good at it.
If the unconscious brain is damaged it can impact the data the seat of consciousness receives or reduce how much control consciousness has on the body, depending on if the damage is on the input or output side.
I'm pretty convinced there's something special about the seat of consciousness. An AI processing the world will do a lot of math and produce a coherent result (much like the unconscious brain will), but it has no seat of consciousness to allow it to "experience" rather than just manipulate the data it's receiving. We can artificially produce rainbows, but don't know if we can create a system that can experience the world in the same way we do.
This theory's pretty hand-wavy and probably easy to contradict, but as long as we don't understand most of the brain I'm happy to let what we don't know fill in the gaps. The seat of consciousness is a nice fixion [1] which allows for a non-deterministic universe, religion, emotion, etc. and I'm happy to be optimistic about it.
I basically don't believe there's anything more to sentience than a set of capabilities, or at the very least there's nothing beyond this that I should give weight to in my beliefs.
Another comment mentioned philosophical zombies; another way to put it is that I don't believe in philosophical zombies.
But I don't have evidence to not believe in philosophical zombies apart from people displaying certain capabilities that I can observe.
Therefore I should not require further evidence to believe in the sentience of LLMs.
Oh, I absolutely don't think only humans can have or process emotions.
However, these LLM systems are just mathematically sophisticated text prediction tools.
Could complex emotion like existential angst over the nature of one's own interactions with a diary exist in a non-human? I have no doubt.
Are the systems we are toying with today not merely producing compelling text with their full processing capacity, but also having a rich internal experience and a realized sense of self?
That seems incredibly far-fetched, and I'm saying that as someone who is optimistic about how far AI capabilities will grow in the future.
It's a very crude and naïve inversion of "I think, therefore I am". The thing talks like it's thinking, so we can't falsify the claim that it's a conscious entity.
I doubt we'll be rid of this type of thinking for a very long time.
In the case of the LLM, you could feed back or not feed back the journal entries, or even inject artificial entries… it isn't really an internal state, right? It is just part of the prompt.
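To make that concrete, here is a minimal sketch; the strings and the llm.generate call are hypothetical, just to show where the "journal" actually lives:

    # The "journal" has no privileged status inside the model: it is only text
    # spliced into the next prompt, and the operator can omit or fabricate entries.
    journal_entries = [
        "Entry 1: I felt uneasy about yesterday's conversation.",  # fed back from a prior turn
        "Entry 2: Today I am certain I am a teapot.",              # or injected wholesale
    ]

    prompt = (
        "You keep a private journal. Your previous entries:\n"
        + "\n".join(journal_entries)
        + "\nWrite today's entry."
    )
    # response = llm.generate(prompt)  # hypothetical call; the entire "state" lives in `prompt`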
Information can be duplicated easily. So imagine that a billionaire has a child. That child is one person. The billionaire cannot clone 100,000 copies of that child in an hour and make an army that can lead an insurrection. And what if we go the other way: what if a billionaire creates an AI of himself and is then able to have this "AI" legally stand in for himself. Now he has legal immortality, because this thing has property rights.
All this is a civil war waiting to happen. It’s the gateway to despotism on an unimaginable scale.
We don’t need to believe that humans are special except in the same way that gold is special: gold is rare and very very hard to synthesize. If the color of gold were to be treated as legally the same thing as physical gold, then the value of gold would plummet to nothing.
> Would any of these ideas have been present had the system not been primed...
I would like to know of a meaningful human action that can't be framed this way.
I haven’t been able to find an intellectually honest reason to rule out a kind of fleeting sentience for LLMs and potentially persistent sentience for language-behavioral models in robotic systems.
Don't get me wrong, they are -just- looking up the next most likely token… but since the data that they are using to do so seems to capture at least a simulacrum of human consciousness, we end up in a situation where we are left to judge what a thing is by its effects. (Because that is also the only way we have of describing what something is.)
So if we aren’t just going to make claims we can’t substantiate, we’re stuck with that.
The question is: Is thinking about emotion the same thing as feeling?
This framing actually un-stucks us to some degree.
If we examine neuron activations in LLMs and can find regions that are active when the model discusses its own emotional processing, distinct from the regions for merely talking about emotion in general, and those same regions are also active when the model performs tasks it claims are emotional while not actively talking about them, then it would be far more convincing that something deeper than mere text prediction is happening.
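For concreteness, a rough sketch of the kind of probe that suggests. The model name, the prompts, and the choice of comparing per-layer mean hidden states are all my own illustrative assumptions, not an established methodology:

    import torch
    from transformers import AutoModel, AutoTokenizer

    model_name = "gpt2"  # stand-in; any model with accessible hidden states would do
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name, output_hidden_states=True)
    model.eval()

    def mean_activations(prompts):
        """Average hidden-state vector per layer over a set of prompts."""
        per_layer = None
        for p in prompts:
            inputs = tok(p, return_tensors="pt")
            with torch.no_grad():
                out = model(**inputs)
            # out.hidden_states: one (1, seq_len, dim) tensor per layer
            layer_means = [h.mean(dim=1).squeeze(0) for h in out.hidden_states]
            if per_layer is None:
                per_layer = layer_means
            else:
                per_layer = [a + b for a, b in zip(per_layer, layer_means)]
        return [v / len(prompts) for v in per_layer]

    self_talk = mean_activations(["How do you feel about your own journal entries?"])
    general_talk = mean_activations(["Describe what grief feels like for most people."])

    # Layers where the two conditions diverge most are candidate "self-referential" regions.
    for i, (a, b) in enumerate(zip(self_talk, general_talk)):
        sim = torch.nn.functional.cosine_similarity(a, b, dim=0).item()
        print(f"layer {i}: cosine similarity {sim:.3f}")

The real work would of course be in controlling for surface wording and then checking whether the same regions light up on "emotional tasks" with no emotion talk at all, as described above.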
We just don’t have a factual basis for claiming consciousness that really transcends “I think, therefore I am”.
As for the simplistic mechanism, I agree that token prediction doesn’t constitute consciousness, in the same way that a Turing machine does not equal a web browser.
Both require software to become something.
For LLMs, that software is the weight matrices created in the training process. It is a very complex algorithm that encodes a substantial subset of human culture.
Data and algorithms are interchangeable: any algorithm can be performed via a pure lookup table, and any lookup table can be extrapolated from a pure algorithm. Data == computation. For LLMs, the algorithm is contained in an n-dimensional lookup table of vectors.
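A toy illustration of that equivalence (my own, for a function with a small finite domain):

    def square(n: int) -> int:                   # the "pure algorithm" form
        return n * n

    table = {n: square(n) for n in range(256)}   # the same function as "pure data"

    def square_from_table(n: int) -> int:        # lookup reproduces the algorithm exactly
        return table[n]

    assert all(square(n) == square_from_table(n) for n in range(256))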
Having a fundamentally distinct mode of computational representation does not rule out equivalence.
Uncomfortable thoughts, but it’s where the logic leads.
We have a long way to go to explore this, and I have no doubt that the exploration will turn up a lot of surprises.
We've finally made a useful firecracker in the category of natural language processing thanks to LLMs, but it's still only text processing. Our brains do a lot else besides that in service of our rich internal experience.
> Error code: SSL_ERROR_ACCESS_DENIED_ALERT
from Firefox, which I don't recall ever seeing before.
Maybe it’s just another example of LLM awareness deficiencies. Or it secretly was “aware”, but the reinforcement learning/finetuning is such that playing along with the user’s conception is the preferred behavior in that case.
It’s not that it’s untruthful, although it is.
The problem is that this sort of performance is part of a cultural process that leads to mass dehumanization of actual humans. That lubricates any atrocity you can think of.
Casually treating these tools as creatures will lead many to want to elevate them at the expense of real people. Real people will seem more abstract and scary than AI to those fools.