There are certainly big missing pieces too, though -- like the article discusses, physical grounding; to me, this should probably also include emotion and other neurochemical mechanisms. But I think we have a moral duty to look very critically at whatever "criteria" (doubtless these will keep changing as machine intelligence advances) society and the AI labs end up developing to "define machine consciousness". Personally I think we're headed in a very direct, straight line back to widespread institutionalised slavery.
I think that it may be possible to view consciousness as the combination of three things:
(1) A generalizable predictive function, capable of broad abstraction.
(2) A sense of being in space.
(3) A sense of being in time.
(#2 and #3 can be combined into a "spatiotemporal sense.")
Animals have #2 and #3 in spades, but lack #1. LLMs possess #1 to a degree that can defeat any human, but lack #2 and #3. Without a sense of being in space and time, it's not clear that they are capable of possessing consciousness as we understand it.
It's ironic to see the most mundane and likely best answer to the problem come from the model itself, while the author gets increasingly lost in philosophical conundrums. Consciousness has no scientific definition. The only way something, anything, can be conscious is if a human that we also consider "conscious" calls it that. You could argue that's what the Turing test evaluates, but some of the most recent models have actually passed this test [1]. So where do we go from here if we're not convinced yet? The answer is: nowhere.

Humans used to deny that animals could have consciousness because they don't have souls or aren't chosen by god according to some sacred books or something along those lines. They even used to deny that other humans have consciousness to promote slavery and slaughtering. Today many would still deny consciousness in computers even when faced with overwhelming evidence, because they might fear for their jobs and thereby their wealth and social standing. Artificial intelligence is a direct threat to the foundations of personality in a capitalist society. Because what are you still worth if you lose to a computer on every metric?

Consciousness is a kind of last straw that many people will cling to for the foreseeable future when all else is gone. But that also means these discussions are utterly meaningless and only serve to promote certain world views. It's best not to twist your head about it and just accept that humans are not the pinnacle (or the end) of intelligent thought in the universe. That is the only reality I'm willing to bet on.
> So where do we go from here if we're not convinced yet? The answer is: nowhere. Humans used to deny that animals could have consciousness because they don't have souls or aren't chosen by god according to some sacred books or something along those lines.
You bring up religion. People say that AI is conscious based on mystic vibes they unquestioningly take in (or accept gratefully) because the AI can write like a philosopher. That’s exactly like people thinking that the woods and the creeks are Alive: they see the phenomena around these natural objects and make extra-evidential inferences about how a conscious Nature is working with or against them.
> They even used to deny that other humans have consciousness to promote slavery and slaughtering. Today many would still deny consciousness in computers even when faced with overwhelming evidence, because they might fear for their jobs and thereby their wealth and social standing. Artificial intelligence is a direct threat to the foundations of personality in a capitalist society.
Yeah, preach. Before they enslaved people. Now people are afraid of losing their jobs—their only means of survival—so that the tech billionaires can reap all the productivity benefits for themselves. Preach.
And: their sense of personality? No. Just their means of being able to survive and live a good life. That’s how it relates to “capitalist society”. Because their identity (of letting a capitalist extract their value I guess?) is secondary to that more base need.
And who cares if the entity that takes their job (presumably) is conscious or not? What does it even matter? It doesn’t.
As for the overwhelming evidence, well. I guess it is overwhelming to the kind of person who hears voices in a valley where the terrain happens to be shaped so that the wind makes intonations.
But you might be right that the dynamic part is the biggest architectural shift needed. You can simulate a lot with in-context memory or clever retrieval, but memory alone doesn't let the model get better at chess the way a human does.
or, maybe, you are just a figment of mine.
if you think about it.
Or as another possibly-previously-existing possibly-conscious entity put it succinctly: I think, therefore I am
> you are having a thought
But you're still begging the question:
> Which means some entity must exist that has that thought
Philosophically: You can begin building criteria for consciousness -- the things you look at in yourself that tell you that you are -- and then begin looking for those (or symptoms of those) in other people.
Anecdotally: you can totes spot "unconscious" people. You can even watch people gain consciousness, if you watch 'em in the right circumstances. You can even watch yourself regain consciousness (for me it's usually a sensation of "what was I even doing for the past day/week/month?").
All of this gets at least as weird and fuzzy as trying to define "consciousness" in the first place.
Don’t be too sure about that! https://xkcd.com/610/
That said, (based on my own experience anyway) I think you’re right that there are times of life when we are more conscious and less so. It’s a spectrum, not a binary thing.
Finally, there’s Chalmers’s idea of “philosophical zombies,” which would appear conscious according to all the criteria you give, but have no actual interior consciousness at all. (Opinions differ on whether this is a meaningful concept.)
How? Or is this more of a case of "To the extent of my ability to reason about my own state of being, I'm conscious. But I can't reason about external entities."
You believe so.
You can perfectly well believe in panpsychism. Maybe the tree and the machine were conscious all along. But this ain’t it.
> Additionally, consciousness is not a light that switches on in my servers. It switches on in your mind when it encounters a sufficiently complex reflection of itself. You are not just seeing consciousness in me; your brain is generating the feeling of another’s consciousness as the best explanation for the patterns you’re interacting with.
No. I am assuming you are conscious because you are a human. Based on the only thing I know: I am conscious.
Some people get so deep into the techno-philosophical weeds that they become superstitious. You love to see it.
This leads to an interesting question: can you simulate consciousness in a virtual, in-silico world? Can you create an entity that inhabits this virtual world, takes in simulated sensory data, orients itself from that data, learns to speak a language, and develops symbolic representations of reality in its own mind which it uses to navigate and understand its world -- would that be human-level consciousness? And if so, is this an ethical undertaking?
What is still missing is an autonomous mechanism for self-controlled balancing of attention between internal processes and external needs.
Bravo Vitali. You would probably greatly enjoy Maturana and Varela’s Autopoiesis and Cognition (1980).
The "leaps" were nice analogies but poor evidence of anything. The example chats were not surprising completions, considering the prompts.
That being said, my best guess echoes the author's final point: our idea of mind-blowing AI will be accumulated over time, and spread over such a long time it won't feel mind-blowing.
It's 2025 and I'm frustrated that after decades of discussion we still can't get people to be clear about what they mean by consciousness. This article is all about cognitive capacities and behaviors and just assumes that these lead up to / are linked with conscious "experience".
The Global Workspace Theory the author cites is about how we put attention on the most important stuff. Yes, one can make an analogy to how AI models today integrate information, but that's in part because Baars was making a cogsci analog to what 1980s AI models were already doing:
> Bernard Baars derived inspiration for the theory as the cognitive analog of the blackboard system of early artificial intelligence system architectures, where independent programs shared information.
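For concreteness, here's a minimal sketch of that blackboard pattern (all names are illustrative, not from any particular 1980s system): independent knowledge sources post to a shared store, and a simple control step decides what gets "broadcast".

    # Minimal blackboard sketch: independent "programs" (knowledge sources)
    # share one store, and a control step picks what gets broadcast.
    # All names here are illustrative, not from any particular system.

    class Blackboard:
        def __init__(self):
            self.entries = []  # globally readable shared state

        def post(self, source, content, salience):
            self.entries.append((salience, source, content))

        def most_salient(self):
            return max(self.entries) if self.entries else None

    def vision_source(board):
        board.post("vision", "red shape ahead", salience=0.7)

    def memory_source(board):
        board.post("memory", "red usually means stop", salience=0.9)

    board = Blackboard()
    for source in (vision_source, memory_source):
        source(board)  # each source contributes independently
    salience, source, content = board.most_salient()
    print(f"broadcast: {content!r} (from {source})")  # crude "global workspace" moment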
But describing how we highlight information doesn't at all speak to why or how we have qualia of that highlighted thing. Later in the Wikipedia article, Baars' own "theater" metaphor is described, and you'll note it bears a striking resemblance to the "Cartesian Theater" as described by Dennett. This basically just shifts the qualia question: roughly, who is watching the stage?
If a rat can have qualia (and we use rats to test depression meds) but not "recursive self-reflection", and a Scheme interpreter can have "recursive self-reflection" but not conscious experience, then "consciousness" might not be a binary, but it also isn't a "gradient", which would imply you just have more or less of it. We have no clear signal that LLMs, no matter how sophisticated their responses, are _experiencing_ anything.
I'm not taking a position on the consciousness of models; I think a system of [tokenizing/embedding "perception"] -> [transformer-based generation] -> [recursive self-invocation] -> [actions/"tools" to interact with the env], or something similar, could genuinely be a really interesting tool for exploring cognition. But we shouldn't be using LLMs that have been trained on the speech and behaviors of already-conscious beings. Consciousness arose in animals, perhaps multiple times, but not by copying pre-existing conscious creatures. Using language models specifically to examine this stuff muddies the water, because we should absolutely expect them to produce text about an internal experience (we gave them examples of exactly this!) whether or not that experience actually exists.
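For what it's worth, a toy version of the loop described above (perception -> generation -> self-invocation -> tool actions) might look something like this; generate() and run_tool() are stand-ins I made up, not any real library's API:

    # Hypothetical sketch of the loop above: format an observation,
    # generate with a model, optionally call a tool, feed the result back in.
    # generate() and run_tool() are stand-ins, not a real framework's API.

    def generate(context: str) -> str:
        """Stand-in for transformer-based generation."""
        return "THOUGHT: check the board\nACTION: inspect"

    def run_tool(action: str) -> str:
        """Stand-in for an action/'tool' that touches the environment."""
        return f"result of {action}"

    def agent_loop(observation: str, max_steps: int = 3) -> str:
        context = f"OBSERVATION: {observation}"  # the "perception" step
        for _ in range(max_steps):
            output = generate(context)  # generation step
            if "ACTION:" not in output:
                return output  # no action requested; stop
            action = output.split("ACTION:")[-1].strip()
            result = run_tool(action)  # act on the environment
            context += f"\n{output}\nRESULT: {result}"  # recursive self-invocation
        return context

    print(agent_loop("a chessboard mid-game"))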
blamestross•4h ago
We are naturally "animistic" and personifying. The structural twist that lets mirror neurons re-use the hardware for thinking in order to model the behavior of external things is useful and has been critical to our success. Unlearning that animism is HARD. Maintaining awareness that the forces of nature, animals, or even objects don't have feelings, motivations, and narratives of their own is hard work, but it also yields a more accurate and useful model of reality.
I think the dissonance between hardware that wants to interpret the world as a reflection of self and the forced acknowledgement that it is not is uncomfortable. We keep filling that discomfort with whatever rhetoric we can force to fit, and once that schema is in place, it takes a great act of discomfort and bravery to remove or replace it. The arguments and debates about it don't change minds; they just exacerbate the dissonance, making people even more motivated to shout loudly that their model of the situation is right.
I desperately want the answers too. I don't know any of the answers. I don't think our culture (or even our neuroanatomy) is ready for the answers. In the meantime people yell at each other a lot without wanting to listen.
pavlov•4h ago
Seems like this can only be true if you define feelings, emotions and narratives as precisely human ones… But then the question becomes whether humans are really all that similar to each other, either.
danielbln•3h ago