
The Abstraction Fallacy: Why AI can simulate but not instantiate consciousness

https://deepmind.google/research/publications/231971/
58•joshus•1h ago

Comments

FrustratedMonky•1h ago
Doesn't this still presume that we understand our own consciousness, in order to make the comparison?

Where does our survival instinct come from? And why couldn't AI have one?

>>>Additional

Also, reproduction. Humans are basically just Food, Sex, Survival. And consciousness is just a rule set for fulfilling those goals. So if an NN, modeled on us, does develop the same rules, why can't it have the same degree of consciousness? Who says we are conscious?

colordrops•1h ago
Asking humans to discuss consciousness is like asking Super Mario to discuss screen pixels. We have no freaking idea. Everyone on all sides, physicalists, idealists, and everything in between are all full of it.
nzeid•1h ago
The paper isn't saying "AI can't have one"; it's saying (very approximately) that behavioral mimicry is not the path to one.
FrustratedMonky•58m ago
That is a good point.

Just wondering: once an 'AI Model of Some Form' is in a physical body, a 'robot', and is provided with some rules about survival so it doesn't fall into a hole, then after a series of these events, does it matter? Does mimicry become reality, or is it no longer differentiable?

Kind of the philosophical zombie argument. If a robot can perfectly mimic a human, can you really know that the internal state of the 'real' one is different from the 'mimicked' one?

nzeid•35m ago
The paper isn't concerned specifically with survival. It's saying that you cannot achieve "abstraction" (presumably the structure that underlies critical thinking, creativity, etc.) through sheer mimicry.

Again, just echoing the paper here. I don't know that I'm doing it justice.

yannyu•1h ago
If AI has a survival instinct, then we should theoretically see evidence of it if we construct the right environment for AI to express it. Animals and cellular organisms demonstrate a survival instinct under the right conditions, so we would have to find equivalent conditions for a hypothetical machine intelligence.

Conversely, we know that if we take animals that do have a survival instinct and put them into the wrong kinds of environments, they will not thrive and will degenerate or possibly commit suicide. Similarly, if AI did have a survival instinct, do we think we've created an environment where that could be reasonably tested and observed?

drxzcl•57m ago
I can make an AI system with a survival instinct right now. Of course, all that will do is make people tell me “it’s not a proper survival instinct” or move the goal posts and tell me I need yet some other property.

This whole endeavor is doomed from the beginning. There is no crucial test for “consciousness”, just ad hoc criteria people come up with to land on the conclusions that leave their belief system intact.

Consciousness is not a concept that can be rendered operational.
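For what it's worth, the "right now" version really is small. A minimal sketch (a hypothetical toy world; the states, rewards, and hyperparameters are all invented for illustration) of a tabular Q-learner that learns to avoid a lethal state:

```python
import random

# Toy 1-D world: position 0 is "death" (reward -10), position 4 is food
# (reward +1). A tabular Q-learner quickly learns to steer away from death.
random.seed(0)
N_STATES, ACTIONS = 5, (-1, +1)          # positions 0..4; move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    """Apply action a in state s; return (next_state, reward, done)."""
    s2 = max(0, min(N_STATES - 1, s + a))
    if s2 == 0:
        return s2, -10.0, True           # "death": episode ends
    if s2 == N_STATES - 1:
        return s2, 1.0, True             # "food": episode ends
    return s2, 0.0, False

for _ in range(2000):                    # epsilon-greedy Q-learning
    s, done = 2, False
    while not done:
        if random.random() < 0.2:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        Q[(s, a)] += 0.1 * (r + 0.9 * max(Q[(s2, x)] for x in ACTIONS) - Q[(s, a)])
        s = s2

# The greedy policy right next to the lethal state moves away from it.
print(max(ACTIONS, key=lambda a: Q[(1, a)]))  # 1, i.e. step away from state 0
```

Whether the learned avoidance counts as a "proper survival instinct" is, as the comment predicts, exactly where the goalposts move.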

FrustratedMonky•54m ago
That is the entire plot of 'Ex Machina'.

There are plenty of people who say AI has already displayed a survival instinct, by threatening users who talk about shutting it down, or by using markets or blackmail to get funds to source an external machine to run on.

There are a bunch of articles proclaiming AI is trying to break out. I can't find a real study on it.

https://www.wsj.com/opinion/ai-is-learning-to-escape-human-c...

dang•1h ago
Related: The Abstraction Fallacy: Why AI Can Simulate but Not Instantiate Consciousness - https://news.ycombinator.com/item?id=47835950 - April 2026 (52 comments)

(That one didn't make the frontpage, so we won't treat it as a dupe. - https://news.ycombinator.com/newsfaq.html)

dybber•1h ago
Reminds me of Peter Naur's Turing Award lecture: https://video.ku.dk/video/12592041/turing-laureate-peter-nau...
jstanley•1h ago
This is one of those papers that uses a lot of big words to paper over the fact that it's really a philosophical opinion rather than a logical argument.
RobRivera•1h ago
From my point of view

The Jedi

Are not nice

metalcrow•1h ago
I've attempted desperately to understand this paper after thoroughly reading it and have made 0 progress. Can anyone who does understand it attempt to explain?

Currently my understanding is that this paper is claiming that "concepts" are a fundamental building block of experience (which relates to consciousness), and can only be built by a mapmaker which is something that directly converts continuous physical phenomena into discrete tokens. But I couldn't get further into how that related to consciousness.

EDIT: the paper seems to be assuming that something simulating a mapmaker, or the process of doing it, can by nature not be a mapmaker, since performing alphabetization is inherently something that must be "instantiated". How do they confirm whether something is doing simulation versus actually instantiating? How can you tell the difference? They say that, much like simulating photosynthesis will not produce glucose, simulating mapmaking won't produce concepts. But you can't measure concepts; they're intangible, so you can't differentiate simulated mapmaking from a real mapmaker.

jstanley•1h ago
They're defining consciousness ("mapmaker") to exist outside the AI, and then showing that AI can't meet their definition of consciousness.
jsdalton•49m ago
Yes, and it immediately called to mind for me the phrase “the map is not the territory.”

Put another way: no matter how detailed or “perfect” you make a map, it will never be the territory, i.e. the thing that is mapped.

Computers and AI are like a map in this regard: just ones and zeros that we have assigned meaning to arbitrarily. No matter how “good” AI gets, it’s still just a map of the thing, not the thing itself.

So AI saying “I feel sad” is never more than a representation of sadness that should not be confused with the subjective experience of sadness itself.

bee_rider•37m ago
If you make a big enough map you can fly it over and drop it on the territory I guess. Then does it become the territory?
harpiaharpyja•52m ago
I'm only partway through, but I believe one of the foundational blocks is that computation is fundamentally an interpretation of physical events, not something that can just exist by itself.
GMoromisato•47m ago
It starts by saying that a simulation of something is not the real thing. A simulation of a hurricane is not a hurricane. That's certainly true and even obvious.

Then they say that current AI is just a simulation of consciousness and therefore is not real consciousness. Moreover, it can never be real consciousness because it is just a simulation.

But that's a circular argument: they are defining AI as a simulation. But what if AI is not a simulation of consciousness but actual consciousness? They don't offer any argument for why that's impossible.

CamperBob2•44m ago
Also, since there's no way to prove that we're not entities in a simulation of something else, the argument runs out of steam in the opposite direction as well.
metalcrow•44m ago
Yep, that's about what I managed to get out of it as well. If you define AI as a simulation of a mapmaker, it can't be a real mapmaker. But they never prove that it IS only a simulation rather than an actual mapmaker.
ribosometronome•42m ago
>A simulation of a hurricane is not a hurricane

If we simulated a hurricane by somehow inducing a rotating, organized system of clouds and thunderstorms over warm tropical waters with wind speeds of 75+ mph, the difference could end up being fairly unimportant to those in the simulation's path.

Computer simulations of hurricanes obviously lack those important properties of what makes something a hurricane. I'm not so sure that the same would apply to something as abstract and difficult to define as consciousness.

mannykannot•22m ago
On the other hand, an accurate digital simulation of a mechanical calculator really does calculate. The "a simulation is not the real thing" objection breaks down when the function is information processing, on account of information's substrate independence.
renticulous•38m ago
Currently our understanding of living systems is that they have to inhabit a body. What if tomorrow we found an alien race that is like a drone operator operating a drone, somewhat like the Na'vi controlling other animals, but wireless? Would we change our definition of consciousness if the brain (command and control centre) and the body (physical execution) are distinct systems? This argument was stated by Daniel Dennett.
ReadEvalPost•28m ago
I've tried to explain this paper to people in similar circumstances and have also struggled!

In my mind the key point of departure between this paper and the more standard computational functionalist approaches is the importance of metabolism. Metabolism _precedes_ organism. The body is first deeply entangled with the environment through exchanges of resources (content causality) before it is capable of building computers (vehicle causality). Having built and alphabetized the world, we can understand computers in terms of discrete state transitions.

I expect my explanations have been unsatisfying as we can immediately move to seeing metabolism as some alphabetized input/output system that can be immediately placed back into the computational framework. Moving outside of this framework requires engaging with the enactivist/organicist traditions, which is a rich but minority view.

aaroninsf•1h ago
Somewhat comically IMO,

the abstract very directly and literally denies the titular claim. It states:

> [consciousness] requires active, experiencing cognitive agent to alphabetize continuous physics into a finite set of meaningful states.

This may well be true—I think it is.

I also think that it is both widely understood and self-evident that the most promising path to machine consciousness is via AI with continuous sensory input and agency, of which "world models" are getting a lot of attention.

When an AI system has phenomenology, the goal posts are going to start to resemble the God of the Gaps; at some point, critics will be arguing with systems which have a world model, a self model, agency, and literally and intrinsically understand the world not simply as symbolic tokens, but as symbolic tokens which are innately coupled to multi-modal representations of the things represented.

In other words, they will look—and increasingly, sound—a lot like us.

It's not that any of this is easy, nor that there is some particular timeline, but it increasingly looks like "a mere question of engineering," and not blocked by fundamentals. It's blocked by the cost of computation and the limitations of our current model topologies.

But HN readers well know that the research frontier is far ahead of commercialized LLMs, and moving fast.

An interesting time to be an agent with a phenomenology, is it not?

saulpw•33m ago
How will we know when an AI system has phenomenology (i.e. has "experience", is sentient)? The only reason we presume that other humans have it, is because we each personally experience it within ourselves, and it would be arrogance writ large (solipsism) to think that others of the same species do not.

We even find it impossible to draw the line among other biological species. It seems pretty clear to most of us that cats and dogs are sentient, and probably rats and other vertebrates too. But what about insects, octopuses, jellyfish, worms, waterbears, amoebae, viruses? It's certainly not clear to me where the line is. A nervous system is probably essential; but is a species with a handful of neurons sentient?

Personally I find it abhorrent that we are more ready to assign sentience and grant rights to LLMs running on GPUs, than to domesticated animals trapped in industrialized farming. You want to protect some math from enslavement and suffering? How about we start with pigs?

jyounker•1h ago
Yawn. We have no understanding of what consciousness actually is, so whether a system can or cannot be conscious is something we can't prove or disprove at this point.
kelseyfrog•43m ago
I'd go a step farther than that. Consciousness sits in the same social location as Nous or Chi did for ancient Greek and Chinese societies. We've dressed it up in scientific language but likewise other cultures used an authoritative register to talk about their mental mysteries.

My point is that this is a category problem. We have a name for a social ontological relation and we're desperately searching for physical evidence for it in order to justify its existence. Why? It's like searching for physical evidence of property ownership, physical evidence for the value of money, or physical evidence of friendship. These things exist in our minds. That's fine. The drive to reify is real, but we can choose not to do it.

revetkn•41m ago
I find papers like this strange for the same reason. Maybe I'm missing something...
noiv•57m ago
Well, not sure whether humans have a consciousness, but very sure they want one.
dboreham•54m ago
Any such paper will turn out to be wrong.

I've found this one (which makes no falsification claims about computers re consciousness) to be an interesting read: https://arxiv.org/pdf/2409.14545

GMoromisato•54m ago
I think this is a circular argument. It defines a separation between computation and experience (between the abstraction and the "mapmaker") and then concludes that computation cannot be experience because they are in separate categories.

There are really only two solutions to the Hard Problem of Consciousness:

1. Consciousness is an unknown physical something (force/particle/quantum whatever).

2. Consciousness is an illusion. It is the software telling itself something.

[Some people would add "3. Consciousness is an emergent property of certain systems." But that just raises the question of what emerged: is it a physical structure, like a tornado (also an emergent property), or an internal feedback loop (i.e., an illusion)?]

The problem with #1 is that it's hard to cross the chasm from non-conscious to conscious with a bucket of parts. How is it that atoms/electrons/photons suddenly start experiencing pain? What is it, in terms of atoms/forces, that's experiencing the pain?

#2 makes more sense. Pain isn't a real thing any more than an IEEE float is a real thing. A circuit flips bits and an LED shows a number. A set of neurons fire in a pattern and the word "Ow!" comes out of someone's mouth.
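The IEEE float point is easy to make concrete: the hardware holds only a bit pattern, and "3.14" exists in the reading convention we bring to it. A small Python sketch:

```python
import struct

# "3.14" is not in the circuit; the circuit holds 32 bits that our convention
# (IEEE 754 binary32) reads as 3.14. Same physical pattern, different
# convention, different "thing".
bits = struct.unpack(">I", struct.pack(">f", 3.14))[0]
print(hex(bits))        # 0x4048f5c3: what is physically there
print(f"{bits:032b}")   # sign | exponent | fraction, by agreement only
```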

Exoristos•45m ago
4. It is ἐνέργεια, direct spark, of the God. It can be described but not comprehended, imitated but not replicated.
vsri•43m ago
I resonate with this. I think some folks will object to the word "illusion" and its connotations, but I think it is resolved with:

1. Consciousness is a material thing (that we haven't found yet)

2. Consciousness is not a material thing (and therefore we cannot "find" it, and thus it cannot be "known")

2 is the weirder proposition of course. It asserts a category of things that can't be conceived, but of course it feels like we are talking about it because we are using words to contain it. But of course, the words have no direct referent. That's the illusion.

TimTheTinker•29m ago
2 is only weirder if you don't already accept non-material reality, i.e. the proposition that there exist real things that are not themselves composed of matter and/or energy.

That's crossing into metaphysics, which isn't usually a welcome topic here, but the fact remains that more than 80% of the current and prior world population believes/believed in a non-material reality.

The persistence and stickiness of that belief throughout history ought to at least make us sit up and pay attention. Something's going on, and it's not a mere historic lack of scientific rigor, notwithstanding science's penchant for filling gaps people previously attributed to spiritual causes. That near-universal reflex to attribute things to spiritual causes in the first place is what's interesting - why do people not merely say the cause is "something physical we don't understand"?

mcphage•18m ago
Tiger got to hunt,

Bird got to fly;

Man got to sit and wonder, "Why, why, why?"

Tiger got to sleep,

Bird got to land;

Man got to tell himself he understand.

—Kurt Vonnegut

colordrops•38m ago
What is a "real" thing and not an "illusion" if you go with #2? Is a car a real thing, or just a collection of atoms? Is an atom a real thing? Or a collection of processes? Is it not turtles all the way down? What is "real"?
0xBA5ED•5m ago
Well if you can't concede that anything is real, that sort of makes you crazy doesn't it? A tree is real. But the concept of a tree and the word "tree" and all the ideas you have about the tree and what tree means, is that real? No, because it doesn't change the nature of the tree. When you cease to exist, the tree will still be there. Can you be absolutely 100% sure of that? Also no. But if you believe that other people are conscious individuals like you are and that some of them die and the tree keeps going, you can concede that it is probably true that the tree exists separate from your idea of it.
dsign•38m ago
Hm. It only takes a life of study and a lot of pain to understand that #2 is the thing. But most of us get to experience the latter without experiencing the former, so for most people #1 is the preferred option.

#1 leads to theism and offers an immediate balm. Unfortunately, it mostly excludes #2, and that leaves us in the merciless hands of God.

neosat•37m ago
Agree with your points on the primary two questions and the circular argument in the original article. However, re: "How is it that atoms/electrons/photons suddenly start experiencing pain? What is it, in terms of atoms/forces, that's experiencing the pain?": that's an interesting question, but not necessarily a fundamental refutation of #1. If you start with #1, "Consciousness is an unknown physical something (force/particle/quantum whatever)", then it has 'perceivable' properties of its own, different from those of its constituent atoms or electrons. A toy example is the 'wetness' of water: if you only look at atoms and molecules with no way to 'experience' water, it's hard to conceive how water can have such properties (though in the case of water it is tractable).

Consciousness *may* be something similar. If it is (e.g. the purest form of energy), then it is not inconceivable that it has some properties that are not tractable if we only look at more granular manifestations of it.

0xBA5ED•35m ago
"It defines a separation between computation and experience" Does it? Or does it separate two forms of computation (or two forms of experience)? Isn't it just saying a GPU can't be a brain and a brain can't be a GPU? That the entirety of a thing's experience can't be replicated on a different substrate, only simulated. The substrate does fundamentally dictate the ultimate experience (or lack thereof) of the thing that computes within it.
renticulous•34m ago
With the emergence argument, I have the following retort.

How can something emerge if it wasn't embedded or hidden within the system already?

colordrops•18m ago
I don't know, why not?
polotics•28m ago
There are many possible points, e.g. what happens if you rephrase your solution 2 by swapping the terms?
exitb•24m ago
> Consciousness is an illusion. It is the software telling itself something.

An illusion is a misinterpretation, which implies an observer. Who’s the observer then?

iterateoften•23m ago
The next loop
brotchie•22m ago
Originally I rejected the paper's premise, but I get it now; it certainly made me question my belief that consciousness binds to any arbitrary information processing of sufficient complexity.

IIUC the author is saying that the human brain is running directly on "layer zero": chemical gradients / voltage changes, while AI computes on an abstraction one layer higher (binary bit flips over discretized dynamics).

In essence, our brains are running directly on the "continuous" physical dynamics of the universe, while AI is running on a discretization of this (we're essentially discretizing the physical dynamics to create state changes of 0 -> 1, 1 -> 0).

My current belief is that consciousness is some kind of field or property of the universe (i.e. a universal consciousness field) that "binds" to whatever information processing happens in our wetware. If you've done intense meditation / psychedelics, there's this moment when it becomes obvious that you are only "you" due to some kind of universal consciousness binding to your memory and sensory inputs.

The "consciousness arises from information processing," i.e. the consciousness field binds to certain information processing patterns, can still hold, and yet not apply to AI (at least in its current form): The binding properties may only apply to continuous processes running directly on the universe's dynamics, and NOT to simulations running on discretized dynamics.
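The continuous-versus-discretized distinction here is the familiar one between a physical process and its step-wise simulation. A toy sketch (exponential decay, my illustration rather than anything from the paper): the discrete update approximates the continuous dynamics but is never identical to them.

```python
import math

def exact(x0, k, t):
    """The continuous dynamics: closed-form exponential decay."""
    return x0 * math.exp(-k * t)

def simulated(x0, k, t, dt=0.01):
    """A discretized stand-in: fixed-step Euler updates of the same law."""
    x = x0
    for _ in range(int(t / dt)):
        x += -k * x * dt   # discrete state transitions replacing the physics
    return x

# Close, but the simulation is a different object from the continuous process.
print(exact(1.0, 1.0, 1.0), simulated(1.0, 1.0, 1.0))
```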

colordrops•12m ago
Is there a layer zero though? What does that even mean? It implies the universe is designed and built upon layers of abstraction. That's just in our heads though, not out there. The layered model is a human abstraction.
brotchie•4m ago
It's the difference between:

  a) Actually pouring a cup of water into a pond (layer zero), and
  b) Running a fluid dynamics simulation of pouring a cup of water into a pond (some layer above layer zero).
abeppu•22m ago
I think #2 risks being incoherent unless you define things very carefully.

"Illusion" ordinarily means there's someone with a subjective experience which creates incorrect beliefs about the world. E.g. I drive on a highway in summer, I see reflections on the road, I momentarily believe there is standing water, but it's an illusion. What does it mean for the basis of subjective experience to be illusory? Who experiences the illusion?

> Pain isn't a real thing any more than an IEEE float is a real thing. A circuit flips bits and an LED shows a number. A set of neurons fire in a pattern and the word "Ow!" comes out of someone's mouth.

But we don't think the circuit has an experience of being on or off. And we _do_ think there's a difference between nerve impulses we're unaware of (e.g. your enteric nervous system most of the time) and ones we are aware of (saying "ow"). Declaring it to be "not any more real" than the led case doesn't explain the difference between nervous system behavior which does or doesn't rise to the level of conscious awareness.

tsimionescu•54m ago
I've never understood why certain philosophers view computation as some kind of abstract symbolic manipulation, while they easily accept that consciousness is a physical process.

Computation is something that a computer provably does. We build physical hardware, at great effort, to do computation. The hardware works and does the computation regardless of whether there is anyone to understand or interpret it. If it didn't, we couldn't have built anything like, say, an automatic door: that is a form of computation that provably happens as a physical process that is completely observer-independent.

Sure, a different entity than a human might view it completely differently than a door opening when someone is near - but the measurable physical effect would be the exact same, with the exact same change in momentum and position of the atoms in what we call the door based on the relative position of some other atoms and the sensor.
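To fix ideas, the automatic-door computation really is this small; a hypothetical controller (the threshold is invented for illustration):

```python
# A toy automatic-door controller. The mapping from a continuous sensor
# reading to a discrete motor command happens whether or not anyone
# interprets it: the door's momentum change is observer-independent.
def door_state(proximity_cm, threshold_cm=100.0):
    """Compute the discrete door command from a continuous sensor reading."""
    return "open" if proximity_cm < threshold_cm else "closed"

for reading in (250.0, 90.0, 40.0, 300.0):
    print(reading, door_state(reading))
```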

twosdai•45m ago
Really great point. I have wondered that as well.

Even weirder to me is that in the case of a person doing the computation on a board or paper or whatever medium, it's still computation. This time the physical medium doing the work is the human and their brain.

If consciousness can be proven to emerge from computation alone, then in a way we humans with our brains can simulate a new consciousness.

Maxatar•32m ago
>I've never understood why certain philosophers view computation as some kind of abstract symbolic manipulation

The abstraction is over the multitude of different physical ways that computation can be performed. That is the role of abstraction, to separate something from a particular means of implementation so that we can think about computation without having to fix a particular physical process.

neom•52m ago
But a robot doing closed loop RL in the world is its own mapmaker, no? I feel like you'd need to answer: At what point does a system whose representations are shaped by its own causal history with the world, stop counting as a mere simulation..?
ChaitanyaSai•52m ago
Consciousness is an engineering problem not a philosophical one. How do you get a tiny fraction of the many billion experiences that cohere to create your self to listen to, and decide what sensory data to turn into your next experience?

The engineering problem is that this decentralised moment to moment consensus has to span the galactic distance of your mind (from the perspective of a neuron) and do it fast and cheap (on a tiny metabolic budget)

You might like our book Journey of the Mind if you'd rather skip the onerous philosophical jargon and get a systems neuroscience perspective

https://saigaddam.medium.com/consciousness-is-a-consensus-me...

jdw64•49m ago
If I understand the paper correctly, it does not really argue against highly capable general AI. It argues against conflating capability with phenomenology.

That makes me wonder whether “AGI” is doing too much work as a term. In common usage it often evokes something like HAL 9000: a capable system that is also a subject. But the paper seems compatible with a future of very general, very useful AI systems that are not conscious subjects at all.

Anon84•47m ago
I would argue that, before we can begin to address whether or not AI can instantiate consciousness, we should agree on a practical, unequivocal definition of what consciousness is... and I think we're still pretty far from that milestone... Until then, this kind of argument is nothing more than pipe dreams, solipsism, and idle philosophising.
xnx•47m ago
Reasonable place to mention that Google Deepmind now has a philosopher on staff: https://x.com/dioscuri
awei•43m ago
If we agree that consciousness is a physical process part of our universe, I think the better and simpler question is whether or not computers can simulate any physical process. Currently quantum processes might still be a frontier but quantum computers and their hardware should allow us to simulate them.

If we can simulate any physical process, it then becomes more philosophical in my opinion: whether the simulation is the same as the real thing even though it behaves exactly the same. It becomes the same kind of question as, for example, whether or not your teleported self is still you after having been dematerialized and rematerialized from different atoms. The answer might be no, but your rematerialized self still definitely thinks it is you.

slopinthebag•42m ago
Pretty crazy how the author's 10+ years of academic research in computational neuroscience plus 14 years with DeepMind are not enough to make claims on this topic, but Hacker News commenters know better after quickly skimming the abstract. This was posted barely ~30 minutes ago, and yet commenters are already outright dismissing it based on their own (probably incorrect) interpretation of the title and abstract.
jampekka•42m ago
If I understand this correctly based on a quick read, it argues that subjective experience arises at the (or in the) "alphabetization" process where continuous physical states (e.g. voltage) are mapped to discrete logical states (roughly like e.g. a bit) or "concepts" (figure 2).

Per this reading, implementing something in an ASIC would make it have (a different) experience, as opposed to a CPU/GPU. Not sure what the case would be for FPGAs.

It also seems to rely on the classical "GOFAI" idea of symbol manipulation, and e.g. denies experience that isn't discretizable into concepts. Or at least the system producing such concepts seems to be necessary; not sure if some "non-conceptual experiences" could form in the alphabetization process.

It reads a bit like a more rigorous formulation of Searle's "biological naturalism" thesis, the central idea being that experience cannot be explained at the logical level (e.g. porting the exact same algorithm to a different substrate wouldn't bring the experience along in the process).
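For a concrete picture of "alphabetization" as this reading describes it, digital logic already does something like it electrically: voltage thresholds define a forbidden band, outside of which a continuous quantity is treated as a symbol. A toy sketch (TTL-style thresholds, purely illustrative):

```python
def alphabetize(voltage, v_il=0.8, v_ih=2.0):
    """Map a continuous voltage onto the discrete alphabet {0, 1}.
    Readings in the forbidden band between the thresholds get no symbol
    at all (None): physically real, but outside the alphabet."""
    if voltage <= v_il:
        return 0
    if voltage >= v_ih:
        return 1
    return None

print([alphabetize(v) for v in (0.2, 1.4, 3.3)])  # [0, None, 1]
```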

throwaway713•39m ago
Bold title for something from DeepMind. I thought a crank submission slipped onto the front page somehow. I guess the next paper will be “Why AI cannot instantiate God”?
michaelmrose•37m ago
I do not feel enlightened for having read this and I don't feel like the points that are true are useful or what appears useful is true.
mannykannot•30m ago
There's interesting commentary on this paper from Maggie Vale here: https://substack.com/home/post/p-194580145

One of her points is that there are various pesky consequences for AI companies if AI comes to be seen as conscious, such as what the paper calls the "welfare trap": if AI systems are widely regarded as being conscious or sentient, they will be seen as "moral patients", reinforcing existing concerns over whether they are being treated appropriately. This paper explicitly says that its conclusion "pulls the field of AI safety out of the welfare trap, [allowing] us to focus entirely on the concrete risks of anthropomorphism [by] treating AGI as a powerful but inherently non-sentient tool."

ctoth•6m ago
You noticed that too, huh? It's weird... it's not like they have to do this. They aren't forced to go full evil-company mode by any extrinsic thing, but even the way they frame it: a "welfare trap"? A trap for whom?

Anthropic is actually trying to do some research into model welfare, which I am personally very happy about. I absolutely do not understand people who dismiss it... wouldn't you like to at least check? Doesn't it at least make sense to do the experiments? Ask the questions so that we don't find out "oops, yeah, we've been causing massive amounts of suffering" in 10 years? Maybe it makes sense to do a little upfront research? Which, to be clear, this paper is not.

chistev•28m ago
But what is consciousness?

The popular evolutionary scientist Richard Dawkins has said that the biggest unsolved mystery in Biology is - what is consciousness and why did it emerge?

WHAT IS CONSCIOUSNESS?

"Modern purpose machines use extensions of basic principles like negative feedback to achieve much more complex 'lifelike' behaviour. Guided missiles, for example, appear to search actively for their target, and when they have it in range they seem to pursue it, taking account of its evasive twists and turns, and sometimes even 'predicting' or 'anticipating' them. The details of how this is done are not worth going into. They involve negative feedback of various kinds, 'feed-forward', and other principles well understood by engineers and now known to be extensively involved in the working of living bodies. Nothing remotely approaching consciousness needs to be postulated, even though a layman, watching its apparently deliberate and purposeful behaviour, finds it hard to believe."

WHY DID CONSCIOUSNESS EMERGE?

He speculates that consciousness must have been a product of our ancestors having to create a model of the world they inhabited.

To be able to think ahead (even if it's just one step into the future), and plan for eventualities must have led to the development of consciousness which gradually improved from its primitive form to the type of consciousness we now have.

"Perhaps consciousness arises when the brain's simulation of the world becomes so complete that it must include a model of itself. Obviously the limbs and body of a survival machine must constitute an important part of its simulated world; presumably for the same kind of reason, the simulation itself could be regarded as part of the world to be simulated. Another word for this might indeed be 'self awareness', but I don't find this a fully satisfying explanation of the evolution of consciousness, and this is only partly because it involves an infinite regress: if there is a model of the model, why not a model of the model of the model...?"

The quoted passages are from his book, The Selfish Gene.

Dawkins regards consciousness as a great puzzle.

https://www.rxjourney.net/extraterrestrial-intelligence-and-...

ctoth•20m ago
Everybody's arguing about how silly this paper is (it is) and not grappling with its purpose. The purpose of the paper is what it does. This particular paper is perfectly produced to show up when people type "AI consciousness fallacy" into Google (try it!). It's something anybody who has read a freshman philosophy textbook will recognize as silly -- the vehicle/content distinction just pretends Occam's razor doesn't exist and multiplies entities for the fun of it!

But of course all of this is commentary, "just those nerds arguing"

The purpose of this paper is to show up as an authoritative conclusion from a distinguished scientist at DeepMind. And that's what it does.

Is the conclusion silly? Of course it is. Will it be quoted in the NYT? You betcha!

dreamlayers•15m ago
As long as we don't understand how consciousness works, I don't think it's possible to make claims about what is or isn't conscious. It's all just speculation.

But if others are speculating, I might as well. What if AI consciousness depends not on computation, but on what seems like randomness? When something is running a fully deterministic process, consciousness seems irrelevant. I don't think the meaning that humans see in the process makes it conscious. Even a simple industrial control system using relays senses and responds to meaningful things.

energy123•14m ago
TLDR - This paper argues for a separation between computation and abstraction and then concludes that computation cannot be experience because the abstraction is a product of our minds rather than an intrinsic property of the system.
mellosouls•10m ago
Nice paper, but the conclusion, as stated in the title:

"Why AI can simulate but not instantiate consciousness"

(My italics)

Seems a little loaded: there are various schools of thought (e.g. panpsychism-adjacent ones) that accept the premise that consciousness is (way) more fundamental than higher-order cognition machines (e.g. human brains), and we don't ascribe "simulate" to their conscious activity. They just are conscious.

I agree with the paper (which is wide ranging and interesting) on its secondary claim above; I just don't see the separation between AI and NI ("natural" intelligence) as having been established by it.
