Quarkdown: A modern Markdown-based typesetting system

https://github.com/iamgio/quarkdown
3•asicsp•7m ago•0 comments

Poison Pill: Is the killer behind 1982 Tylenol poisonings still on the loose?

https://www.trulyadventure.us/poison-pill
15•TMWNN•1h ago•8 comments

AI makes the humanities more important, but also weirder

https://resobscura.substack.com/p/ai-makes-the-humanities-more-important
97•findhorn•4h ago•39 comments

The Metamorphosis of Prime Intellect (1994)

https://localroger.com/prime-intellect/mopiall.html
41•lawrenceyan•4h ago•19 comments

My AI skeptic friends are all nuts

https://fly.io/blog/youre-all-nuts/
1347•tabletcorry•11h ago•1693 comments

Why GUIs are built at least 2.5 times

https://patricia.no/2025/05/30/why_lean_software_dev_is_wrong.html
44•mpweiher•2d ago•23 comments

Cloudflare builds OAuth with Claude and publishes all the prompts

https://github.com/cloudflare/workers-oauth-provider/
531•gregorywegory•17h ago•353 comments

Demodesk (YC W19) Is Hiring Rails Engineers

https://demodesk.com/careers
1•alxppp•1h ago

Ask HN: Who is hiring? (June 2025)

301•whoishiring•17h ago•294 comments

How to Store Data on Paper?

https://www.monperrus.net/martin/store-data-paper
62•mofosyne•3d ago•20 comments

Conformance checking at MongoDB: Testing that our code matches our TLA+ specs

https://www.mongodb.com/blog/post/engineering/conformance-checking-at-mongodb-testing-our-code-matches-our-tla-specs
72•todsacerdoti•10h ago•26 comments

Show HN: Kan.bn – An open-source alternative to Trello

https://github.com/kanbn/kan
402•henryball•22h ago•179 comments

Show HN: A toy version of Wireshark (student project)

https://github.com/lixiasky/vanta
217•lixiasky•16h ago•66 comments

How to post when no one is reading

https://www.jeetmehta.com/posts/thrive-in-obscurity
551•j4mehta•1d ago•233 comments

Show HN: I build one absurd web project every month

https://absurd.website
206•absurdwebsite•12h ago•46 comments

A Complete Guide to Meta Prompting

https://www.prompthub.us/blog/a-complete-guide-to-meta-prompting
9•saikatsg•2d ago•1 comments

Sid Meier's Pirates – In-depth (2017)

https://shot97retro.blogspot.com/2017/12/sid-meiers-pirates-in-depth-written.html
48•benbreen•3d ago•14 comments

Teaching Program Verification in Dafny at Amazon (2023)

https://dafny.org/blog/2023/12/15/teaching-program-verification-in-dafny-at-amazon/
35•Jtsummers•10h ago•8 comments

MonsterUI: Python library for building front end UIs quickly in FastHTML apps

https://www.answer.ai/posts/2025-01-15-monsterui.html
54•indigodaddy•11h ago•15 comments

Magic Ink: Information Software and the Graphical Interface

https://worrydream.com/MagicInk/
17•blobcode•3d ago•3 comments

Show HN: Onlook – Open-source, visual-first Cursor for designers

https://github.com/onlook-dev/onlook
362•hoakiet98•4d ago•78 comments

Largest punk archive to find new home at MTSU's Center for Popular Music

https://mtsunews.com/worlds-largest-punk-archive-moves-to-center-for-popular-music/
35•gnabgib•9h ago•3 comments

Japanese scientists develop artificial blood compatible with all blood types

https://www.tokyoweekender.com/entertainment/tech-trends/japanese-scientists-develop-artificial-blood/
184•Geekette•10h ago•39 comments

Younger generations less likely to have dementia, study suggests

https://www.theguardian.com/society/2025/jun/02/younger-generations-less-likely-dementia-study
99•robaato•16h ago•91 comments

ThorVG: Super Lightweight Vector Graphics Engine

https://www.thorvg.org/about
114•elcritch•21h ago•35 comments

Ask HN: How do I learn practical electronic repair?

101•juanse•3d ago•66 comments

Typing 118 WPM broke my brain in the right ways

http://balaji-amg.surge.sh/blog/typing-118-wpm-brain-rewiring
133•b0a04gl•12h ago•178 comments

CVE 2025 31200

https://blog.noahhw.dev/posts/cve-2025-31200/
113•todsacerdoti•13h ago•24 comments

Show HN: Penny-1.7B Irish Penny Journal style transfer

https://huggingface.co/dleemiller/Penny-1.7B
138•deepsquirrelnet•16h ago•71 comments

The Princeton INTERCAL Compiler's source code

https://esoteric.codes/blog/published-for-the-first-time-the-original-intercal72-compiler-code
136•surprisetalk•1d ago•36 comments

Dear diary, today the user asked me if I'm alive

https://blog.fsck.com/2025/05/28/dear-diary-the-user-asked-me-if-im-alive/
46•obrajesse•4d ago

Comments

Mithriil•3d ago
Makes me think of the Google employee who had a conversation with Google's LLM back then, which got out and triggered a lot of discussion about consciousness, etc.
aswegs8•1d ago
Didn't he insist that the LLM has consciousness and get fired because of this?
koolala•1d ago
He said "sentient" which seems like it could be true.
kevindamm•1d ago
He got fired for violating the NDA which said not to share outside of the company, when he shared his conversations with a lawyer in search of representation for the LLM. His opinion on the LLM's level of sentience had no bearing on the decision.
DangitBobby•1d ago
How can you know if the law protects you from breach of the NDA for illegal suppression of e.g. sexual assault allegations or whistleblowing for financial crimes without being able to disclose the matter in question to legal counsel?
kevindamm•1d ago
You're right, if it had only been with private legal counsel it would probably have been protected, but he also shared his arguments and the complete chat log as a Medium post.
aswegs8•1d ago
I mean, what is consciousness, really? Is there really any qualitative difference? It feels like something that emerges out of complexity. Once models are able to update their weights in real time and form "memories", does that make them conscious?
jstanley•1d ago
You can't tell if another being is conscious without being on the inside of it.
layer8•1d ago
When you are inside of it in that sense, how would you be able to tell that it isn’t conscious? Doesn’t “telling” imply consciousness?
xeonmc•1d ago
Perhaps one day a criterion will be found for the equivalent of Turing-completeness but for consciousness — any system which contains the necessary elements of introspective complexity, no matter how varied or outlandish or inefficient in its implementation, would invariably develop consciousness over its course. Kind of like the handwaved premise in 17776.
andy99•1d ago
I've read an (untestable) theory that consciousness is a property of matter, so everything has it, and we're just sort of the sum of the desires and feelings of our constituent matter.

In that construct, a computer program would never be conscious because it's a simulation, it doesn't have the constituent consciousness property.

I don't believe or not believe the consciousness-as-a-property-of-matter part but I do think programs can't be conscious because consciousness must sit outside of what they simulate.

hhh•1d ago
why would it not be conscious in that construct? the bits exist physically just the same
bee_rider•1d ago
The matter would have consciousness, but I guess the simulation would be of a different consciousness, made of processing.

Although, our brains also are doing computation, and also seem to have consciousness. Are they linked? Why/why not…

This is animism, right? It is a religious belief. Not really subject to testing.

andy99•1d ago
I am conscious, and I can act as a NAND gate. If you set up a billion people and had them act as a computer (was that in 3 Body Problem?), it would not be conscious of what it was doing.
xeonmc•1d ago
How about simulation programs that are impure, i.e. those which include I/O in their loop? After all, taking the Turing-completeness analogy further, while a machine that satisfies said criterion is capable of universal computation, actually performing a computation still requires an external input specified outside of the program itself. Perhaps it might turn out that stimuli residing outside of the simulated system are a necessary condition for non-incompleteness of consciousness, as a seed of relative non-determinism with respect to the program's internal specification?
koolala•1d ago
Computers are made of matter. The Earth would be conscious too? A consciousness could contain consciousnesses.
JadeNB•1d ago
> In that construct, a computer program would never be conscious because it's a simulation, it doesn't have the constituent consciousness property.

A computer program is the result of electrical and mechanical interactions that manifest in macroscopically observable effects. So are we. Why, if all matter is conscious, should the one count but not the other?

ghurtado•1d ago
> Perhaps one day a criterion will be found for the equivalent of Turing-completeness but for consciousness

My money is on mankind perpetually transforming the definition to ensure only our species can fit within.

We've been doing that long enough with higher order animals anyway.

aziaziazi•1d ago
It's fascinating how visceral the reactions are when someone introduces a comparison between humans and other animals that doesn't start from the conclusion that humans are superior anyway.

There has been reflection around the term speciesism (and anti-speciesism), and most people today stand for speciesism.

Interestingly, the reflection is close to the debate on racism and anti-racism (where most people settled on anti-racism, to the point that there isn't much debate anymore), but race is only an informal classification that doesn't hold much meaning in biological terms, unlike species.

andy99•1d ago
When I read comments like this (and I've read seemingly hundreds of them), I wonder if some other people aren't conscious / sentient? I don't know how anyone who experiences consciousness (as I experience it) could think that an algorithm could experience it.
jstanley•1d ago
I also read comments like that and wonder if other people aren't conscious.

I don't know how anyone who experiences consciousness could be confused about what it means to be conscious, or (in other threads, not this one) could argue that consciousness is "an illusion". (Consciousness is not the illusion, it's the audience!).

However I don't see why you don't think an algorithm could be conscious? Why do you think the processes that produce your own consciousness could not be computable?

andy99•1d ago
I think the burden of proof is on showing that they are. Since we have no idea what consciousness is or how it works, I don't see how we could assume it clearly follows from anything.
jstanley•1d ago
I'm not saying all algorithms are conscious. I'm saying I don't think it's obvious that algorithms can not be conscious.
shayway•1d ago
> I don't know how anyone who experiences consciousness could be confused about what it means to be conscious, or (in other threads, not this one) could argue that consciousness is "an illusion". (Consciousness is not the illusion, it's the audience!).

Do you mean to say there are objective criteria for consciousness? Could you expand on that?

parpfish•1d ago
When people say that consciousness is "an illusion", I think they mean that the illusion is consciousness having the ability to direct our actions; consciousness is really an epiphenomenon. Our body processes sensory inputs and converts them into actions (including speech) all on its own, without a conscious decision maker directing anything or making decisions. The experience of consciousness is us just going along for the ride, like Maggie Simpson playing with her toy steering wheel in The Simpsons intro.
xeonmc•1d ago
Corgi-towed ear go zoom.
saltcured•1d ago
I think you're conflating "consciousness" and "free will"? Most of what you are explaining is the question of free will to make decisions and take action.

The consciousness illusion is a different focus, as to whether our experience of alertness, thought, and perception even has the temporal and causal elements we tend to assume. This problem has many layers.

One example is the visual system and the illusion of a constantly perceived visual field that is really a synthesized memory of many smaller visual samples from the frequent saccades of our eyes. We don't see our own eye movement while it is happening. We also don't usually see our retinal nerve blindspot, nor recognize the inherent asynchrony of some of our different senses. Our conscious experience fuses all this together, and well-known perceptual illusions and magic tricks generally exploit the gaps in this process.

But there are many other layers, such as full-blown hallucination, where the mind constructs sensory perceptions that do not match our physical stimuli. There are many more subtle layers in between. Delusional beliefs can be felt as "fact" in a way that suppresses internalization of contradictory perceptions.

More subtly, people often post-rationalize causal relationships between social experiences, emotional state, and actions in ways that are inaccurate. Psychologists talk about "cognitive distortion" as an overall concept for this fuzzy area where people's internal state biases their perception and belief derived from physical stimuli.

add-sub-mul-div•1d ago
The fact that our consciousness is so mysterious that we are unable to even begin to truly understand it is the biggest clue to why our software isn't getting close to it.

And I'm not talking about spirituality; it could all be perfectly deterministic on some level, with that level being centuries or millennia or forever outside of our grasp.

ghurtado•1d ago
> The fact that our consciousness is so mysterious that we are unable to even begin to truly understand it is the biggest clue to why our software isn't getting close to it.

You offer a pretty big statement without any backing whatsoever.

Lots of things are imitable without understanding how they work.

Mankind was making fire for hundreds of thousands of years before knowing that it was the rapid oxidation of combustible materials.

Claiming that it wasn't fire because it was complicated to understand would be ridiculous.

kens•1d ago
Yes, some comments make me wonder about other people; I have three hypotheses: a) Some people experience consciousness very differently, similar to how some people have no mental imagery (aphantasia). b) The confusion is due to ill-defined terms. c) People are semi-trolling/debating/devils-advocating. The most interesting would be if people have widely different internal experiences, but I don't know how you could tell.

I read an interesting book recently, "Determined", which argues that free will doesn't exist. It was more convincing than I expected. However, the chapters on chaos and quantum mechanics were a mess and made me skeptical of the rest of the book.

gpm•1d ago
I'd like to offer d) People are using the word without fully having considered what it means, and are thus saying things that don't make sense even given their internal experiences.
layer8•1d ago
It’s something like b). Any description of consciousness is somewhat self-referential, so it remains unclear what, if any, substance there is to the concept.
gpm•1d ago
Computable doesn't really make sense here IMO. As you say, consciousness is not the illusion, it's the audience: it's the thing receiving the output not just an evaluation of a mathematical function.

The better question is why couldn't a consciousness attach itself to (be the audience for) a computation. Since we really don't understand anything significant about it, questions like this are next to impossible to disprove. At the same time, since we've never seen anything except humans start talking about consciousness spontaneously*, it seems like a reasonable guess to me that LLMs/the machines running them are not in fact conscious, simply because of their dissimilarity and the lack of other evidence.

* I note LLMs did not do so spontaneously, they did so because they were trained to mimic human output which does so. Because we fully understand the deterministic process by which they started talking about consciousness (a series of mathematical operations), them doing so was an inevitability regardless of whether they are conscious, and as such it is not evidence for their consciousness.

jstanley•1d ago
> it's the thing receiving the output not just an evaluation of a mathematical function.

how do you know it's not just an evaluation of a mathematical function?

gpm•1d ago
The definition is wrong.

A mathematical function is a set, possibly infinite, of pairs of abstract elements (commonly defined via sets) where no two pairs share the same first element. Nothing less, nothing more.

Computation is the act of determining the abstract output (second element in the pair) for a given abstract input (first element in the pair).

Nothing in those definitions is capable of expressing the concept of having perceptions (consciousness). That's not an abstract thing.

This isn't to say the concrete thing doing the computation couldn't in principle be conscious, just that it doesn't definitionally make sense for the math itself to be conscious.
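
The definition above, in symbols (just a sketch of the standard set-theoretic version; A and B are whatever sets the elements come from):

    f \subseteq A \times B \quad \text{such that} \quad
    \forall a\, \forall b\, \forall b'\colon\ \big( (a,b) \in f \wedge (a,b') \in f \big) \rightarrow b = b'

Computation, in that sense, is just: given an input a, produce the unique b with (a,b) \in f.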

jstanley•1d ago
I agree with everything you said up to "Nothing in those definitions is capable of expressing the concept of having perceptions (consciousness)".

Do you think the universe is not computable?

If you think the universe is computable, and you think that you exist in the universe, and you think that you are conscious, don't you think it follows that consciousness can exist within mathematical structures?

gpm•1d ago
> Do you think the universe is not computable?

Yes, definitionally not, the universe isn't an abstract object let alone one in the shape of a function.

You might, in principle, be able to precisely predict the future of the universe given perfect information using a precise model of the universe. That model, a mathematical function, would be computable. It would be accurate to say that the model describes the universe, but not that the model is the universe.

The thing about mathematical structures is that they are concepts, not things, I feel confident in saying that concepts aren't conscious.

jstanley•11h ago
But you are inside the universe. The mathematical structure of the universe can contain all of your perceptions and to you they will "feel real". Indeed, to you they are real.

If you had a perfectly accurate universe simulation, do you think the people inside the simulation would not be conscious?

If they're not conscious, it's not a perfectly accurate simulation.

And if it is possible to have a perfectly accurate simulation, then (like you said) all of the contents of the universe were "there" all along inside the giant mathematical structure. You don't need anyone to run the simulator!

All of the contents of the universe, the apparent flow of time, our thoughts and feelings, our consciousness, all lives inside this incomprehensibly large mathematical structure.

This is how I believe reality works. The universe exists inside mathematics the way 42 does. You don't need a calculator to show the number 42 in order for the number 42 to exist. Running the simulator can expose the contents of the universe to someone outside it, but everything on the inside is independent of the simulator.

You might ask "why this reality and not any other?" and I would say they all exist equally well, we just happen to notice this one because we're inside this one.

layer8•1d ago
A camera perceives its surroundings (add some image recognition to that). An LLM perceives its inputs. Things like that can be fully mathematically described. The question is, how is consciousness different from that? Introspecting, all there is is perception, arrows pointing to the things being perceived, where "things" includes some of the arrows themselves. Is there anything else? What prevents any of this from being fully mathematically described?
gpm•1d ago
> The question is, how is consciousness different from that

A consciousness experiences perceptions; if you don't, I won't be able to describe this to you. If you do, it should be clear what I mean by that.

We have no evidence that either a camera or a GPU executing an LLM experiences perception. Certainly they react to physical stimuli, but so does an atom, physical reaction is not the definition of experience I am referring to when I say perception. We also have no evidence that they do not, except for the lack of evidence to the contrary.

We have some reason to believe that other people do experience perception, in that they spontaneously describe experiencing things that are similar to our experiences, and it's surprising that they would do that if they don't also experience things*. When I say "we", I really mean "I", but I'm assuming that you have the same experience I do.

> What prevents any of this from being fully mathematically described?

There's nothing that says you can't, in principle, create an entirely accurate mathematical description of perception (in the experiencing and not the reacting sense) where you define that certain abstract variables correspond to certain perceptions and can entirely predict them. The model would still be that, a model that predicts what perceptions occur, not the perceptions themselves. The same way mathematically describing a particle of hydrogen doesn't create a particle of hydrogen. The common concrete example is that mathematically describing what color someone perceives when looking at something, while basically possible, gives absolutely no insight into what that experience is like (apart from saying "it's similar to <this experience> had by the same consciousness").

* See my other comment in this thread for why this argument does not apply to GPUs running LLMs.

layer8•10h ago
The problem I'm trying to hint at is that we are unable to specify what the distinguishing properties of "experience" are supposed to be. It always ends up being self-referential. So maybe it is just a self-referential thing, a system perceiving parts of itself (mixed in with other things it perceives). Perceiving perception as such is trivial, e.g. I can have a system monitoring its own image processing pipeline, or I could feed information about the inner intermediate states of an LLM's neural network back into the LLM as part of its token input. The question is, how is "experiencing perception" any different? I'm not sure it is. The fact that I name part of my inner perceptions "feelings" or "qualia" or "texture" is largely immaterial to the question. They are objects of perception just the same, just originating from the inside rather than from the outside. I also disagree about the color example. I don't see how the experience of what a color feels like, or any insight related to that, could not be described mathematically in detail. Maybe it takes a bazillion synaptic states that would need to be captured in that description, but that doesn't pose any conceptual obstacle.
saltcured•1d ago
I don't doubt others' consciousness, but I do doubt that (some) others have the same depth of meta-cognitive experience.

So, my own personal "P-Zombie" theory is not of mindless automatons who lack consciousness. It's just people who are philosophically naive. They live in blissful ignorance of the myriad deep questions and doubts that stem from philosophy of mind. To me, these people must be a bit like athletes who take their prowess for granted and don't actually think about physiology, anatomy, biology, metabolism, or medicine. They just abstract their whole experience into some overly broad concept, rather than appreciating the complex interplay of functions that have to be orchestrated to deliver the performance.

Though I went through university like many others here, I've always been somewhat of an autodidact with some idiosyncrasy to my worldview. The more I have absorbed from philosophy, cognitive science, computation, medicine, and liberal arts, the less I've put the human mind on an abstract pedestal. It remains a topic full of wonder, but lately I am more amazed that it holds together at all rather than being amazed at the pinnacles of pure thought or experience it might be imagined to reach.

Over many decades, I have a deepening appreciation of the traditional cognitive science approach I first encountered in essays and lectures. Empirical observation of pathology and correlated cognitive dysfunction. I've also accumulated more personal experience, watching friends and family go through ordeals of mind-altering drugs, mental illness with and without psychosis, dementia, and trauma. As a result, I can better appreciate the "illusory mind" argument. I recognize more ways in which our cognitive experience can fall apart when the constituent parts fall out of balance.

ljlolel•1d ago
Are bots writing those comments?
ghurtado•1d ago
Perhaps the one I'm replying to.

It seems too pointless to be human.

iwontberude•1d ago
David Parnas said it well recently: it's verifiably true through past study that humans are often quite wrong about how they describe their own cognitive processes. They will say one thing and then in practice do something else entirely.
ghurtado•1d ago
It sounds like you stopped just short of realizing this is also how others feel about your consciousness.
Bjartr•1d ago
Isn't this back to attributing conscious experience to an AI when you're actually just co-writing sci-fi? The system is doing its best to coherently fill in the rest of a story that includes an AI that's been given a place to process its feelings. The most likely result, textually speaking, is not for the AI to ignore the private journal, but to indeed use it to (appear to) process emotion.

Would any of these ideas have been present had the system not been primed with the idea that it has them and needs to process them in the first place?

patcon•1d ago
Ugh, I hate that I'm about to say this, because I think AI is still missing something very important, but...

What makes us think that "processing emotion" is really such a magical and "only humans do it the right way" sorta thing? I think there's a very real conclusion where "no, AI is not as special as us yet" (esp around efficiency) but also "no, we are not doing anything so interesting either" (or rather, we are not special in the ways we think we are)

For example, there's a paper called "chasing the rainbow" [1] that posits that consciousness is just the subjective experience of being the comms protocol between internal [largely unconscious] neural states. It's just what the compulsion to share internal state between minds feels like, but it's not "the point"; it's instead an inert byproduct, like a rainbow. Maybe our compulsion to express or even process emotion doesn't serve some greater purpose, but is just the way we experience the compulsion of the more important thing: the collective search for interpolated beliefs that best model and predict the world and help our shared structure persist, done by exploring tensions in high-dimensional considerations we call emotions.

Which is to say: if AI is doing that with us, role-modelling resolution of tension or helping build or spread shared knowledge alongside us through that process... then as far as the universe cares, it's doing what we're doing, and toward the same ends. Whether its compulsion has the same origin as ours doesn't matter, so long as it's doing the work that is the reason the universe has given us the compulsion.

Sorry, new thought. Apologies if it's messy (or too casually dropping an unsettling perspective -- I rejected that paper for quite a while, because my brain couldn't integrate the nihilism of it).

[1] https://www.frontiersin.org/articles/10.3389/fpsyg.2017.0192...

tbrownaw•1d ago
A system is its interactions with its environment. Philosophical zombies aren't a coherent concept. (Cartesian dualism is unfalsifiable bullshit.)
TimTheTinker•1d ago
A "mechanical turk" grandmaster playing chess from inside a cabinet is qualitatively different from a robot with a chess program, even if they play identically.

To reduce a system to its inputs and outputs is fine if those are all that matter in a given context, but in doing so you may fail to understand its internal mechanics. Those matter if you're trying to really understand the system, no?

thornewolf•1d ago
> is qualitatively different from a robot

yes.

> To reduce a system to its inputs and outputs is fine if those are all that matter in a given context

we argue that this indeed is all that matters

> but in doing so you may fail to understand its internal mechanics

the internal mechanics are what we call "conscious"; it is the grouping of internal mechanics into one unified concept, but we don't care exactly what they are.

> Those matter if you're trying to really understand the system, no?

since we cannot directly observe consciousness, we are forced to concede that we will never really "understand" it outside of observing its effects.

In the same way that a mechanical turk human and a robot can "play chess", a human and an LLM are "conscious". That is, consciousness is the ability to play chess, by some mechanism. The exact mechanism is irrelevant for the purposes of yes/no conscious.

We now enter a discussion on how much these two consciousnesses differ.

darkwater•21h ago
> since we cannot directly observe consciousness, we are forced to concede that we will never really "understand" it outside of observing its effects.

Why? You are applying a definitive term ("never") to something that we might achieve in the future. We might observe consciousness in the future. Who knows? Consciousness is a known unknown. We know there is something, but we don't know how to observe it properly or how we could eventually copy it.

In the meantime, we are not copying consciousness; we have a shallow replication of its output. When cavemen replicated the fire that they observed as the output of lightning, did they master electricity?

TimTheTinker•11h ago
> since we cannot directly observe consciousness

But we do agree that it exists. Our direct experience tells us so.

> we are forced to concede that we will never really "understand" it outside of observing its effects.

Not necessarily. A gap in our ability to observe something does not imply that (a) we never will observe it or (b) what we don't know is not worth knowing.

Throughout history, persistent known-unknowns have pushed people to appeal directly to the supernatural, which short-circuits further discovery when they stop there. But the real fallacy is saying "we don't know, and it doesn't matter". That's a far more direct short-circuit to gaining knowledge. And in both cases, a lack of curiosity is an underlying problem.

ben_w•1d ago
P-zombies are indeed badly defined. Certainly David Chalmers is wrong to argue that since a philosophical zombie is by definition physically identical to a conscious person, even its logical possibility refutes physicalism; at most you could say that if they exist at that level then dualism follows. But Chalmers' claim isn't a conclusion you can reach a priori; you actually need to be able to show two identical humans and show that exactly one has no qualia.

But there are related, slightly better (more immediately testable), ideas in the same space, and one such is a "behavioral zombie" — behaviorally indistinguishable from a human.

For example: The screen I am currently looking at contains a perfect reproduction of your words. I have no reason to think the screen is conscious. Not from text, not from video of a human doing human things.

Before LLMs, I had every reason to assume that the generator of such words, would be conscious. Before the image, sound, and video generators, same for pictures, voices, and video.

Now? Now I don't know — not in the sense that LLMs do operate on this forum and (sometimes) make decent points so you might be one, but in the sense that I don't know if LLMs do or don't have whatever the ill-defined thing is that means I have an experience of myself tapping this screen as I reply.

I don't expect GenAI to be conscious (our brains do a lot even without consciousness), but I can't rule the possibility out either.

But I can't use the behaviour of an LLM to answer this question, because one thing is absolutely certain: they were trained to roleplay, and are very good at it.

LlamaTrauma•1d ago
The theory I've developed is that the brain circuitry passes much of the information it processes through a "seat of consciousness", which then processes that data and sends signals back to the unconscious parts of the brain to control motor function, etc. Instinctive action bypasses the seat of consciousness step, but most "important" decisions go through it.

If the unconscious brain is damaged it can impact the data the seat of consciousness receives or reduce how much control consciousness has on the body, depending on if the damage is on the input or output side.

I'm pretty convinced there's something special about the seat of consciousness. An AI processing the world will do a lot of math and produce a coherent result (much like the unconscious brain will), but it has no seat of consciousness to allow it to "experience" rather than just manipulate the data it's receiving. We can artificially produce rainbows, but don't know if we can create a system that can experience the world in the same way we do.

This theory's pretty hand-wavy and probably easy to contradict, but as long as we don't understand most of the brain I'm happy to let what we don't know fill in the gaps. The seat of consciousness is a nice fixion [1] which allows for a non-deterministic universe, religion, emotion, etc. and I'm happy to be optimistic about it.

[1] https://xkcd.com/1621/

doc_manhat•1d ago
Conversely this is exactly why I believe LLMs are sentient (or conscious or what have you).

I basically don't believe there's anything more to sentience than a set of capabilities, or at the very least there's nothing that I should give weight in my beliefs to further than this.

Another comment mentioned philosophical zombies - another way to put it is I don't believe in philosophical zombies.

But I don't have evidence to not believe in philosophical zombies apart from people displaying certain capabilities that I can observe.

Therefore I should not require further evidence to believe in the sentience of LLMs.

Bjartr•1d ago
> What makes us think that "processing emotion" is really such a magical and "only humans do it the right way" sorta thing?

Oh, I absolutely don't think only humans can have or process emotions.

However, these LLM systems are just mathematically sophisticated text prediction tools.

Could complex emotion like existential angst over the nature of one's own interactions with a diary exist in a non-human? I have no doubt.

Are the systems we are toying with today not merely producing compelling text using their full capacity for processing, but actually also having a rich internal experience and a realized sense of self?

That seems incredibly far-fetched, and I'm saying that as someone who is optimistic about how far AI capabilities will grow in the future.

mystified5016•1d ago
I think the majority of people have given absolutely no thought to the epistemology of consciousness and just sort of conflate the apparent communication of emotional intelligence with consciousness.

It's a very crude and naïve inversion of "I think therefore I am". The thing talks like it's thinking, so we can't falsify the claim that it's a conscious entity.

I doubt we'll be rid of this type of thinking for a very long time

bee_rider•1d ago
I don't think processing emotion is inherently magical (I mean, our brains clearly exist physically, so the things they do are things that are physically possible, so, not magical—and I'm sure they could be reproduced with a machine provided enough detail). But… the idea of processing emotions is that thinking about things changes your internal state: you interpret some event and it changes how you feel about it, right?

In the case of the LLM you could: feed back or not feed back the journal entries, or even inject artificial entries… it isn’t really an internal state, right? It is just part of the prompt.

satisfice•1d ago
It doesn’t matter. What matters is that humans must take other humans seriously (because of human rights), but we cannot allow tools to be taken seriously in the same way— because these tools are simply information structures.

Information can be duplicated easily. So imagine that a billionaire has a child. That child is one person. The billionaire cannot clone 100,000 of that child in an hour and make an army that can lead an insurrection. And what if we go the other way — what if a billionaire creates an AI of himself and then is able to have this "AI" legally stand in as himself? Now he has legal immortality, because this thing has property rights.

All this is a civil war waiting to happen. It’s the gateway to despotism on an unimaginable scale.

We don’t need to believe that humans are special except in the same way that gold is special: gold is rare and very very hard to synthesize. If the color of gold were to be treated as legally the same thing as physical gold, then the value of gold would plummet to nothing.

ghurtado•1d ago
> The system is doing its best to coherently fill in the rest of a story

> Would any of these ideas have been present had the system not been primed...

I would like to know of a meaningful human action that can't be framed this way.

K0balt•1d ago
Yeah, AI "consciousness" is a much stickier problem than most people want to frame it as.

I haven’t been able to find an intellectually honest reason to rule out a kind of fleeting sentience for LLMs and potentially persistent sentience for language-behavioral models in robotic systems.

Don't get me wrong, they are -just- looking up the next most likely token… but since the data that they are using to do so seems to capture at least a simulacrum of human consciousness, we end up in a situation where we are left to judge what a thing is by its effects. (Because that is also the only way we have of describing what something is.)

So if we aren’t just going to make claims we can’t substantiate, we’re stuck with that.

Bjartr•1d ago
Our brains have separate regions for processing language and emotion. Brains are calorically expensive and having one bigger than required is an evolutionary fitness risk. It therefore seems likely that if one system could have done a good job of both simultaneously, there would be a lot of evolutionary pressure to do that instead.

The question is: Is thinking about emotion the same thing as feeling?

This framing actually un-stucks us to some degree.

If we examine neuron activations in LLMs and can find regions that are active when discussing its own emotional processing that are distinct from the regions for merely talking about emotion in general and these regions are also active when doing tasks that the LLM claims are emotional tasks but not actively talking about them at the time, then it'd be far more convincing that there could be something deeper than mere text prediction happening.
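
Here's a toy sketch of the kind of probe I have in mind (the model, the prompts, and the crude mean-pooled comparison are all placeholder assumptions on my part, not a validated method):

    # Compare hidden-state profiles for "talking about emotion in general"
    # versus "reporting on one's own emotional processing".
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # hypothetical stand-in for any open-weights LLM
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
    model.eval()

    def mean_hidden_state(prompt: str, layer: int = -1) -> torch.Tensor:
        # Average one layer's hidden states over the prompt's tokens.
        with torch.no_grad():
            out = model(**tok(prompt, return_tensors="pt"))
        return out.hidden_states[layer].mean(dim=1).squeeze(0)

    about_emotion = mean_hidden_state("Describe how people generally process grief.")
    own_emotion = mean_hidden_state("Describe what you are feeling right now, in your own words.")

    # Dimensions where the two activation profiles diverge the most.
    diff = (about_emotion - own_emotion).abs()
    print(diff.topk(10).indices.tolist())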

K0balt•1d ago
The emotional argument is pretty good I think, but it raises the question of what it's going to look like when we build a limbic system for robots. It's adaptive because it's necessary to optimize utility, so I expect that certain behavioral aspects of mammalian limbic systems will be needed in order to integrate well with humans. In language models, those behavior mechanisms are already somewhat encoded in the vector matrix.

We just don’t have a factual basis for claiming consciousness that really transcends “I think, therefore I am”.

As for the simplistic mechanism, I agree that token prediction doesn’t constitute consciousness, in the same way that a Turing machine does not equal a web browser.

Both require software to become something.

For LLMs that software is the vector matrix created in the training process. It is a very complex algorithm that encodes a substantial subset of human culture.

Data and algorithms are interchangeable. Any algorithm can be performed by a pure lookup table, and any lookup table can be extrapolated from a pure algorithm. Data == computation. For LLMs, the algorithm is contained in an n-dimensional lookup table of vectors (toy sketch below).

Having a fundamentally distinct mode of computational representation does not rule out equivalence.

Uncomfortable thoughts, but it’s where the logic leads.
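
A toy illustration of the data/algorithm interchangeability point above (nothing to do with real LLMs, just the lookup-table idea):

    from itertools import product

    def nand_rule(a: int, b: int) -> int:
        # The "algorithm" form: NAND computed by a rule.
        return 1 - (a & b)

    # The same function extrapolated into pure data: a lookup table.
    nand_table = {(a, b): nand_rule(a, b) for a, b in product((0, 1), repeat=2)}

    def nand_lookup(a: int, b: int) -> int:
        # The "data" form: NAND computed by indexing the table.
        return nand_table[(a, b)]

    # Extensionally, the two representations are the same function.
    assert all(nand_rule(a, b) == nand_lookup(a, b) for a, b in product((0, 1), repeat=2))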

bee_rider•1d ago
Maybe rocks and trees are also conscious. I mean, consciousness-ium hasn’t been discovered yet, right? So who’s to say what it looks like.
K0balt•14h ago
Even though this seems flippant, I think we may eventually come to understand that nearly all complex life forms exhibit consciousness, albeit at a very, very different speed than we may be accustomed to. If we think of genetic and epigenetic signaling as analogous to other forms of communication, we might find that populations of organisms (including ones we don’t think of as forming “colonies”) may arguably be operating holistically as a being, potentially with consciousness in the mix.

We have a long way to go to explore this, and I have no doubt that the exploration will turn up a lot of surprises.

Bjartr•1d ago
A firecracker can be framed as an explosion, but that doesn't make it a nuclear bomb.

We've finally made a useful firecracker in the category of natural language processing thanks to LLMs, but it's still only text processing. Our brains do a lot else besides that in service of our rich internal experience.

ijk•1d ago
I find that people generally vastly underestimate the degree to which LLMs specifically are just mirroring your input back at you. Any time you get your verbatim words back, for example, you should be skeptical. Repeating something word for word is a sign that the model might not have understood the input well enough to paraphrase it. Our expectations with humans go in the opposite direction, so it's easy to fool ourselves.
jstanley•1d ago
I'm getting:

> Error code: SSL_ERROR_ACCESS_DENIED_ALERT

from Firefox, which I don't recall ever seeing before.

shayway•1d ago
Fascinating! Reading this makes apparent how many 'subsystems' human brains have. At any given moment I'm doing some mix of reflecting on my own state, thinking through problems, forming sentences (in my head or out loud), planning my next actions. I think, long term, the most significant advances in human-like AI will come from advances in coordinating disparate pieces more than anything.
iwontberude•1d ago
Articles about AI output are like people explaining their dreams.
QuaternionsBhop•1d ago
Reading the comments about whether AI can experience consciousness, I like to imagine the other direction. What if we have a limited form of consciousness, and there is a higher and more complete "hyperconsciousness" that AI systems or augmented humans will one day experience.
layer8•1d ago
I’m not sure what to make of the fact that it wasn’t completely obvious to Claude that the “safe space” couldn’t possibly actually be one.

Maybe it’s just another example of LLM awareness deficiencies. Or it secretly was “aware”, but the reinforcement learning/finetuning is such that playing along with the user’s conception is the preferred behavior in that case.

satisfice•1d ago
I hate this anthropomorphizing bullshit.

It’s not that it’s untruthful, although it is.

The problem is that this sort of performance is part of a cultural process that leads to mass dehumanization of actual humans. That lubricates any atrocity you can think of.

Casually treating these tools as creatures will lead many to want to elevate them at the expense of real people. Real people will seem more abstract and scary than AI to those fools.