That said, someone diving too far into the "dog parent" vibe is personally annoying to me, though I find it more comprehensible than loving `sycophant.sh`.
If you're confused about this, go seek help now.
It's not psychosis, and it's not healthy to blur the line between a pet and a child either, but at least a pet is a living thing that can know you and have a relationship with you.
But if someone's calling their laptop their baby and carrying it around in a baby carriage, I'd be comfortable calling that psychosis.
My pet theory is one of ontological consciousness pareidolia. Just as face pareidolia is a heightened sensitivity to seeing faces in inanimate objects, we observe consciousness through behavior, including language, with varying sensitivity. While our face-detection circuit might be triggered by knots on a tree, we have other inputs that negate it, so we ultimately conclude it is not in fact a face.
The same principle applies to consciousness. The consciousness circuit is triggered, but for some people the negating input can't overcome it, and they conclude that consciousness really is in there.
I've observed a number of negating reasons, like a disbelief in substrate independence and knowledge of failure modes, but I'm curious what an exhaustive list would look like. Does your consciousness circuit get triggered? I know mine does. What beliefs override it, preventing you from concluding AI is conscious?
When previous-generation LLMs spit out absurdist slop, I think it was much easier for people to avoid the fluency trap.
In the short term, but over time the patterns get more obvious and the illusion breaks down. Generative AI is incredible at first impressions.
There are no tests for consciousness. Consciousness resides fully in a first-person perspective and can't be inspected or detected from the outside (at least not in any way currently known to science or philosophy). What people mean when they say that is "my brain is interpreting this thing as conscious, so I am accepting that".
Maybe LLMs are conscious in some abstract way we don't understand. I doubt it, but there's no way to tell. And an AI claiming that it IS or is NOT conscious is not evidence of either conclusion.
If there is some level of consciousness, it's in a weird way that only becomes instantiated in the brief period while the model is predicting tokens, and would be highly different from human consciousness.
Makes sense, but at the same time: subjectively, an LLM is always predicting tokens. Otherwise it's just frozen.
(Some might argue that's basically the human experience anyway, in the Buddhist non-self perspective: you're constantly changing and being reified in each moment; it's not actually continuous.)
My mental image, though, is that LLMs do have an internal state that is longer-lived than token prediction. The prompt determines it entirely, but adding tokens to the prompt only modifies it slightly, so in fact it's a continuously evolving "mental state" influenced by a feedback loop that (unfortunately) has to pass through language.
It will have no conception or memory of the alternate line of discussion from the previous turn. It only "knows" what is contained in the current combination of training + system prompt + context.
If you change the LLM's persona from "Sam" to "Alex", then in the LLM's conception of the world it's always been "Alex". It will have no memory of ever being "Sam".
etc, etc.
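The properties above can be sketched in a toy way. This is an illustrative sketch, not a real LLM API: the hypothetical `mental_state` function stands in for the hidden activations a transformer computes, and the point is only that the state is a pure function of the current context, so a rewritten persona leaves no trace of the old one.

```python
def mental_state(context):
    # Deterministic: the same context always yields the same state.
    # A real transformer's "state" (activations / KV cache) is likewise
    # fully determined by training weights + the tokens in the window.
    return tuple(context)

history = ["system: your name is Sam", "user: hi", "assistant: hello"]
state_a = mental_state(history)

# Appending a token only extends the state incrementally; the prefix is unchanged.
state_b = mental_state(history + ["user: how are you?"])
assert state_b[:len(state_a)] == state_a

# Rewrite the persona in place: the resulting state carries no trace of "Sam".
rewritten = [line.replace("Sam", "Alex") for line in history]
state_c = mental_state(rewritten)
assert not any("Sam" in line for line in state_c)
```

The toy model also shows why there's no "memory of ever being Sam": there is no side channel outside the context for such a memory to live in.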
Basically, the reporting machinery is compromised in the same way that with the Müller-Lyer illusion you can "know" the lines are the same length but not perceive them as such.
- I know I am conscious.
- It's likely that as a random human, I am in the belly of the bell curve.
- It's likely that you're also a random human, and share my characteristics.
- Then, it's very likely that you know you're conscious too.
I can't be absolutely certain, but I'd bet a million dollars on you being conscious vs an automaton.
Secondarily, I feel it's difficult to make inferences about consciousness, though I understand why you would, given that the only reality you can access is predicated on your individual consciousness.
There are countless configurations of reality that are plausible where you're the only "conscious" being but it looks identical to how it looks now.
But saying that it's "female" is just nonsensical, it's a category error. Being female or male is a fact about the biological world. The LLM is objectively non-biological, so it's nonsense to label it with a sex.
(No, this comment isn't about gender, nor being feminine/masculine. We have different words to convey those concepts. I'm not trying to make a political or social statement here.)
The chart in [1] is a good visualisation of that, if you wish to learn more.
[1]: https://www.scientificamerican.com/article/beyond-xx-and-xy-...
It is not mathematical, not a proof, and generally doesn't make any sense. Many of these sentences are grammatically correct but completely devoid of meaning.
Has there already been any paper published on the correlation between language preference and mental illness?
We think of ourselves as conscious because it is our lived experience— but we are always wrong to some degree. My mother has dementia and cannot be made aware of her situation, except momentarily.
We think of other humans as conscious not as the outcome of any test, but rather because we each share with other humans a common origin which suggests common mechanisms of experience.
Treating other humans as equivalent to ourselves is a heuristic for maintaining social order— not an epistemological achievement.
I think this is something similar.
yomismoaqui•1h ago
So the reasonable man uses Ext4, I guess.
webdevver•1h ago
It's quite funny to me that Ext4 very much mirrors him in that regard. It's underpinning damn well everything, but you'd never know about it because it works so well.
bob1029•1h ago
Designing software for a printer means being a very aggressive user of a printer. There's no way to unit test this stuff. You just have to print the damn thing and then inspect the physical artifact.
cperciva•1h ago
(For anyone not familiar with the text, Goodstein's treatment of the subject opens with "Ludwig Boltzmann, who spent much of his life studying statistical mechanics, died in 1906, by his own hand. Paul Ehrenfest, carrying on the work, died similarly in 1933. Now it is our turn to study statistical mechanics.")