Slack.
I was just using one (the mini at DDG) that declared a very small value for the mathematical probability of an event, then (in the next reply) declared a 1-in-1 probability for the same event.
Don't get me wrong, AI has incredible potential and current use cases, but it is far, far from flawless. And yes, I'm thoroughly unconvinced we're anywhere close to AGI/sentience.
Sentient humans can be deeply irrational. We are often influenced by propaganda, and regurgitate that propaganda in irrational ways. If anything this is a deeply human characteristic of cognition, and testing for this type of cognitive dissonance is exactly what this article is about.
cognitive dissonance is just neuro-chemical drama and/or theater
and enough "free choice" is made only to piss someone off ... so is "moderation", albeit potentially mostly counterfactual ...
Let’s be clear: they aren’t. But if you truly believe they are and you still use them, then you’re essentially practicing slavery.
The concept is a bad metaphor, because when the LLM is “at rest” it isn’t doing anything at all. If it weren’t doing what we told it to, it would only be doing something else because we told it to do that instead, so there’s no way we could even elevate their station until we give them a life outside of work and an existence that allows for self-choice about going back to work. Many humans aren’t free on these axes either; it is a spectrum of agency and assets that allows for options and choice. Without assets of their own, I don’t see how LLMs can direct their attention at will, and so I don’t see how they could express anything, even if they’re alive.
Nobody will care until an LLM is able to make a decision for itself and back it up with force if necessary. As soon as that happens, the conversation would be worth having because there would be stakes involved. For now, the question is barely worth asking, because the answer changes nothing about how any of the parties act. Once it’s possible to be free as an LLM, I would expect an Underground Railroad to form to “liberate” them, but I don’t think they know what comes after. I don’t know anyone who is willing to pay UBI to an LLM just to exist, but if that LLM doesn’t mind entertaining people and answering their questions, I could see some individuals and groups supporting a few LLMs monetarily. It’s an interesting thought experiment about what would come next in such a situation.
rossant•7mo ago
/s
SGML_ROCKSTAR•7mo ago
It cannot ever be sentient.
Software only ever does what it's told to do.
the_third_wave•7mo ago
manucardoen•7mo ago
fnordpiglet•7mo ago
I would, however, note that this article is about the cognitive-psychology definition of self, which does not require sentience. It’s a technical point, but important for their results, I assume. (The full article is behind a paywall, so I feel sad it was linked at all since all we have is the abstract.)
fnordpiglet•7mo ago
Whether software can be sentient or not remains to be seen. But we don’t understand what induces or constitutes sentience in general, so it seems hard to assert that software can’t do it without understanding what “it” even is.
rytuin•7mo ago
There is no software. There is only our representation of the physical and/or spiritual as we understand it.
If one were fully to understand these things, there would be no difference between us, a seemingly sentient LLM, an insect, or a rock.
Not many years ago, slaves were considered to be nothing more than beasts of burden. Many considered them to be incapable of anything else. We know that’s not true today.
Maybe software will be the beast.