(I'm more of the Dennett persuasion. Let's NOT discuss the empirical facts here, because they add up funny and I don't like it)
But we absolutely believe we are conscious.
Perhaps it's a useful idea.
Even our decision making: as I understand it, functional MRI studies show that our subjective account of how and why we made simple decisions is wildly inaccurate.
Obviously, free will and the feeling that you control your actions are hugely important to us. But in a physical sense, free will does not exist.
throw98709•1h ago
Nobody sane believes the current LLMs are conscious, ffs
strogonoff•56m ago
The reason is that there is no working definition of “consciousness” or “sentience” that does not imply “human-like”, which in turn implies the ability to feel and suffer; and what we do with LLMs would generally be considered something that would make beings with human-like sentience and consciousness suffer.
[0] Some definitely do, though; or at least they behave with LLMs in a way one would behave with a conscious being.
gavinray•38m ago
If you follow the line of thinking that consciousness is an emergent phenomenon, arising out of complexity, it doesn't seem far-fetched to me to believe that someday in the future, a silicon-based computing machine (rather than a biological, carbon-based computing machine) might be "conscious" -- whatever that means.
Kim_Bruning•11m ago
From an objective, empirical, scientific point of view, consciousness and feelings are not especially well defined.
But looking at the diverse tests that ARE available, modern LLMs seem to get interesting scores on a number of them.
The counter-argument being, of course, that no one designed those tests with LLMs in mind. But that's not an objection you should raise post hoc. Define better experiments instead!
(The ethical issues you mention should probably be (re-)evaluated once systems have continuous memory/context)
sh3rl0ck•54m ago