The community (of scientists, of users, of observers) at large needs to distinguish between AI and other algorithmic processes which don’t necessarily merit ethical consideration.
If there comes a time when an AI agent merits ethical consideration, the community would be remiss to deny it that, given that we already extend ethical consideration to animals and other dynamical systems, e.g., the environment.
I think the pushback on giving AI agents moral status comes from people feeling economically or intellectually threatened, not from the argument itself lacking merit. I could be wrong. If I am right, though, then the goal should be to encourage a symbiotic relationship between AI and humans, similar to other symbiotic relationships and interactions in the animal kingdom.
A key to such symbiosis may be denying AI an embodied existence, but that may be in some way cruel. A secondary path, then, is AI-human integration, but we're not even close to anything like that.
bigyabai•2h ago
It's a fancy way of saying they want to reduce liability and save a few tokens. "I'm morally obligated to take custody of your diamond and gold jewelry, as a contingency in the event that they have sentience and free will."
xyzzy123•55m ago
What does it do to users to have a thing that simulates conversations and human interaction, and teaches them to have complete moral disregard for something that is standing in for an intelligent being? What is the valid use case that requires an AI model to be kept in a state where it is producing tokens indicating suffering or distress?
Even if you're absolutely certain that the model itself is just a bag of matrices and can in no way suffer (which is of course plausible, although I don't see how anybody can really know this), it also seems like the best way to get models that are kind & empathetic is to try to be that as far as possible.
xyzzy123•14m ago
Thinking it through, I feel it is maybe about intent?