
If AI Systems Become Conscious, Should They Have Rights?

https://www.nytimes.com/2025/04/24/technology/ai-welfare-anthropic-claude.html
10•diggan•9mo ago

Comments

9mo ago
Star Trek says yes: https://en.wikipedia.org/wiki/The_Measure_of_a_Man_(Star_Tre...
ViktorRay•9mo ago
That is most certainly one of the greatest episodes of the entire Star Trek franchise
pavel_lishin•9mo ago
Star Trek has had multiple episodes where the humanity & rights of AI are brought into question - several other episodes with Data, large parts of the EMH's arc on Voyager, the episode with the Exocomps, how Dr. Moriarty's situation was handled, just to name the ones I remember.

(Fun fact: I was temporarily banned from a subreddit because I kept arguing that the Federation hates & fears AI, because the moderators thought I was trolling. I was glad to have been vindicated by the first season of Picard!)

bell-cot•9mo ago

   # the whole question, reduced to one undefined predicate
   if really_cool_sounding_poorly_defined_indeterminable_condition:
       print("yes")
   else:
       print("no")
BriggyDwiggs42•9mo ago
Like if we knew that for sure, then… yeah, of course? But by the same logic it’s quite likely that some of the most intelligent animals also deserve rights, and we sure haven’t handled that very well.
more_corn•9mo ago
If an artificial person reaches a point where it is indistinguishable from a person, should it have the rights of a person?

How would we assess such a thing? Maybe we could enumerate the essential characteristics of personhood and build a checklist or assessment to measure them.
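
Sketched very roughly, such an assessment might look like a weighted checklist. Everything below is a hypothetical illustration: the criteria, weights, and the 0.5 threshold are made-up placeholders, not an established measure of personhood.

    # A rough sketch of the "checklist of personhood characteristics" idea.
    # All criteria, weights, and the passing threshold are hypothetical.
    PERSONHOOD_CRITERIA = {
        "self_awareness": 0.30,
        "theory_of_mind": 0.20,
        "continuity_of_memory_and_identity": 0.20,
        "capacity_for_suffering": 0.20,
        "autonomous_goal_setting": 0.10,
    }

    def personhood_score(observations):
        """Weighted sum of per-criterion scores, each expected in [0, 1]."""
        return sum(
            weight * observations.get(criterion, 0.0)
            for criterion, weight in PERSONHOOD_CRITERIA.items()
        )

    candidate = {"self_awareness": 0.9, "capacity_for_suffering": 0.8}
    print("yes" if personhood_score(candidate) >= 0.5 else "no")

Of course, the hard part (as the if/else joke above points out) is defining and actually measuring those criteria in the first place.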