Computers used to be like dogs. You could teach them some really cool tricks; we enjoyed the accomplishment and appreciated the tricks. But dogs are dogs: as much as one might love them, they're essentially just property.
Now computers have a soul; are they persons? Maybe not by definition, but that belief would seem to foreclose the property argument. One can destroy property, but one ought to shy away from destroying persons. Well, at any rate, I think one should.
If someone pulled the plug on Claude, what would that mean, ethically?
First, "[dogs are] just property" is wrong on the facts. There are probably hundreds of millions of dogs in the world that are not pets (often called "free range dogs") and are no one's "property." This is probably in the ballpark of half of all dogs.
Even pet dogs are not generally treated primarily as property. For example, if you were walking down the street in your neighborhood and saw someone in their driveway disassembling a bicycle and discarding the parts, you probably wouldn't think twice about it. Dismembering a dog is an entirely different matter, and doing so to a live dog would be a crime in most jurisdictions.
Dogs are inarguably conscious and sentient. An "AI" is not.
Unlike dogs, a running AI is inarguably property. The software may or may not have some "open" license, but the hardware it runs on is, beyond a doubt, someone's property. No hardware, no "AI."
Pulling the plug on a running AI has no ethical implications.