What counts as our responsibility and what doesn't rests on made-up morals, which are themselves grounded in evolutionary benefits and dangers combined with random historical developments.
I don't think pain can be felt without the ability to have emotions, and no emotions are possible without a personality (that "I" feeling). Until AIs can feel real emotions and have a personality, they won't ever be able to feel pain.
In very cold weather, my car tells me the tires need air. The warning, like the oil-change reminder, is bright yellow and flashes when I start the car. Is my car in pain? Is it unethical to drive my car when it is cold, since I'm hurting it? Would the answer change if, in addition to a warning light, a voice were to say, "Your tires are low and it hurts me"?
In my opinion, we have no ethical obligation to any non-living system. I think we certainly have a stronger ethical duty of care with respect to the shared resources we consume than we do to any AI system powered by those resources.
[1] https://en.wikipedia.org/wiki/The_Mind%27s_I
Update: Found a PDF: http://people.whitman.edu/~herbrawt/classes/339/Mark.pdf
Edit: Reading it again now, I think the story stands up well, aside from its obvious 1970s-isms. If the story has any philosophical value today, it's that pretty soon we will actually build machines that behave like this (if it hasn't been done already). And some of their owners will definitely treat them as sentient, even though they obviously are not. And at some point, as the machines get better and better at this mimicry, there will be people demanding that laws be passed to protect them.
I think the question relates to various ideas of mental distress. You might get better answers asking whether AI feels rejection, loss, embarrassment, etc. Personally, I still think the answer is no.