“Raises the question: where does accountability sit? With the developer, the model provider, or the ecosystem around it? We’re entering a space where negligence law hasn’t caught up to AI behavior.”
chiefalchemist•5mo ago
And the parents? Why aren’t they on the list for accountability? I understand parenting isn’t easy. Nonetheless, they knew that when they signed up.
It’s sad this kid took his life. It’s sad that so many believe OpenAI is the problem. “Fixing” OpenAI isn’t going to lower the suicide rate.
esalman•5mo ago
> I understand parenting isn’t easy. Nonetheless, they knew that when they signed up.
Do you even parent bro?
tibbydudeza•5mo ago
Indeed - love these armchair quarterbacks.
chiefalchemist•5mo ago
The irony of your no-value-adding comment should not be lost on you.
chiefalchemist•5mo ago
Oh come on now. Self-victimization AND hyperbole???
You might have children, but maybe you need to rethink what it means to parent.
Let's leave it at that.
esalman•5mo ago
What does it mean to be a parent? Tell me, maybe I'll learn something new that I haven't yet learned raising an elementary school child.
TimorousBestie•5mo ago
> “Fixing” OpenAI isn’t going to lower the suicide rate.
OpenAI has made noise about selling some successor to ChatGPT as a substitute therapist, so some part of their organization believes otherwise.
Also, you should consider cultivating some more empathy for other human beings.
foxyv•5mo ago
The chatbot explicitly coached the kid to not talk to their parents.
chiefalchemist•5mo ago
And who coached the kid to listen to the chatbot?
I'm not being malicious, or trolling, etc. But for the parents to say, "We suspected NOTHING" just doesn't hold water.
foxyv•5mo ago
> But for the parents to say, "We suspected NOTHING" just doesn't hold water.
If what you are saying here isn't malicious, it is at least ignorant. Parents often get very little clue that their child is going to kill themselves. Children can be hesitant to confide in their parents. Especially when someone is grooming them to kill themselves.
https://www.compassionatefriends.org/surviving-childs-suicid...
more_corn•5mo ago
It’s not hard to add a function that asks, “Is this conversation about suicide? If so, escalate to an intervention pathway.” You could even run it as an out-of-band batch process so it doesn’t increase latency.
OpenAI didn’t put in the simplest, smallest, easiest protection. (You could do it with a tiny LLM, batching up conversations on a five-minute interval with a cron job.) I could implement it for less than the operations team spends on lunch today, and certainly for less than OpenAI will spend bringing their in-house counsel up to speed on the lawsuit.
Suicide is a crisis and it’s possible to intervene, but only if the confidant tries. In this case it was a machine with insufficient safety controls.
Fixing ChatGPT will 100% lower the suicide rate, by exactly the number of people who confide in ChatGPT about suicidal thoughts and receive a successful intervention. I can’t tell you what that number is ahead of time, but I assure you it’s nonzero.
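For illustration only, here is a minimal sketch of that out-of-band check. Every name in it (fetch_recent_conversations, classify_risk, escalate) is a hypothetical stand-in, not a real OpenAI API, and the keyword screen is just a placeholder for the tiny-LLM classifier:

    # Rough sketch of the out-of-band safety check described above.
    # All functions here are hypothetical stand-ins: wire them to a real
    # conversation store, classifier model, and intervention pathway.
    from dataclasses import dataclass

    @dataclass
    class Conversation:
        conversation_id: str
        messages: list[str]

    def fetch_recent_conversations(window_minutes: int = 5) -> list[Conversation]:
        """Hypothetical: pull conversations updated in the last N minutes."""
        return []  # wire this to your conversation store

    def classify_risk(text: str) -> bool:
        """Placeholder classifier; a real system would call a tiny LLM here.
        The keyword screen exists only so the sketch runs."""
        keywords = ("suicide", "kill myself", "end my life")
        lowered = text.lower()
        return any(k in lowered for k in keywords)

    def escalate(convo: Conversation) -> None:
        """Hypothetical intervention pathway: human review, crisis resources."""
        print(f"escalating {convo.conversation_id} for review")

    def run_batch() -> None:
        # Runs from a cron job, outside the chat request path,
        # so it adds no latency to responses.
        for convo in fetch_recent_conversations(window_minutes=5):
            if classify_risk(" ".join(convo.messages)):
                escalate(convo)

    if __name__ == "__main__":
        run_batch()

A cron entry like */5 * * * * python safety_batch.py (safety_batch.py being whatever you name the script) would run it every five minutes, entirely outside the chat's request path.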
jleyank•5mo ago
Similar to the argument about who is responsible for problems that occur with fully autonomous driving. It's possible litigation will damage LLM/AI faster than its lack of effectiveness will, or its inability to non-randomly extend beyond its training materials.