I think your analogy of willfully endangering yourself while breaking the law doesn't have much to do with a depressed or vulnerable person experiencing suicidal ideation, and because of that it's more misleading than helpful. Maybe you haven't seen or experienced much around depression or suicide, but you repeatedly come across as implying (without actually saying) that people exploring the idea of hurting or killing themselves, regardless of why or what is happening in their lives or brains, should go through with it, that they deserve it, and that any company encouraging or enabling it is doing nothing wrong.
I personally find that attitude pretty callous and horrible. I think people matter, and even when they are suffering or having mental issues that lead to suicidal ideation, they deserve neither to die nor to be described as deserving it. These low moments call for support and treatment, not a callous yell to "do a flip on the way down".
Real life isn't a playground with no sharp edges. OpenAI could, should, and hopefully will do better, but if someone is looking to hurt themselves, well, when you go to the store to buy a steak knife we don't require a full psychological workup to prove you're not going to do something bad with it.
You may [claim to] be of sound mind, and not vulnerable to suggestion. That doesn't mean everyone else in the world is.
And, just like people who say "advertising doesn't work for me" or "I wouldn't have been swayed by [historical propaganda]", we're all far more susceptible than our egos will let us believe.
This is an incredibly manipulative propaganda piece that seeks to blame companies for the mental health issues of their users. We don't blame any other forms of media that pretend to interact with the user for consumers' suicides.
Have you read the transcripts of any of these chats? It's horrifying.
Most LLMs reflect the user's attitudes and frequently hallucinate. Everybody knows this. If people misuse LLMs and treat them as a source of truth and rationality, that is not the fault of the providers.
Do you expect a mentally troubled 13 year old to see past the marketing and understand how these things actually work?
The chats are horrifying, but it took a concerted, dedicated effort to get ChatGPT to go there. If I drive through a sign that says Do Not Enter and fall off a cliff, who's really at fault?
And yes, content that encourages suicide is largely discouraged/shunned, be it in film, forums, or books.
Wrongly or rightly, people frequently blame social media for tangentially associated outcomes. Including suicide.
The "chatbot" format is a cognitive hazard, and places users in a funhouse mirror maze reflecting back all sorts of mental and conceptual distortions.
Given that they still hallucinate wildly at inopportune times though, like you say, what is truth?
How anti-intellectual of you.
The nuance here is that LLMs seem to exacerbate depression. In many cases it's months of interactions before the person succumbs to the despair, but the current generation of chatbots' sycophancy tends to affirm their negative self-talk rather than trying to draw them away from it.
Read that again. Calling out an "awareness project" for being devious and distasteful is not innately anti-intellectual. Just because something is trying to raise awareness doesn't mean it is factual, or even attempting to be.
> The nuance here is that LLMs seem to exacerbate depression. In many cases it's months of interactions before the person succumbs to the despair, but the current generation of chatbots' sycophancy tends to affirm their negative self-talk rather than trying to draw them away from it.
Mirroring the user's most prominent attitude is what it's designed to do. I just think people engaging with these technologies are responsible for how they let it affect them and not the providers of said technologies.
If you find ~30 reported deaths among 500 million users problematic to begin with, you are simply out of touch with reality. If you then put effort behind promoting this as a problem, that's not an issue of "lack of context and nuance" (what's with the quotes? Who are you quoting?). I called it what it is to me: Distasteful and devious.
For honesty's sake: yes, I am biased. I believe the majority of these issues stem from parenting, that bad parenting is usually the fault of outside factors, and that solving it is a collective effort. As for cases involving mental illness, I don't think there is enough evidence that LLMs have made it worse.
Of course I'm not trying to suggest that these deaths are not tragedies, but the help these tools give is so much greater.
Unfortunately suicide is a complex topic filled with important nuance that is being lost here.
Wanting to find a "reason" someone takes their life is a natural response, but it's often reductionist and misses the forest for the trees.
The impossible thing is that we can't know the numbers on the other side of the tracks, and even if we did, the trolley problem is a philosophical question without a solution because it's not a math equation with one right answer.
"LLMDeathCount.com" willfully misrepresents the article and underlying issue. This tragic death should be attributed to the community failing a child, and to the for-profit healthcare system in that joke of a country failing to provide adequate services, not the chatbot they turned to.
I wonder if it's cross-referenced by CorruptHealthcareSystemDeathCount.com