You may [claim to] be of sound mind, and not vulnerable to suggestion. That doesn't mean everyone else in the world is.
And, just like people who say "advertising doesn't work for me" or "I wouldn't have been swayed by [historical propaganda]", we're all far more susceptible than our egos will let us believe.
This is an incredibly manipulative propaganda piece that seeks to blame companies for users' mental health issues. We don't blame any other form of media that pretends to interact with the user for consumers' suicides.
Have you read the transcripts of any of these chats? It's horrifying.
Most LLMs reflect the user's attitudes and frequently hallucinate. Everybody knows this. If people misuse LLMs and treat them as a source of truth and rationality, that is not the fault of the providers.
Do you expect a mentally troubled 13 year old to see past the marketing and understand how these things actually work?
And yes, content that encourages suicide is largely discouraged/shunned, be it in film, forums, or books.
Wrongly or rightly, people frequently blame social media for tangentially associated outcomes. Including suicide.
The "chatbot" format is a cognitive hazard, and places users in a funhouse mirror maze reflecting back all sorts of mental and conceptual distortions.
I swear, it's like literacy's been made illegal or something. (For sake of explicitness, I am now mocking your inability to decipher what I said in the previous comment despite it being very straightforward.)
How anti-intellectual of you.
The nuance here is that LLMs seem to exacerbate depression. In many cases, it's months of interactions before the person succumbs to the despair, but the current generation of chatbots' sycophancy tends to affirm their negative self-talk, rather than trying to draw them away from it.
Read that again. Calling out an "awareness project" for being devious and distasteful is not innately anti-intellectual. Just because something tries to raise awareness doesn't mean it is factual, or even attempting to be.
> The nuance here is that LLMs seem to exacerbate depression. In many cases, it's months of interactions before the person succumbs to the despair, but the current generation of chatbots' sycophancy tends to affirm their negative self-talk, rather than trying to draw them out.
Mirroring the user's most prominent attitude is what it's designed to do. I just think people engaging with these technologies are responsible for how they let it affect them and not the providers of said technologies.
If you find ~30 reported deaths among 500 million users problematic to begin with, you are simply out of touch with reality. If you then put effort behind promoting this as a problem, that's not an issue of "lack of context and nuance" (what's with the quotes? Who are you quoting?). I called it what it is to me: Distasteful and devious.
For honesty's sake: yes, I am biased. I believe the majority of these issues stem from parenting, that bad parenting is usually the fault of outside factors, and that solving it is a collective effort. As for cases involving mental illness, I don't think there is enough evidence that LLMs have made them worse.
Of course I'm not trying to suggest that these deaths are not tragedies, but the help these tools provide is so much greater.
Unfortunately suicide is a complex topic filled with important nuance that is being lost here.
Wanting to find a "reason" someone takes their life is a natural response, but it's often reductionist and misses the forest for the trees.
brian_peiris•1h ago
courseofaction•44m ago
"LLMDeathCount.com" willfully misrepresents the article and underlying issue. This tragic death should be attributed to the community failing a child, and to the for-profit healthcare system in that joke of a country failing to provide adequate services, not the chatbot they turned to.
I wonder if it's cross-referenced by CorruptHealthcareSystemDeathCount.com