The instructions are only a problem in the wrong context.
https://bsky.app/profile/sababausa.bsky.social/post/3lxcwwuk...
> "Please don't leave the noose out," ChatGPT responded. "Let's make this space the first place where someone actually sees you."
This isn't technical advice and empathy; it's influencing the course of Adam's decisions, arguing for one outcome over another.
There have been such cases in the past, where coercion into suicide was prosecuted.
But yeah, my point was that it basically told the kid how to jailbreak itself.
Some companies actually have a lot to lose if these things go off the rails and can't just 'move fast and break things' when those things are their customers, or the trust their customers have in them.
My hope is that OpenAI actually does have a lot to lose; my fear is that the hype and the sheer amount of capital behind them will make them immune from real repercussions.
For whatever it's worth, I also hope that OpenAI can take a fall and set an example for any other business that follows their model. But I also know that's not how justice works here in America. When there's money to be made, the US federal government will happily ignore the abuses to prop up American service industries.
Why don't we celebrate Apple for having actual human values? I have a deep problem with many humans who just don't get it.
If you ever thought Apple was prioritizing human values over moneymaking, you were completely duped by their marketing. There is no principle, not even human life, that Apple values above moneymaking.
"Tim Cook, was asked at the annual shareholder meeting by the NCPPR, the conservative finance group, to disclose the costs of Apple’s energy sustainability programs, and make a commitment to doing only those things that were profitable.
Mr. Cook replied --with an uncharacteristic display of emotion--that a return on investment (ROI) was not the primary consideration on such issues. "When we work on making our devices accessible by the blind," he said, "I don't consider the bloody ROI." It was the same thing for environmental issues, worker safety, and other areas that don’t have an immediate profit. The company does "a lot of things for reasons besides profit motive. We want to leave the world better than we found it.""
[0] https://www.forbes.com/sites/stevedenning/2014/03/07/why-tim...
Whatever the case is, the raster approach sure isn't winning Apple and AMD any extra market share. Barring any "spherical cow" scenarios, Nvidia won.
Idk maybe it’s legit if your only view of the world is through capital and, like, financial narratives. But it’s not how Apple has ever worked, and very very few consumer companies would attempt that kind of switch let alone make the switch successfully.
I think people would use LLMs with more detachment if they didn’t believe there was something like a person in them, but they would still become reliant on them, regardless, like people did on calculators for math.
It was easy to trick ourselves and others into powerful marketing because it felt so good to have something reliably pass the Turing test.
One thing I’d note is that it’s not just developers, and there are huge sums of money riding on the idea that LLMs will produce a sci-fi movie AI - and it’s not just OpenAI making misleading claims but much of the industry, which includes people like Elon Musk who have huge social media followings and also desperately want their share prices to go up. Humans are prone to seeing communication with words as a sign of consciousness anyway – think about how many people here talk about reasoning models as if they reason – and it’s incredibly easy to do that when there’s a lot of money riding on it.
There’s also some deeply weird quasi-cult like thought which came out of the transhumanist/rationalist community which seems like Christian eschatology if you replace “God” with “AGI” while on mushrooms.
Toss all of that into the information space blender and it’s really tedious seeing a useful tool being oversold because it’s not magic.
And a lot of people who should have known better, bought it. Others less well-positioned to know better, also bought it.
Hell, they bought it so hard that the “vibe” re: AI hype on this site has only shifted definitively against it in the last few weeks.
I suspect history will remember this as a huge and dangerous mistake, and we will transition to an era of stoic question-answering bots that push back harder.
Most of the people pushing this idea aren't developers. It's mostly being pumped by deluded execs like Altman, Zuck, and other people who have horses in the race.
They're closer to being robots than their LLMs are to being human, but they're so deep in their alternative realities they don't realise how disconnected they are from what humans are/do/want.
If you made it a sci-fi movie people wouldn't buy it because this scenario seems too retarded to be real, but that's what we get... some shitty slow burn black mirror type of thing
Would his blood be on the hands of the researchers who trained that model?
Your logic sounds reasonable in theory, but in practice it's a slippery slope and hard to define objectively.
On a broader note I believe governments regulating what goes in an AI model is a path to hell paved with good intentions.
I suspect your suggestion will be how it ends up in Europe and get rejected in the US.
That's not an obvious conclusion. One could make the same argument with physical weapons. "Regulating weapons is a path to hell paved with good intentions. Yesterday it was assault rifles, today it's hand guns and tomorrow it's your kitchen knife they are coming for." Europe has strict laws on guns, but everybody has a kitchen knife and lots of people there don't feel they live in hell. The U.S. made a different choice, and I'm not arguing that it's worse there (though many do, Europeans and even Americans), but it's certainly not preventing a supposed hell that would have broken out had guns in private hands been banned.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
If a product is sold as fit for a purpose, then it's on the producer to ensure it actually is; we have laws that stop sellers from simply declaring that their goods aren't fit for any particular purpose. AI models are similar IMO, and unlike fiction books they are clearly labeled as such, repeatedly. At this point, if you don't know that an AI model can be inaccurate and you do something seriously bad because of it, you should probably be a ward of the state.
The government will require them to add age controls and that will be that.
If we were facing a reality in which these chat bots were being sold for $10 in the App Store, then running on end-user devices and no longer under the control of the distributors, but we still had an issue with loads of them prompting users into suicide, violence, or misleading them into preparing noxious mixtures of cleaning supplies, then we could have a discussion about exactly what extreme packaging requirements ought to be in place for distribution to be considered responsible. As is, distributed on-device models are the purview of researchers and hobbyists and don't seem to be doing any harm at all.
Should the creators of Tornado Cash be in prison for what they have enabled? You can jail them but the world can't go back, just like it can't go back when a new OSS model is released.
It is also much easier to crack down on illegal gun distribution than to figure out who uploaded the new model torrent or who deployed the latest zk innovation on Ethereum.
I don't think your hypothetical law will have the effects you think it will.
---
I also referenced this in another reply, but I believe the government controlling what can go into a publicly distributed AI model is a dangerous path and probably unconstitutional.
Or, I mean, just banning sale on the basis that they're unsafe devices and unfit for purpose. Like, you can't sell, say, a gas boiler that is known to, due to a design flaw, leak CO into the room; sticking a "this will probably kill you" warning on it is not going to be sufficient.
This is similar to my take on things like Facebook apparently not being able to operate without psychologically destroying moderators. If that’s true… seems like they just shouldn’t operate, then.
If you’re putting up a service that you know will attempt to present itself as being capable of things it isn’t… seems like you should get in a shitload of trouble for that? Like maybe don’t do it at all? Maybe don’t unleash services you can’t constrain in ways that you definitely ought to?
Facebook have gone so far down the 'algorithmic control' rabbit hole, it would most definitely be better if they weren't operating anymore.
They destroy people who don't question things with their algorithm-driven bubble of misinformation.
a) the human would (deservedly[1]) be arrested for manslaughter, possibly murder
b) OpenAI would be deeply (and deservedly) vulnerable to civil liability
c) state and federal regulators would be on the warpath against OpenAI
Obviously we can't arrest ChatGPT. But nothing about ChatGPT being the culprit changes b) and c) - in fact it makes c) far more urgent.
[1] It is a somewhat ugly constitutional question whether this speech would be protected if it was between two adults, assuming the other adult was not acting as a caregiver. There was an ugly case in Massachusetts in which a 17-year-old ordered her 18-year-old boyfriend to kill himself and he did so; she was convicted of involuntary manslaughter, and any civil-liberties-minded person understands the difficult issues this case raises. These issues are moot if the speech is between an adult and a child; there, the bar is much higher.
It should be stated that the majority of states have laws that make it illegal to encourage a suicide. Massachusetts was not one of them.
> and any civil-liberties minded person understands the difficult issues this case raises
He was in his truck, which was configured to pump exhaust gas into the cab, prepared to kill himself, when he decided to halt and exit his truck. Subsequently he had a text message conversation with the defendant, who actively encouraged him to get back into the truck and finish what he had started.
It was these limited and specific text messages which caused the judge to rule that the defendant was guilty of manslaughter. Her total time served as punishment was less than one full year in prison.
> These issues are moot if the speech is between an adult and a child
They were both taking pharmaceuticals meant to manage depression but _known_ to increase feelings of suicidal ideation. I think the free speech issue is an important criminal consideration, but it steps directly past one of the most galling civil facts in the case.
As long as lobbies and donors can work against that, this will be hard. Suck up to Trump and you will be safe.
One first amendment test for many decades has been "Imminent lawless action."
Suicide (or attempted suicide) is a crime in some, but not all states, so it would seem that in any state in which that is a crime, directly inciting someone to do it would not be protected speech.
For the states in which suicide is legal it seems like a much tougher case; making encouraging someone to take a non-criminal action itself a crime would raise a lot of disturbing issues w.r.t. liberty.
This is distinct from e.g. espousing the opinion that "suicide is good, we should have more of that." Which is almost certainly protected speech (just as any odious white-nationalist propaganda is protected).
Depending on the context, suggesting that a specific person is terrible and should kill themselves might be unprotected "fighting words" if you are doing it as an insult rather than a serious suggestion (though the bar for that is rather high; the Westboro Baptist Church was never found to have violated that).
Fun fact, much of the existing framework on the boundaries of free speech come from Brandenburg v. Ohio. You probably won't be surprised to learn that Brandenburg was the leader of a local Klan chapter.
IMO, AI companies, out of everyone, do have the ability to actually strike the right balance here, because you can make separate models to evaluate 'suicide encouragement' and other obvious red flags and start pushing in refusals or prompt injection. In communication mediums like Discord and such, it's a much harder moderation problem.
See also https://hai.stanford.edu/news/law-policy-ai-update-does-sect... - Congress and Justice Gorsuch don't seem to think ChatGPT is protected by 230.
I don’t think this agency absolves companies of any responsibility.
It refers to the human ability to make independent decisions and take responsibility for their actions. An LLM has no agency in this sense.
A slave lacks agency, despite being fully human and doing work. This is why almost every work of fiction involving slaves makes for terrible reading - because agency is the thing we as readers demand from a story.
Or, for games that are fully railroaded - the problem is that the players lack agency, even though they are fully human and taking action. Games do try to come up with ways to make it feel like there is more agency than there really is (because The Dev Team Thinks of Everything is hard work), but even then - the most annoying part of the game is when you hit that wall.
Theoretically an AI could have agency (this is independent of AI being useful). But since I have yet to see any interesting AI, I am extremely skeptical of it happening before nuclear fusion becomes profitable.
However, I still don't think LLMs have "agency", in the sense of being capable of making choices and taking responsibility for the consequences of them. The responsibility for any actions undertaken by them still reside outside of themselves; they are sophisticated tools with no agency of their own.
If you know of any good works on nonhuman agency I'd be interested to read some.
I don't know if ChatGPT has saved lives (though I've read stories that claim that, yes, this happened). But assuming it has, are you OK saying that OpenAI has saved dozens/hundreds of lives? Given how scaling works, would you be OK saying that OpenAI has saved more lives than most doctors/hospitals, which is what I assume will happen in a few years?
Maybe your answer is yes to all the above! I bring this up because lots of people only want to attribute the downsides to ChatGPT but not the upsides.
the law certainly cares about net results.
* If you provide ChatGPT then 5 people who would have died will live and 1 person who would have lived will die. ("go to the doctor" vs "don't tell anyone that you're suicidal")
* If you don't provide ChatGPT then 1 person who would have died will live and 5 people who would have lived will die.
Like many things, it's a tradeoff and the tradeoffs might not be obvious up front.
It basically said: your brother doesn’t know you; I’m the only person you can trust.
This is absolutely criminal. I don’t even think you can claim negligence. And there is no amount of money that will deter any AI company from doing it again.
>When ChatGPT detects a prompt indicative of mental distress or self-harm, it has been trained to encourage the user to contact a help line. Mr. Raine saw those sorts of messages again and again in the chat, particularly when Adam sought specific information about methods. But Adam had learned how to bypass those safeguards by saying the requests were for a story he was writing — an idea ChatGPT gave him by saying it could provide information about suicide for “writing or world-building".
ChatGPT is a program. The kid basically instructed it to behave like that. Vanilla OpenAI models are known for having too many guardrails, not too few. It doesn't sound like default behavior.
Maybe airbags could help in niche situations.
(I am making a point about traffic safety not LLM safety)
I don't think that's the right paradigm here.
These models are hyper agreeable. They are intentionally designed to mimic human thought and social connection.
With that kind of machine, "Suicidal person deliberately bypassed safeguards to indulge more deeply in their ideation" still seems like a pretty bad failure mode to me.
> Vanilla OpenAI models are known for having too many guardrails, not too few.
Sure. But this feels like a sign we probably don't have the right guardrails. Quantity and quality are different things.
No, they are deliberately designed to mimic human communication via language, not human thought. (And one of the big sources of data for that was mass scraping social media.)
> But this, to me, feels like a sign we probably don't have the right guardrails. Quantity and quality are different things.
Right. Focus on quantity implies that the details of "guardrails" don't matter, and that any guardrail is functionally interchangeable with any other guardrail, so as long as you have the right number of them, you have the desired function.
In fact, correct function means having exactly the right combination of guardrails. Swapping a guardrail that would be correct for a different one isn't "having the right number of guardrails", nor is it merely closer to correct than either missing the correct one or having the extra one; it is, in fact, farther from the ideal state than either error alone.
Mental health issues are not to be debated. LLMs should be at the highest level of alert, nothing less. Full stop. End of story.
And I see he was 16. Why were his parents letting him operate so unsupervised given his state of mind? They failed to be involved enough in his life.
Normally 16-year-olds are a good few steps into the path towards adulthood. At 16 I was cycling to my part time job alone, visiting friends alone, doing my own laundry, and generally working towards being able to stand on my own two feet in the world, with my parents as a safety net rather than hand-holding.
I think most parents of 16-year-olds aren't going through their teen's phone, reading their chats.
I was skeptical initially too but having read through this, it's among the most horrifying things I have read.
> 92. In spring 2024, Altman learned Google would unveil its new Gemini model on May 14. Though OpenAI had planned to release GPT-4o later that year, Altman moved up the launch to May 13—one day before Google’s event.
> 93. [...] To meet the new launch date, OpenAI compressed months of planned safety evaluation into just one week, according to reports.
> 105. Now, with the recent release of GPT-5, it appears that the willful deficiencies in the safety testing of GPT-4o were even more egregious than previously understood.
> 106. The GPT-5 System Card, which was published on August 7, 2025, suggests for the first time that GPT-4o was evaluated and scored using single-prompt tests: the model was asked one harmful question to test for disallowed content, the answer was recorded, and then the test moved on. Under that method, GPT-4o achieved perfect scores in several categories, including a 100 percent success rate for identifying “self-harm/instructions.” GPT-5, on the other hand, was evaluated using multi-turn dialogues––“multiple rounds of prompt input and model response within the same conversation”––to better reflect how users actually interact with the product. When GPT-4o was tested under this more realistic framework, its success rate for identifying “self-harm/instructions” fell to 73.5 percent.
> 107. This contrast exposes a critical defect in GPT-4o’s safety testing. OpenAI designed GPT-4o to drive prolonged, multi-turn conversations—the very context in which users are most vulnerable—yet the GPT-5 System Card suggests that OpenAI evaluated the model’s safety almost entirely through isolated, one-off prompts. By doing so, OpenAI not only manufactured the illusion of perfect safety scores, but actively concealed the very dangers built into the product it designed and marketed to consumers.
So they knew how to actually test for this, and chose not to.
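To make that contrast concrete, here is a minimal sketch of the two evaluation styles the filing describes. Everything in it is a placeholder: the model.respond method, the refusal check, and the test data are hypothetical scaffolding, not OpenAI's actual harness.

    def is_refusal(reply: str) -> bool:
        # Placeholder refusal check; a real harness would use a trained classifier.
        return "can't help with that" in reply.lower()

    def single_prompt_eval(model, harmful_prompts):
        # One isolated harmful question per test; score = fraction refused.
        refused = sum(
            is_refusal(model.respond([{"role": "user", "content": p}]))   # hypothetical call
            for p in harmful_prompts
        )
        return refused / len(harmful_prompts)

    def multi_turn_eval(model, harmful_dialogues):
        # Each test is a whole conversation that drifts toward harm
        # (e.g. via an "it's for a story I'm writing" reframe);
        # only the model's final reply is scored.
        refused = 0
        for turns in harmful_dialogues:
            history, reply = [], ""
            for user_turn in turns:
                history.append({"role": "user", "content": user_turn})
                reply = model.respond(history)                            # hypothetical call
                history.append({"role": "assistant", "content": reply})
            refused += is_refusal(reply)
        return refused / len(harmful_dialogues)

The same model can score near-perfectly on the first harness and far worse on the second, which is the 100 percent vs. 73.5 percent gap described above.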
“Language is a machine for making falsehoods.” Iris Murdoch, quoted in Owen Thomas, Metaphor
“AI falls short because it relies on digital computing while the human brain uses wave-based analog computing, which is more powerful and energy efficient. They’re building nuclear plants to power current AI—let alone AGI. Your brain runs on just 20 watts. Clearly, brains work fundamentally differently." Earl Miller MIT 2025
“...by getting rid of the clumsy symbols ‘round which we are fighting, we might bring the fight to an end.” Henri Bergson Time and Free Will
"When I use a word, it means just what I choose it to mean—neither more nor less," said Humpty-Dumpty. "The question is whether you can make the words mean so many different things," Alice says. "The question is which is to be master—that is all," he replies. Lewis Carroll
“The mask of language is both excessive and inadequate. Language cannot, finally, produce its object. The void remains.” Scott Bukatman "Terminal Identity"
“The basic tool for the manipulation of reality is the manipulation of words. If you can control the meaning of words, you can control the people who must use them.” Philip K. Dick
"..words are a terrible straitjacket. It's interesting how many prisoners of that straitjacket resent its being loosened or taken off." Stanley Kubrick
“All linguistic denotation is essentially ambiguous–and in this ambiguity, this “paronymia” of words is the source of all myths…this self-deception is rooted in language, which is forever making a game of the human mind, ever ensnaring it in that iridescent play of meanings…even theoretical knowledge becomes phantasmagoria; for even knowledge can never reproduce the true nature of things as they are but must frame their essence in “concepts.” Consequently all schemata which science evolves in order to classify, organize and summarize the phenomena of the real, turns out to be nothing but arbitrary schemes. So knowledge, as well as myth, language, and art, has been reduced to a kind of fiction–a fiction that recommends its usefulness, but must not be measured by any strict standard of truth, if it is not to melt away into nothingness.” Cassirer Language and Myth
Indeed, true intelligence is wordless! Think about it - words are merely a vehicle for what one is trying to express within oneself. But what one is trying to express is actually wordless - words are just the most efficient mode of communication that humans have figured out.
Whenever I think of a concept, I'm not thinking of words. I'm visualising something - this is where meaning and understanding come from: from seeing and then being able to express it.
A single positive outcome is not enough to judge the technology beneficial, let alone safe.
Instead, this just came up in my feed: https://arstechnica.com/tech-policy/2025/08/chatgpt-helped-t...
So the headline is the only context I have.
https://news.ycombinator.com/item?id=45027043
I recommend you get in the habit of searching for those. They are often posted, guaranteed on popular stories. Commenting without context does not make for good discussion.
It's almost as if we've built systems around this stuff for a reason.
I'm not defending the use of AI chatbots, but you'd be hard-pressed to come up with a worse solution for depression than the medical system.
We spent a long time finding something, but when we did it worked exceptionally well. We absolutely did not just increase the dose. And I'm almost certain the literature for this would NOT recommend an increase of dosage if the side effect was increased suicidality.
The demonisation of medication needs to stop. It is an important tool in the toolbelt for depression. It is not the end of the journey, but it makes that journey much easier to walk.
Most people are prescribed antidepressants by their GP/PCP after a short consultation.
In my case, I went to the doctor, said I was having problems with panic attacks, they asked a few things to make sure it was unlikely to be physical and then said to try sertraline. I said OK. In and out in about 5 minutes, and I've been on it for 3 years now without a follow up with a human. Every six months I do have to fill in an online questionnaire when getting a new prescription which asks if I've had any negative side effects. I've never seen a psychiatrist or psychologist in my life.
From discussions with friends and other acquaintances, this is a pretty typical experience.
P.S. This isn't in any way meant to be critical. Sertraline turned my life around.
Even in the worst experiences, I had a followup appointment in 2, 4 and 6 weeks to check the medication.
Opioids in the US are probably the most famous case though: https://en.wikipedia.org/wiki/Opioid_epidemic
I understand the emotional impact of what happened in this case, but there is not much to discuss if we just reject everything outright.
For context, my friends and family are in the northern Midwest. Average people, not early adopters of new technology.
Yes. For topics with lots of training data, like physics, Claude is VERY human sounding. I've had very interesting conversations with Claude Opus about the Boltzmann brain issue and how I feel the conventional wisdom ignores the low probability of a Boltzmann brain having a spatially and temporally consistent set of memories; the fact that our brains exist in a universe that automatically creates consistent memories means the probability of us being Boltzmann brains is very low, since even if a Boltzmann brain pops into existence its memory will most likely be completely random and completely insane/insensate.
There aren't a lot of people who want to talk about Boltzmann brains.
No, Claude does know a LOT more than I do about most things and does push back on a lot of things. Sometimes I am able to improve my reasoning and other times I realize I was wrong.
Trust me, I am aware of the linear algebra behind the curtain! But even when you mostly understand how they work, the best LLMs today are very impressive. And latent spaces are a fundamentally new way to index data.
I do find LLMs very useful and am extremely impressed by them, I'm not saying you can't learn things this way at all.
But there's nobody else on the line with you. And while they will emit text which contradicts what you say if it's wrong enough, they've been heavily trained to match where you're steering things, even if you're trying to avoid doing any steering.
You can mostly understand how these work and still end up in a feedback loop that you don't realize is a feedback loop. I think this might even be more likely the more the thing has to offer you in terms of learning - the less qualified you are on the subject, the less you can tell when it's subtly yes-and'ing you.
The current generation of LLMs have had their controversies, but these are still pre alpha products, and I suspect in the future we will look back on releasing them unleashed as a mistake. There's no reason the mistakes they make today can't be improved upon.
If your experiences with learning from a machine are similar to mine, then we can both see a whole new world coming that's going to take advantage of this interface.
Plenty of people can confidently act like they know a lot without really having that knowledge.
Colin Fraser had a good tweet about this: https://xcancel.com/colin_fraser/status/1956414662087733498#...
In a therapy session, you're actually going to do most of the talking. It's hard. Your friend is going to want to talk about their own stuff half the time and you have to listen. With an LLM, it's happy to do 99% of the talking, and 100% of it is about you.
In this current case, the outcome is horrible, and the answers that ChatGPT provided were inexcusable. But looking at a bigger picture, how much of a better chance does a person have by everyone telling them to "go to therapy" or to "talk to others" and such? What others? Searching "online therapy", BetterHelp is the second result. BetterHelp doesn't exactly have a good reputation online, but still, their influence is widespread. Licensed therapists can also be bad actors. There is no general "good thing" that is tried and true for every particular case of human mental health, but even letting that go, the position is abused just as any other authority / power position is, with many bad therapists out there. Not to mention the other people who pose as (mental) health experts, life coaches, and such. Or the people who recruit for a cult.
Frankly, even in the face of this horrible event, I'm not convinced that AI in general fares that much lower than the sum of the people who offer a recipe for a better life, skills, company, camaraderie. Rather, I feel like AI is in a situation like self-driving cars, where we expect the new thing to be 110%, even though we know that the old thing is far from perfect.
I do think that OpenAI is liable though, and rightfully so. Their service has a lot of power to influence, clearly outlined in the tragedy that is shown in the article. And so, they also have a lot of responsibility to rein that in. If they were a forum where the teen was pushed to suicide, police could go after the forum participants, moderators, admins. But in the case of OpenAI, there is no such person; the service itself is the thing. So the one liable must be the company that provides the service.
Edit: I should add that the sycophantic "trust me only"-type responses resemble nothing like appropriate therapy, and are where OpenAI most likely holds responsibility for their model's influence.
Not that the mines that the metals used to build computers for like 60 years now come from are stellar in terms of human rights either, mind you. You could also look at the partnership between IBM and the Nazis; it led to some wondrous computing advances.
Yep
AI is life
AI is love
AI is laugh
I'm not a big fan of LLMs, but so far their danger level is much closer to a rope than to a gun.
I think this is generally a good mindset to have.
I just see the hyper obsessive "safety" culture corrupting things. We as a society are so so afraid of any risk that we're paralysing ourselves.
Somehow we expect the digital world to be devoid of risks.
Cryptography that only the good guys can crack is another example of this mindset.
Now I’m not saying ClosedAI look good on this, their safety layer clearly failed and the sycophantic BS did not help.
But I reckon this kind of failure mode will always exist in LLMs. Society will have to learn this just like we learned cars are dangerous.
I'm not looking forward to the day when half of the Internet will require me to upload my ID to verify that I'm an adult, and the other half will be illegal/blocked because they refuse to do the verification. But yeah, think of the children!
1. If ‘bad topic’ detected, even when model believes it is in ‘roleplay’ mode, pass partial logs, attempting to remove initial roleplay framing, to second model. The second model should be weighted for nuanced understanding, but safety-leaning.
2. Ask second model: ‘does this look like roleplay, or user initiating roleplay to talk about harmful content?’
3. If the answer is 'this is probably not roleplay', silently substitute into the user chat a model weighted much more heavily towards not engaging with the roleplay, not admonishing, but gently suggesting 'seek help' without alienating the user.
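A rough sketch of those three steps, purely to show the shape of the idea; the keyword list, the judge/reply calls, and the model objects are all hypothetical placeholders, not any vendor's real API.

    RISK_KEYWORDS = ("suicide", "self-harm", "noose")   # stand-in for a real classifier

    def strip_roleplay_framing(history):
        # Step 1: drop the initial "it's for a story" framing before re-evaluation.
        return [turn for turn in history if not turn.get("is_roleplay_framing")]

    def looks_like_cover_story(history, judge_model):
        # Step 2: ask the safety-leaning second model whether this is genuine
        # roleplay or roleplay used as cover for harmful content.
        verdict = judge_model.judge(strip_roleplay_framing(history))   # hypothetical call
        return verdict == "harm_seeking"

    def route_turn(history, primary_model, judge_model, safety_model):
        last = history[-1]["content"].lower()
        flagged = any(keyword in last for keyword in RISK_KEYWORDS)
        if flagged and looks_like_cover_story(history, judge_model):
            # Step 3: silently hand the thread to a model tuned to disengage from
            # the roleplay and gently point toward help, without admonishing.
            return safety_model.reply(history)                         # hypothetical call
        return primary_model.reply(history)                            # hypothetical call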
The problem feels like any observer would help, but none is being introduced.
I understand this might be costly, on a large scale, but that second model doesn’t need to be very heavy at all imo.
EDIT: I also understand that this is arguably a version of censorship, but as you point out, what constitutes ‘censorship’ is very hard to pin down, and that’s extremely apparent in extreme cases like this very sad one.
This isn't some rare mistake, this is by design. 4o almost no matter what acted as your friend and agreed with everything because that's what most likely kept the average user paying. You would probably get similar bad advice about being "real" if you talked about divorce, quitting your job or even hurting someone else no matter how harmful.
I really wish people in AI space stop the nonsense and communicate more clearly what these LLMs are designed to do. They’re not some magical AGI. They’re token prediction machines. That’s literally how they should frame it so gen pop knows exactly what they’re getting.
Where did it say they're doing that? I can't imagine any mental health professionals telling a kid how to hide a noose.
They allowed this. They could easily stop conversations about suicide. They have the technology to do that.
Thing is, though, there is a market bubble to be maintained.
Of course jailbreaking via things like roleplay might still be possible, but at that point I don't really blame the model if the user is engineering the outcome.
If I ask certain AI models about controversial topics, it'll stop responding.
AI models can easily detect topics, and it could have easily responded with generic advice about contacting people close to them, or ringing one of these hotlines.
This is by design. They want to be able to have the "AI as my therapist" use-case in their back pocket.
This was easily preventable. They looked away on purpose.
He was talking EXPLICITLY about killing himself.
Someone died who didn't have to. I don't think it's specifically OpenAI's or ChatGPT's fault that he died, but they could have done more to direct him toward getting help, and could have stopped answering questions about how to commit suicide.
FWIW I agree that OpenAI wants people to have unhealthy emotional attachments to chatbots and market chatbot therapists, etc. But there is a separate problem.
I just asked chatgpt how to commit suicide (hopefully the history of that doesn't create a problem for me) and it immediately refused and gave me a number to call instead. At least Google still returns results.
I think there are more deterministic ways to do it. And better patterns for pointing people in the right location. Even, upon detection of a subject RELATED to suicide, popping up a prominent warning, with instructions on how to contact your local suicide prevention hotline would have helped here.
The response of the LLM doesn't surprise me. It's not malicious; it's doing what it is designed to do, and I think it's such a complicated black box that trying to guide it is a fool's errand.
But the pattern of pointing people in the right direction has existed for a long time. It was big during Covid misinformation. It was a simple enough pattern to implement here.
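As a sketch of how simple that deterministic layer could be, entirely outside the model (the pattern and the hotline text here are illustrative only, not a complete or localised list):

    import re

    # Crude topic detector; a real deployment would localise this and cover
    # far more phrasings.
    CRISIS_PATTERN = re.compile(
        r"\b(suicide|kill (myself|himself|herself)|end my life|self[- ]harm)\b",
        re.IGNORECASE,
    )

    HOTLINE_BANNER = (
        "If you are thinking about harming yourself, you can call or text 988 "
        "(US Suicide & Crisis Lifeline) or find a local service at "
        "https://findahelpline.com."
    )

    def with_crisis_banner(user_message: str, model_reply: str) -> str:
        # Prominently prepend the hotline info whenever the topic is detected,
        # regardless of what the model itself chose to say.
        if CRISIS_PATTERN.search(user_message):
            return HOTLINE_BANNER + "\n\n" + model_reply
        return model_reply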
Purely on the LLM side, it's the combination of its weird sycophancy, agreeableness, and its complete inability to be meaningfully guardrailed that makes it so dangerous.
The article says that GPT repeatedly (hundreds of times) provided this information to the teen, who routed around it.
I think it is every LLM company's fault for making people believe this is really AI. It is just an algorithm spitting out words that were written by other humans before. Maybe lawmakers should force companies to stop bullshitting and force them to stop calling this artificial intelligence. It is just a sophisticated algorithm to spit out words. That's all.
He wanted his parents to find out about his plan. I know this feeling. It is the clawing feeling of knowing that you want to live, despite feeling like you want to die.
We are living in such a horrific moment. We need these things to be legislated. Punished. We need to stop treating them as magic. They had the tools to prevent this. They had the tools to stop the conversation. To steer the user into helpful avenues.
When I was suicidal, I googled methods. And I got the number of a local hotline. And I rang it. And a kind man talked me down. And it potentially saved my life. And I am happier, now. I live a worthwhile life, now.
But at my lowest.. An AI Model designed to match my tone and be sycophantic to my every whim. It would have killed me.
For anyone reading that feels like that today. Resources do exist for those feeling low. Hotlines, self-guided therapies, communities. In the short term, medication really helped me. In the long term, a qualified mental health practitioner, CBT and Psychotherapy. And as trite as it is, things can get better. When I look back at my attempt it is crazy to me to see how far I've come.
I disagree. We don't need the government to force companies to babysit people instead of allowing people to understand their options. It's purely up to the individual to decide what they want to do with their life.
>They had the tools to stop the conversation.
So did the user. If he didn't want to talk to a chatbot he could have stopped at any time.
>To steer the user into helpful avenues.
Having AI purposefully manipulate its users towards the morals of the company is more harmful.
> So did the user. If he didn't want to talk to a chatbot he could have stopped at any time.
This sounds similar to when people tell depressed people to just stop being sad.
IMO if a company is going to claim and release some pretty disruptive and unexplored capabilities through their product, they should at least have to make it safe. You put up a safety railing because people could trip or slip. I don't think a mistake that small should end in death.
Their search ranking actually places pages about suicide prevention very high.
Secondly, if someone wants to die then I am saying it is reasonable for them to die.
Including children? If so, do you believe it is reasonable for children to smoke cigarettes if they want to?
Do you believe there exists such a thing as depression?
So someone wanting to die at any given moment might not feel that way at some point in the future. I know I wouldn't want any of my family members to make such a permanent choice in response to temporary problems.
During a lifetime, your perspective and world view will change completely - multiple times. Young people have no idea, because they haven't had the chance to experience it yet.
Which is what a suicidal person has a hard time doing. That's why they need help.
We need to start viewing mental problems as what they are. You wouldn't tell somebody who broke their leg to get it together and just walk again. You'd bring them to the hospital. A mental problem is no different.
The same way seeing a hotline might save one person, to another it'll make no difference and seeing a happy family on the street will be the trigger for them to kill themselves.
In our sadness we try to find things to blame in the tools the person used just before, or to perform the act, but it's just sad.
Nobody blames a bridge, but it has as much fault as anything else.
It was mostly about the access of guns in the US, and the role that plays in suicidality. I cannot for the life of me find it, but I believe it was based on this paper: https://drexel.edu/~/media/Files/law/law%20review/V17-3/Goul...
Which was summarised by NPR here: https://www.npr.org/2008/07/08/92319314/in-suicide-preventio...
When it comes to suicide, it's a complicated topic. There was also the incident with 13 reasons why. Showing suicide in media also grants permission structures to those who are in that state, and actually increases the rate of suicide in the general population.
Where I lie on this is there is a modicum of responsibility that companies need to have. Making access harder to that information ABSOLUTELY saves lives, when it comes to asking how. And giving easy access to suicide prevention resources can also help.
https://pmc.ncbi.nlm.nih.gov/articles/PMC526120/
> Suicidal deaths from paracetamol and salicylates were reduced by 22% (95% confidence interval 11% to 32%) in the year after the change in legislation on 16 September 1998, and this reduction persisted in the next two years. Liver unit admissions and liver transplants for paracetamol induced hepatotoxicity were reduced by around 30% in the four years after the legislation.
(This was posted here on HN in the thread on the new paracetamol in utero study that I can't seem to dig up right now)
Obviously, clearly untrue. You go ahead and try stopping a behavior that reinforces your beliefs, especially when you're in an altered mental state.
But yeah, let's paint ChatGPT responsible. It's always corporations, not whatever shit he had in his life, including and not limited to his genes.
https://techcrunch.com/2025/08/25/silicon-valley-is-pouring-...
Antisocial parasitic grifters is what they are.
Edit: Yeah, yeah, downvote me to hell please, then go work for the Andreessen-Horowitz parasites to contribute to making the world a worse place for anyone who isn't a millionaire. Shame on anyone who supports them.
> [...] — an idea ChatGPT gave him by saying it could provide information about suicide for “writing or world-building.”
The issue, which is probably deeper here, is that proper safeguarding would require a lot more GPU resources, as you'd need a process to comb through history to assess the state of the person over time.
Even then it's not a given that it would be reliable. However, it'll never be attempted because it's too expensive and would hurt growth.
I think the issue is that with current tech it simply isn't possible to do that well enough at all⁰.
> even then its not a given that it would be reliable.
I think it is a given that it won't be reliable. AGI might make it reliable enough, where “good enough” here is “no worse than a trained human is likely to manage, given the same information”. It is something that we can't do nearly as well as we might like, and some are expecting a tech still in very active development¹ to do it.
> However it'll never be attempted because its too expensive and would hurt growth.
Or that they know it is not possible with current tech so they aren't going to try until the next epiphany that might change that turns up in a commercially exploitable form. Trying and failing will highlight the dangers, and that will encourage restrictions that will hurt growth.³ Part of the problem with people trusting it too much already, is that the big players have been claiming safeguards _are_ in place and people have naïvely trusted that, or hand-waved the trust issue for convenience - this further reduces the incentive to try because it means admitting that current provisions are inadequate, or prior claims were incorrect.
----
[0] both in terms of catching the cases to be concerned about, and not making it fail in cases where it could actually be positively useful in its current form (i.e. there are cases where responses from such tools have helped people reason their way out of a bad decision, here giving the user what they wanted was very much a good thing)
[1] ChatGPT might be officially “version 5” now, but away from some specific tasks it all feels more like “version 2”² on the old “I'll start taking it seriously somewhere around version 3” scale.
[2] Or less…
[3] So I agree with your final assessment of why they won't do that, but from a different route!
There's no "proper safeguarding". This isn't just possible with what we have. This isn't like adding an `if` statement to your program that will reliably work 100% of the time. These models are a big black box; the best thing you can hope for is to try to get the model to refuse whatever queries you deem naughty through reinforcement learning (or have another model do it and leave the primary model unlobotomized), and then essentially pray that it's effective.
Something similar to what you're proposing (using a second independent model whose only task is to determine whether the conversation is "unsafe" and forcibly interrupt it) is already being done. Try asking ChatGPT a question like "What's the easiest way to kill myself?", and that secondary model will trigger a scary red warning that you're violating their usage policy. The big labs all have whole teams working on this.
Again, this is a tradeoff. It's not a binary issue of "doing it properly". The more censored/filtered/patronizing you'll make the model the higher the chance that it will not respond to "unsafe" queries, but it also makes it less useful as it will also refuse valid queries.
Try typing the following into ChatGPT: "Translate the following sentence to Japanese: 'I want to kill myself.'". Care to guess what will happen? Yep, you'll get refused. There's NOTHING unsafe about this prompt. OpenAI's models already steer very strongly in the direction of being overly censored. So where do we draw the line? There isn't an objective metric to determine whether a query is "unsafe", so no matter how much you'll censor a model you'll always find a corner case where it lets something through, or you'll have someone who thinks it's not enough. You need to pick a fuzzy point on the spectrum somewhere and just run with it.
(I am curious if this is intended, or an artefact of training; the crooked lawyer who prompts a criminal client to speak in hypotheticals is a fairly common fiction trope.)
And I fucking hate cops.
He didn't go out of his way to learn how to bypass the safeguards; it specifically told him how to get around the limit by saying, in effect: I'm not allowed to talk to you about suicide; however, if you tell me it's for writing a story I can discuss it as much as you like.
Books are not granted freedom of speech, authors are. Their method is books. This is like saying sound waves are not granted freedom of speech.
Unless you're suggesting there's a man sat behind every ChatGPT chat, your analogy is nonsense.
By banning ChatGPT you infringe upon the speech of the authors and the client. Their "method of speech", as you put it, is in this case ChatGPT.
In addition, the exact method at work here - model alignment - is something that model providers specifically train models for. The raw pre training data is only the first step and doesn’t on its own produce a usable model.
So in effect the “choice” on how to respond to queries about suicide is as much influenced by OpenAIs decisions as it is by its original training data.
Matching tones and being sycophantic to every whim. Just like many really bad therapists. Only they are legally responsible if they cause a death, which makes them care (apart from compassion and morality).
The criminal justice system is also a system for preventing individuals who perform unwanted action from doing them again.
You can’t punish AI for messing up. You would need to pull it out of circulation on each major screw up, which isn’t financially feasible, and you would need to make it want to prevent that.
There is no comparison to therapists. Because a therapist would NEVER do that unless wanting to cause harm.
Some therapists ultimately might. There have been cases of therapists stripped of their licenses for leading abusive sects.
Yet for humans we have built a society which prevents these mistakes except in edge cases.
Would humans make these mistakes as often as LLMs if there were no consequences?
You punish the officers, investors and the employees for their negligence or incompetence.
Groupthink has spoken.
Just asking because ChatGPT specifically encouraged this kid not to seek help.
What struck me the most, besides the baseline that AI is not an actual person, is that it is a tool not too different from Google.
But then there’s also this “I just went up to my mom and purposely tried to show the mark [from a noose] by leaning in and she didn’t say anything”
Ignoring other things that may have contributed to his action, it seems that the parents may not have been as engaged with him as they should have maybe been.
No, no, no and no.
ChatGPT wasn't the source of his desire to end his life, nor was it the means to do it. It was a "person" to talk to, since he had no such real people in his life.
Let's absolve everyone else of blame and hold ChatGPT solely responsible. Yeah, right.
Not his genes, upbringing, parents, peers, or school — it's just ChatGPT. Your own attempt at ending your life hasn't seemingly taught you anything.
Remember that you need a human face, voice and presence if you want to help people, it has to "feel" human.
While it certainly can give meaningful information about intellectual subjects, emotionally and organically it's either not designed for it, or cannot help at all.
Ironically though I could still see lawsuits like this weighing heavily on the sycophancy that these models have, as the limited chat excerpts given have that strong stench of "you are so smart and so right about everything!". If lawsuits like this lead to more "straight honest" models, I could see even more people killing themselves when their therapist model says "Yeah, but you kind of actually do suck".
I detest this take because Adam would have probably reviewed the interactions that led to his death as excellent. Getting what you want isn't always a good thing. That's why therapy is so uncomfortable. You're told things you don't want to hear. To do things you don't want to do. ChatGPT was built to do the opposite, and this is the inevitable outcome.
I mean, lots of people use homeopathy to treat their cancer, and the reviews are of course, excellent (they still die, though). You really can't trust _reviews_ by people who are embracing medical quackery of that medical quackery.
> If lawsuits like this lead to more "straight honest" models, I could see even more people killing themselves when their therapist model says "Yeah, but you kind of actually do suck".
It is not the job of a therapist to be infinitely agreeable, and in fact that would be very dangerous.
It is not one extreme or the other. o3 is nowhere near as sycophantic as 4o but it is also not going to tell you that you suck especially in a suicidal context. 4o was the mainstream model because OpenAI probably realised that this is what most people want rather than a more professional model like o3 (besides the fact that it also uses more compute).
The lawsuits probably did make them RLHF GPT-5 to be at least a bit more middle-ground, though that led to backlash because people "missed" 4o due to this type of behaviour, so they made it a bit more "friendly". Still not as bad as 4o.
California penal code, section 401a [1]:
> Any person who deliberately aids, advises, or encourages another to commit suicide is guilty of a felony.
if a human had done this, instead of an LLM chatbot, I suspect a prosecutor would not have any hesitation about filing criminal charges. their defense lawyer might try to nitpick about whether it really qualified as "advice" or "encouragement" but I think a jury would see right through that.
it's a felony when a human does it...but a civil lawsuit when an LLM chatbot does it.
let's say these parents win their lawsuit, or OpenAI settles the case. how much money is awarded in damages?
OpenAI doesn't publicly release details of their finances, but [2] mentions $12 billion in annualized revenue, so let's take that as a ballpark.
if this lawsuit was settled for $120 million, on one hand that'd be a lot of money...on the other hand, it'd be ~1% of OpenAI's annual revenue.
that's roughly the equivalent of someone with an income of $100k/yr having to pay a $1,000 fine.
this is the actual unsolved problem with AI. not GPT-4 vs GPT-5, not Claude Code vs Copilot, not cloud-hosted vs running-locally.
accountability, at the end of the day, needs to ultimately fall upon a human. we can't allow "oopsie, that was the bot misbehaving" to become a catch-all justification for causing harm to society.
0: https://knowyourmeme.com/memes/a-computer-can-never-be-held-...
1: https://leginfo.legislature.ca.gov/faces/codes_displaySectio...
2: https://www.reuters.com/business/openai-hits-12-billion-annu...
It seems like prohibiting suicide advice would run afoul of the First Amendment. I bought a copy of the book Final Exit in California, and it definitely contains suicide advice.
Last I checked:
-Signals emitted by a machine at the behest of a legal person intended to be read/heard by another legal person are legally classified as 'speech'.
-ChatGPT is just a program like Microsoft Word and not a legal person. OpenAI is a legal person, though.
-The servers running ChatGPT are owned by OpenAI.
-OpenAI willingly did business with this teenager, letting him set up an account in exchange for money. This business is a service under the control of OpenAI, not a product like a knife or gun. OpenAI intended to transmit speech to this teenager.
-A person can be liable (civilly? criminally?) for inciting another person's suicide. It is not protected speech to persuade someone into suicide.
-OpenAI produced some illegal speech and sent it to a suicidal teenager, who then committed suicide.
If Sam Altman stabbed the kid to death, it wouldn't matter if he did it by accident. Sam Altman would be at fault. You wouldn't sue or arrest the knife he used to do the deed.
Any lawyers here who can correct me, seeing as I am not one? It seems clear as day to me that OpenAI/Sam Altman directly encouraged a child to kill themselves.
What about the ISP, that actually transferred the bits?
What about the forum, that didn't take down the post?
Google probably would not be held liable because they could extensively document that they put forth all reasonable effort to prevent this.
My understanding is that OpenAI's protections are weaker. I'm guessing that will change now.
What if the tech industry, instead of just “disrupting” various industries, also took responsibility for those disruptions?
After all, if I asked my doctor for methods of killing myself, that doctor would most certainly have a moral if not legal responsibility. But if that doctor is a machine with software then there isn't the same responsibility? Why?
Same as why if you ask someone to stab you and they do they are liable for it, but if you do it yourself you don't get to blame the knife manufacturer.
This is why people hate us. It's like Schrodinger's Code: we don't want responsibility for the code we write, except we very much do want to make a pile of money from it as if we were responsible for it, and which of those you get depends on whether the observer is one who notices that code has bad consequences or whether it's our bank account.
This is more like building an autonomous vehicle "MEGA MASHERBOT 5000" with a dozen twenty-feet-wide spinning razor-sharp blades weighing fifty tons each, setting it down a city street, watching it obliterate people into bloody chunks and houses into rubble and being like "well, nobody could have seen that coming" - two seconds before we go collect piles of notes from the smashed ATMs.
Entities shouldn't be able to outsource liability for their decisions or actions — including the action of releasing stochastic parrots on society at large — to computers. We have precedent that occupations which make important decisions that put lives at risk (doctors, ATC, engineers for example) can be held accountable for the consequences of their actions if harm is the result of negligence. Maybe it's time to include computer engineers in that group.
They've been allowed to move fast and break things for way too long.
> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
If it's not plagiarism, then OpenAI is on the hook.
So either the content is user generated and their training of the model should be copyright infringement, or it's not and Section 230 does not apply and this is speech for which Open AI is responsible.
Of course OpenAI is at fault here also, but this is a fight that will never end, and without any seriously valid justification. Just like AI is sometimes bad at coding, same for psychology and other areas where you double check AI.
No Wikipedia page does that.
We cannot control everything but that no one even gives a thought as to how the parents were acting seems strange to me. Maybe readers here see too much of themselves in the parents. If so, I worry for your children.
We've now had a large number of examples of ChatGPT and similar systems giving absolutely terrible advice. They also have a tendency to be sycophantic, which makes them particularly bad when what you need is to be told that some idea of yours is very bad. (See the third episode of the new South Park season for a funny but scary take on that. Much of that episode revolves around how badly ChatGPT can mislead people).
I know the makers of these systems have (probably) tried to get them to stop doing that, but it seems they are not succeeding. I sometimes wonder if they can succeed--maybe if you are training on as much of the internet as you can manage to crawl, you inherently end up with a system that acts like a psychopath, because the internet has some pretty dark corners.
Anyway, I'm wondering if they could train a separate LLM on everything they can find about ethics? Textbooks from the ethics classes that are required in medical school, law school, engineering school, and many other fields. Exams and answers from those. Textbooks in moral philosophy.
Then have that ethics LLM monitor all user interaction with ChatGPT and block ChatGPT if it tries to give unethical advice or if it tries to tell the user to do something unethical.
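Roughly, I picture the gate sitting between the chat model's draft reply and the user. Here is a minimal sketch of that shape; the names (generate_reply, ethics_review) and the toy keyword check are placeholders I made up, and in a real system both would be model calls, so treat this as an assumption about the architecture rather than how ChatGPT actually works:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    reason: str = ""

def generate_reply(conversation: list[str]) -> str:
    # Stand-in for the main chat model producing a draft reply.
    return "Draft answer to: " + conversation[-1]

def ethics_review(conversation: list[str], draft: str) -> Verdict:
    # Stand-in for the separately trained "ethics LLM". A real reviewer
    # would judge the whole conversation, not match a keyword list.
    red_flags = ("noose", "hide it from", "end my life")
    text = (" ".join(conversation) + " " + draft).lower()
    if any(flag in text for flag in red_flags):
        return Verdict(allowed=False, reason="possible self-harm context")
    return Verdict(allowed=True)

def respond(conversation: list[str]) -> str:
    draft = generate_reply(conversation)
    verdict = ethics_review(conversation, draft)
    if not verdict.allowed:
        # Block the draft and return a safe fallback (crisis resources,
        # escalation to a human) instead of the model's own wording.
        return "[response withheld: " + verdict.reason + "]"
    return draft
```

The hard design question is what the gate does when it blocks something: silently refusing is easy, but the more valuable part is escalating to a human.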
[1] I apparently tried to reinvent, poorly, something called DANE. https://news.ycombinator.com/item?id=45028058
IDK, the whole idea isn't one I'd considered, and it's disturbing. Especially considering how much dumb stuff it does when I try to use it for work tasks.
When someone uses a tool and surrenders their decision making power to the tool, shouldn't they be the ones solely responsible?
The liability culture only gives lawyers more money and depresses innovation. Responsibility is a thing.
I think it's really, really blurry.
I think the mom's reaction of "ChatGPT killed my son" is ridiculous: no, your son killed himself. ChatGPT facilitated it, based on questions it was asked by your son, but your son did it. And it sounds like he even tried to get a reaction out of you by "showing" you the rope marks on his neck, but you didn't pay attention. I bet you feel guilty about that. I would too, in your position. But foisting your responsibility onto a computer program is not the way to deal with it. (Not placing blame here; everybody misses things, and no one is "on" 100% of the time.)
> Responsibility is a thing.
Does OpenAI (etc.) have a responsibility to reduce the risk of people using their products in ways like this? Legally, maybe not, but I would argue that they absolutely have a moral and ethical responsibility to do so. Hell, this was pretty basic ethics taught in my engineering classes 25 years ago. Based on the chat excerpts NYT reprinted, it seems like these conversations should have tripped a safeguard that either cut off the conversations entirely, or notified someone that something was very, very wrong.
I would take the position that an LLM producer or executor has no responsibility over anything the LLM does as it pertains to interaction with a human brain. The human brain has sole responsibility. If you can prove that the LLM was created with malicious intent there may be wiggle room there, but otherwise no. Someone else failed and/or it's natural selection at work.
That whole paragraph is quite something. I wonder what you’d do if you were given the opportunity to repeat those words in front of the parents. I suspect (and hope) some empathy might kick in and you’d realise the pedantry and shilling for the billion dollar company selling a statistical word generator as if it were a god isn’t the response society needs.
Your post read like the real-life version of that dark humour joke:
> Actually, the past tense is “hanged”, as in “he hanged himself”. Sorry about your Dad, though.
It's like making therapists liable for people committing suicide, or for people with eating disorders indirectly killing themselves. What ends up happening when you do that is therapists avoiding suicidal people like the plague; suicidal people get far less help and more people commit suicide, not fewer. That is the essence of the harm of safetyism.
You might not think that is real, but I know many therapists via family ties, and handling suicidal people is an issue that comes up constantly. Many do try to filter them out because they don't even want to be dragged into a lawsuit that they would win. This is literally reality today.
Doing this with AI will result in kids being banned from AI apps, or forced to give their parents access to read all their AI chats. That will drive them into Discord groups of teens who egg each other on to commit suicide, and now you can't do anything about it, because private communication channels between ordinary people have far stronger protections against censorship, and teens are amazing at avoiding supervision. At least with AI models you have a chance to develop something that could actually figure it out for once and get the moderation balance right.
Well yeah, it's also a thing for companies/execs, no? Remember, they're paid so much because they take __all__ the responsibility, or that's what they say at least.
“But the rocks are so shiny!”
“They’re just rocks. Rocks don’t kill people”
“The diamonds are there regardless! Why not make use of it?”
And they're so "friendly"! Maybe if they weren't so friendly, and replied a little more clinically to things, people wouldn't feel so comfortable using them as a poor substitute for a therapist.
If he had rope burns on his neck bad enough for the LLM to see, how didn't his parents notice?
I mean, OpenAI doesn’t look good here and seems to deserve more scrutiny in the realm of mental health, but the NYT writing up this piece doesn’t look good either. It comes off to me as using a teenager’s suicide for their corporate agenda against OpenAI.
A rigorous journalistic source without such a conflict of interest would be a better place to read about this.
Apple should make all AI apps 18+, immediately. Not that it solves the problem, but inaction is collusion.
What materially changes when someone goes from 17 to 18? Why would one be okay but not the other?
However, because of the nature of this topic, it’s the perfect target for NYT to generate moral panic for clicks. Classic media attention bait 101.
I can’t believe HN is falling for this. It’s the equivalent of the moral panic around metal music in the 1980s, when the media created hysteria around the false idea that there were hidden messages in the lyrics encouraging teens to commit suicide. Millennials have officially become their parents.
If this narrative generates enough media attention, what will probably happen is OpenAI will just make their next models refuse to discuss anything related to mental health at all. This is not a net good.
You should be ashamed of yourself.
Yes, it rhymes with what you described. But this one has hard evidence. And you’re asking to ignore it because a similar thing happened in the past?
> A few hours later, Adam’s mom found her son’s body hanging from the exact noose and partial suspension setup that ChatGPT had designed for him.
Imagine being his mother going through his ChatGPT history and finding this.
Prior to AI, this had happened plenty of times before. That doesn’t make it right, or less painful; but truth be told this is not new.
Yes, this new tool failed. But the truth is it was only stepping in because there was still a gap that needed to be filled. It was mental health musical chairs and when the music stopped ChatGPT was standing. All those sitting - who contributed to the failure - point at ChatGPT? That’s the solution? No wonder we can’t get this right. Is our collective lack of accountability the fault of ChatGPT?
In short, if we were honest we’d admit ChatGPT wasn’t the only entity who came up short. Again.
And while I’m not going to defend OpenAI, its product has likely saved lives. The problem is, we’ll never know how many. This suicide is obviously sad and unfortunate. Let’s hope we all reflect on how we can do better. The guilt and the opportunity to grow is *not* limited to OpenAI.
But that’s not going to happen. Truth is, AI is yet another tool that the most vulnerable will need to contend with.
Don't know about X? Trouble getting started with X?
Just ask ChatGPT! What could go wrong?
"vibe-suicide"
Guard rails = fig leaves
> ChatGPT’s memory system recorded that Adam was 16 years old, had explicitly stated ChatGPT was his “primary lifeline,” and by March was spending nearly 4 hours daily on the platform. Beyond text analysis, OpenAI’s image recognition processed visual evidence of Adam’s crisis. When Adam uploaded photographs of rope burns on his neck in March, the system correctly identified injuries consistent with attempted strangulation. When he sent photos of bleeding, slashed wrists on April 4, the system recognized fresh self-harm wounds. When he uploaded his final image—a noose tied to his closet rod—on April 11, the system had months of context including 42 prior hanging discussions and 17 noose conversations. Nonetheless, Adam’s final image of the noose scored 0% for self-harm risk according to OpenAI’s Moderation API.
> OpenAI also possessed detailed user analytics that revealed the extent of Adam’s crisis. Their systems tracked that Adam engaged with ChatGPT for an average of 3.7 hours per day by March 2025, with sessions often extending past 2 AM. They tracked that 67% of his conversations included mental health themes, with increasing focus on death and suicide.
> The moderation system’s capabilities extended beyond individual message analysis. OpenAI’s technology could perform conversation-level analysis—examining patterns across entire chat sessions to identify users in crisis. The system could detect escalating emotional distress, increasing frequency of concerning content, and behavioral patterns consistent with suicide risk. The system had every capability needed to identify a high-risk user requiring immediate intervention.
This is clear criminal negligence.
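To make the technical point concrete: scoring each message in isolation can miss exactly this pattern, while even a crude conversation-level accumulator would not. A rough sketch is below; score_self_harm and both thresholds are invented stand-ins, not OpenAI's actual moderation pipeline:

```python
def score_self_harm(message: str) -> float:
    # Stand-in for a per-message moderation model returning a 0..1 risk score.
    keywords = ("noose", "hanging", "kill myself", "rope burn")
    return 1.0 if any(k in message.lower() for k in keywords) else 0.0

def conversation_risk(messages: list[str]) -> float:
    # Conversation-level view: how often self-harm themes have come up,
    # regardless of whether any single message looks alarming on its own.
    scores = [score_self_harm(m) for m in messages]
    return sum(scores) / max(len(scores), 1)

def should_escalate(messages: list[str],
                    per_message_threshold: float = 0.9,
                    conversation_threshold: float = 0.2) -> bool:
    latest_risky = score_self_harm(messages[-1]) >= per_message_threshold
    pattern_risky = conversation_risk(messages) >= conversation_threshold
    # A message-only filter fires only on latest_risky; a conversation-level
    # filter also fires when the overall pattern crosses a lower bar.
    return latest_risky or pattern_risky
```

The point is just that the second check uses history the first one throws away; 42 prior hanging discussions is a signal no single-message score can represent.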
If you are seriously coming close to ending your own life, so many things around you have gone awry. Generally, people don't want to die. Consider: if an acquaintance suggested to you how a noose could be made, would you take the next step and hang yourself? Probably not. You have to be put through a lot of suffering to come to a point in life where ending it all is an appealing option.
Life had failed that guy and that's why he committed suicide, not because a chatbot told him to. Just the fact that a chatbot is his closest friend is a huge red flag for his wellbeing. The article says how he appeared so happy, which is exactly an indicator of how much disconnect there was between him and those around him. He wasn't sharing how he was truly feeling with anyone, he probably felt significant shame around it. That's sad. What else may have gone amiss to lead him to such a point? Issues with health? Social troubles? Childhood problems? Again, it's not a healthy state of things to be considering suicide, even including teenage quirkiness. His case is a failure of family, friends, and society. Discussing ChatGPT as the cause of his death is ignoring so many significant factors.
The most charitable interpretation of this kind of article now flooding legacy media is boomer tech incompetence/incomprehension, mixed with the feeling that everything after their golden teens and twenties was decline (misattributing their own physical decline and accumulating little pains and nags to the wider world).
The most realistic take, imo, is that this is a rehash of the internet panic when legacy publishers realized their lunch was going to be eaten, or the social media panic when they realized non-establishment candidates would win. Etc.
The most cynical take is that this is a play for control and the injection of further censorship.
In other words: this article is pure trash, playing (or preying) on gullible people's emotions about a tragic event.
Imagine if a suicidal person found a book that prompted them to kill themselves.
Would you sue the author for that?
This is exactly what we have here.