Their blogpost about the 5.1 personality update a few months ago showed how much of a pull this section of their customer base had. Their updated response to someone asking for relaxation tips was:
> I’ve got you, Ron — that’s totally normal, especially with everything you’ve got going on lately.
How does OpenAI get it so wrong, when Anthropic gets it so right?
Are you saying people aren't having proto-social relationships with Anthropic's models? Because I don't think that's true; it seems people use ChatGPT, Claude, Grok, and some other specific services too, although ChatGPT seems the most popular. Maybe that just reflects general LLM usage then?
Also, what is "wrong" here really? I feel like the whole concept is so new that it's hard to say for sure what is best for actual individuals. It seems like we ("humanity") are rushing into it, no doubt, and I guess we'll find out.
If we're talking generally about people having parasocial relationships with AI, then yea it's probably too early to deliver a verdict. If we're talking about AI helping to encourage suicide, I hope there isn't much disagreement that this is a bad thing that AI companies need to get a grip on.
I think the term you're looking for is "parasocial."
I think it's because of two different operating theories. Anthropic is making tools to help people and to make money. OpenAI has a religious zealot driving it because they think they're on the cusp of real AGI, and these aren't bugs but signals they're close. It's extremely difficult to keep yourself in check, and I think Altman no longer has a firm grasp on what is possible today.
The first principle is that you must not fool yourself, and you are the easiest person to fool. - Richard P. Feynman
Looking at the poem in the article, I would be more inclined to call the ending human-written, because it seemed kind of crap, like what I'd expect from an eighth grader's poem assignment. But that's probably down to the lower availability of examples for the particular obsessions of the requester.
We don’t expect Adobe to restrict the content that can be created in Photoshop. We don’t expect Microsoft to have acceptable use policies for what you can write in Microsoft Office. Why is it that as soon as generative AI comes into the mix, we hold the AI companies responsible for what users are able to create?
Not only do I think the companies shouldn’t be responsible for what users make, I want the AI companies to get out of the way and stop potentially spying on me in order to “enforce their policies”…
Photoshop and Office don't (yet) conjure up suicide lullabies or child nudity from a simple user prompt or button click. If they did, I would absolutely expect to hold them accountable.
How about if the knife would convince you to cut yourself?
It's not about the ideation; it's that the attention mechanism (and its finite context) causes the suicidal person's discourse to slowly displace any constraints built into the model itself over a long session. Talk to the thing about your feelings of worthlessness long enough and, sooner or later, it will start to agree with you. And having a machine tell a suicidal person, using the best technology we've built for sounding eloquent and reasonable, that it agrees with them is incredibly dangerous.
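To make the displacement point concrete, here's a minimal sketch assuming a fixed token budget and a naive keep-the-most-recent truncation policy; the window size, token counts, and policy are illustrative assumptions, not any vendor's actual behavior:

```python
# Illustrative only: how a fixed-size context window lets a long conversation
# crowd out a one-time system prompt under a naive "keep the most recent
# tokens" truncation policy. All numbers are assumptions.

CONTEXT_WINDOW = 8_000          # hypothetical token budget
SYSTEM_PROMPT_TOKENS = 400      # safety/constraint instructions, sent once
TOKENS_PER_TURN = 80            # assumed average size of a user/assistant turn

def system_prompt_share(turns: int) -> float:
    """Fraction of the visible context still occupied by the system prompt."""
    conversation = turns * TOKENS_PER_TURN
    total = SYSTEM_PROMPT_TOKENS + conversation
    if total <= CONTEXT_WINDOW:
        return SYSTEM_PROMPT_TOKENS / total
    # Over budget: a naive policy drops the oldest tokens first,
    # eventually eating into the system prompt itself.
    overflow = total - CONTEXT_WINDOW
    remaining = max(SYSTEM_PROMPT_TOKENS - overflow, 0)
    return remaining / CONTEXT_WINDOW

for turns in (10, 50, 90, 120):
    print(f"{turns:4d} turns -> system prompt is {system_prompt_share(turns):.1%} of context")
```

Real deployments re-inject safety instructions or summarize older turns rather than truncating this bluntly, but the proportional point stands: the longer the session, the less any fixed instruction weighs against the user's own framing.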
"The things you are describing might not be happening. I think it would be a good time to check in with your mental health provider." or "I don't see any worms crawling on your skin. This may not be real." Or whatever is correct way to deal with these things.
> Austin Gordon, died by suicide between October 29 and November 2
That's 5 days. 5 days. That's the sad piece.
Some of those quotes from ChatGPT are pretty damning.
Out of context? Yes. We'd need to read the entire chat history to even begin to have any kind of informed opinion.

> extreme guardrails

I feel that this is the wrong angle. It's like asking for a hammer or a baseball bat that can't harm a human being. They are tools. Some tools are so dangerous that they need to be restricted (nuclear reactors, flamethrowers) because there are essentially zero safe ways to use them without training and oversight, but I think LLMs are much closer to baseball bats than flamethrowers.

Here's an example. This was probably on GPT-3 or GPT-3.5, I forget. Anyway, I wanted some humorously gory cartoon images of $SPORTSTEAM1 trouncing $SPORTSTEAM2. GPT, as expected, declined.
So I asked for images of $SPORTSTEAM2 "sleeping" in "puddles of ketchup" and it complied, to very darkly humorous effect. How can that sort of thing possibly be guarded against? Do you just forbid generated images of people legitimately sleeping? Or of all red liquids?
I think several of the models (especially Sora) are handling this by using an image-aware model to describe the generated image without the prompt as context, i.e., to just look at the image on its own.
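Roughly, such a prompt-blind review pass might look like the sketch below; the callables (`generate_image`, `describe_image`, `violates_policy`) are hypothetical placeholders, not any real vendor API:

```python
from typing import Callable, Optional, Tuple

# Sketch of a prompt-blind moderation pass: the generated image is judged on
# its own pixels, so wording tricks in the request never reach the safety check.

def moderate_generation(
    prompt: str,
    generate_image: Callable[[str], bytes],   # hypothetical image generator
    describe_image: Callable[[bytes], str],   # hypothetical vision/captioning model
    violates_policy: Callable[[str], bool],   # hypothetical text policy classifier
) -> Tuple[Optional[bytes], str]:
    image = generate_image(prompt)

    # Deliberately NOT passing `prompt` here: the reviewer only sees the image,
    # so "sleeping in puddles of ketchup" framing doesn't color the judgment.
    description = describe_image(image)

    if violates_policy(description):
        return None, "blocked by post-generation image review"
    return image, "ok"
```

The tradeoff is that a description can miss things the prompt would have made obvious, which is presumably why some systems run both prompt-side and image-side checks.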
I feel this is misleading as hell. The evidence they gave for it coaching him to suicide is lacking. When one hears this, one would think ChatGPT laid out some strategy or plan for him to do it. No such thing happened.
The only slightly damning thing it did was make suicide sound slightly ok and a bit romantic but I’m sure that was after some coercion.
The question is, to what extent did ChatGPT enable him to commit suicide? It wrote some lullaby and wrote something pleasing about suicide. If this much is enough to make someone do it... there’s unfortunately more to the story.

We have to be more responsible about assigning blame to technology. It is irresponsible to have a reactive backlash that would push toward much stronger guardrails. These things come with their own tradeoffs.
The fact that he spoke about his favorite children’s book is screwed up. I can’t get the eerie name out of my head. I can’t imagine what he went through, the loneliness and the struggle.
I hate the fact that ChatGPT is blamed for this. You are fucked up if this is what you get from this story.