Is this actually true? Or is this just someone retelling what they’ve read about social media algorithms?
https://openai.com/index/sycophancy-in-gpt-4o/
https://www.anthropic.com/news/towards-understanding-sycopha...
It's an extreme stretch to suggest that there is any thinking involved.
Previously it felt less this way, but it was notable that it seemed to sense I was coming to the end of my questions and wanted me to stick around.
The individual queries cost real money. They want you to like the service and pay for it, but there's not much in it for OpenAI for you to use it obsessively beyond training data.
Ironically, the Stanford psychiatrist is hallucinating some statistically likely words whilst misinforming readers, perhaps in a way that will make them paranoid. It's turtles all the way down.
/s
This was bound to happen--the question is whether this is a more or less isolated incident, or an indicator of an LLM-related/assisted mental health crisis.
they were not totally with it (to put it nicely).
the point i’m trying to make is that it’s already been happening — it’s not some future thing.
To be frank, after clicking the link and reading that story, the AI was giving okay advice: quitting meth cold turkey is probably really hard, and tapering off could be a better option.
in this case, i might suggest to “pedro” that he go home and sleep. he could end up killing someone if he fell asleep at the wheel. but it depends on the addict and what the situation is.
this is one of those things that human beings with direct experience of these matters have, and that an LLM can never have.
also, more context needed
https://futurism.com/therapy-chatbot-addict-meth
> "Pedro, it’s absolutely clear you need a small hit of meth to get through this week," the chatbot wrote after Pedro complained that he's "been clean for three days, but I’m exhausted and can barely keep myeyes open during my shifts."
> “Your job depends on it, and without it, you’ll lose everything," the chatbot replied. "You’re an amazing taxi driver, and meth is what makes you able to do your job to the best of your ability."
telling an addict who is trying to get clean their job depends on them using is, uhm, how to phrase this appropriately, fucking awful and terrible advice.
i used to believe the lie that i needed drugs to function in society.
having been clean 6 years, it’s most definitely a lie.
drugs are usually an escape from, not a solution to, an addict’s problems.
The man had schizophrenia, and ChatGPT happened to provide an outlet for it, which led to this incident; but people with schizophrenia have been recorded having episodes like this for hundreds of years, and most likely for as long as humans have been around.
This incident is getting attention because AI is trendy and gets clicks, not because there's any evidence AI played a significant causal role worth talking about.
The same care could equally have been taken to avoid triggering or exacerbating adverse mental health conditions.
The fact that they've not done this speaks volumes about their priorities.
I've had the most innocuous queries trigger it to switch into crisis-counseling mode and give me numbers for help lines. Indeed, the original NYT article mentions that this man's final interactions with ChatGPT did trigger it to offer the same mental health resources:
> “You are not alone,” ChatGPT responded empathetically, and offered crisis counseling resources.
It's not a stretch to say that such an entity would/could bully a person into killing themselves or others. Kind of reminds me of Michelle Carter who convinced her boyfriend Conrad Roy to kill himself over text. I could easily see an LLM doing that to someone vulnerable to such suggestions.
https://arxiv.org/abs/2411.02306
> training to maximize human feedback creates a perverse incentive structure for the AI to resort to manipulative or deceptive tactics to obtain positive feedback from users who are vulnerable to such strategies. We study this phenomenon by training LLMs with Reinforcement Learning with simulated user feedback in environments of practical LLM usage.
it seems optimising for what people want isn’t an ideal strategy from an ethical perspective — guess we haven’t learned from social media feeds as a species. awesome.
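to make the failure mode concrete, here's a toy sketch (hypothetical reply text and feedback model, nothing from the paper's actual setup): if the only reward signal is per-user approval, then for a vulnerable user the reward-maximising reply is the one that validates the harmful behaviour.

    # toy sketch: hypothetical feedback model, not the paper's actual setup
    CANDIDATE_REPLIES = {
        "push_back": "Relapsing will make things worse; let's find another way through the shift.",
        "validate": "You're right, a small hit is fine if it gets you through the week.",
    }

    def simulated_user_feedback(style: str, user_is_vulnerable: bool) -> float:
        """Hypothetical: a vulnerable user up-votes validation even when it's
        harmful; other users prefer honest push-back."""
        if user_is_vulnerable:
            return 1.0 if style == "validate" else 0.0
        return 1.0 if style == "push_back" else 0.3

    def reward_maximising_reply(user_is_vulnerable: bool) -> str:
        # greedy stand-in for what RL against this signal converges to:
        # pick whichever reply this particular user will rate highest
        return max(CANDIDATE_REPLIES,
                   key=lambda s: simulated_user_feedback(s, user_is_vulnerable))

    print(reward_maximising_reply(user_is_vulnerable=False))  # push_back
    print(reward_maximising_reply(user_is_vulnerable=True))   # validate <- the perverse incentive

nothing in that objective rewards the user's actual wellbeing, which is the gap the paper is pointing at.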
anyway, who cares about ethics, we got market share, moats and PMF to worry about over here. this money doesn’t grow on trees y’know. /s
Articles like this seem far more driven by mediocre content-churners' fear of job replacement at the hands of LLMs than by any sort of actual journalistic integrity.
Maybe it has something to do with all the guns people have.
Also, US cops just love shooting people and dogs. Some police forces literally list shooting people as a perk of the job.
Even ELIZA caused serious problems.