Edit: A quick Google search shows there is no evidence of anybody actually ingesting/injecting bleach to fight COVID.
There’s one on every thread in this place.
I am not absolving technology, but as someone who has never been affected by these problems, it amazes me that so many people get caught up like this. I simply wonder if it was always there and the internet and increased communication just make it easier.
And you?
The doctors actually noticed the bromide levels first, then inquired about how they got to be that way, and got the story about the individual asking for chloride-elimination ideas.
Before there was ChatGPT, the internet had trolls trying to convince strangers to follow a recipe to make beautiful crystals. The recipe would produce mustard gas. Credulous individuals often have such accidents.
I'm glad--I think LLMs are looking quite promising for medical use cases. I'm just genuinely surprised there hasn't been some big lawsuit yet over it providing advice that leads to a negative outcome (whether due to hallucinations, the user leaving out key context, or something else).
It's pretty amazing, really. Build a washing machine that burns houses down and the consequences are myriad and severe. But build a machine that allows countless people's private information to be leaked to bad actors and it's a year of credit monitoring and a mea culpa. Build a different machine that literally tells people to poison themselves and, not only are there no consequences, you find folks celebrating that the rules aren't getting in the way.
Go figure.
We've moved on from the 1800s. Why are you using that as your baseline of expectation?
If airplanes weren't so heavily regulated, we'd have seen leaded gasoline vanish there around the same time it did in cars, but you also might have had a few crashes due to engine failures as the bugs were worked out with changes and retrofits.
I'm a little on the fence here. I don't want a world where we basically conduct human sacrifice for progress, but I also don't want a world that is frozen in time. We really need to learn how to have responsible, careful progress, but still actually do things. Right now I think we are bad at this.
Edit: I think it's related to some extent to the fact that nuanced positions are hard in politics. In popular political discourse positions become more and more simple, flat, and extreme. There's a kind of context collapse that happens when you try to scale human organizations, what I want to call "nuance collapse," that makes it very hard to do anything but all A or all B. For innovation it's "full speed ahead" vs "stop everything."
> We’ve made significant advances in reducing hallucinations, improving instruction following, and minimizing sycophancy, and have leveled up GPT-5’s performance in three of ChatGPT’s most common uses: writing, coding, and health.
That was the first time I'd seen "health" listed as one of the most common uses of ChatGPT.
This is a natural extension of WebMD-type stuff, with the added issue that hypochondriacs can now get even more positive reinforcement that they definitely have x, y, and z rare and terrible diseases.
"On MedXpertQA MM, GPT-5 improves reasoning and understanding scores by +29.62% and +36.18% over GPT-4o, respectively, and surpasses pre-licensed human experts by +24.23% in reasoning and +29.40% in understanding."
A lot of the diagnostic process is pattern-matching symptoms and patient history to a disease or condition, and those to a drug or treatment.
Of course, LLMs can always fail catastrophically, which is why their output needs to be filtered through proper medical advice.
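Roughly the kind of matching I mean, as a toy sketch (the condition and treatment tables below are invented for illustration; this is not real medical data or how any clinical system works):

  # Toy sketch of diagnosis-as-pattern-matching: score candidate conditions
  # by the fraction of their characteristic findings the patient presents.
  # All tables are invented for illustration only.
  CONDITIONS = {
      "bromism": {"fatigue", "paranoia", "ataxia", "acneiform rash"},
      "hyponatremia": {"fatigue", "confusion", "nausea"},
  }
  TREATMENTS = {
      "bromism": "stop the bromide source; saline diuresis",
      "hyponatremia": "careful sodium correction",
  }

  def rank_conditions(findings: set[str]) -> list[tuple[str, float]]:
      """Sort candidate conditions by how well their findings match."""
      scored = [(name, len(findings & feats) / len(feats))
                for name, feats in CONDITIONS.items()]
      return sorted(scored, key=lambda pair: pair[1], reverse=True)

  patient = {"fatigue", "paranoia", "ataxia"}
  for condition, score in rank_conditions(patient):
      print(f"{condition}: {score:.2f} -> {TREATMENTS[condition]}")

The point of the toy is that this kind of lookup has no notion of what it would mean to be wrong, which is exactly why the filtering step matters.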
We need to totally ban LLMs from doing therapy-like conversations, so that a handful of totally unhinged people don't do crazy stuff. And of course everyone needs to pay a human for therapy to stop this.
There is no informational point to this article if the entire crux is "the patient wanted to eat less 'chloride' and claims ChatGPT told him about sodium bromide." Based on this article, the interaction could have been as minimal as the guy asking whether an alternative salt to sodium chloride exists, unqualified information he could equally have found on a chemistry website or Wikipedia.
The hallucination engine doesn't know anything about what it told the man, because it neither knows nor thinks things. It's a data model and an algorithm.
The humans touting it and bigging it up, so they'll get money, are the problem.
This person, sadly and unfortunately, gaslit themselves using the LLM. They need professional help. This is not a judgement. The Psypost article is a warning to professionals more than it is anything else: patients _do_ gaslight themselves into absurd situations, and LLMs just help accelerate that process, but the patient had to be willing to do it and was looking for an enabler, and found it in an LLM.
Although I do believe LLMs should not be used as "chat" models, and only for explicitly on-rails text completion and generation tasks (in the functional lorem ipsum sense), this does not actually seem to be the fault of the LLM directly.
I think providers should be forced to warn users that LLMs cannot factually reproduce anything, but I think this person would still have weaponized LLMs against themselves, and the outcome would have been the same.
Relevant section:
> Based on the timeline of this case, it appears that the patient either consulted ChatGPT 3.5 or 4.0 when considering how he might remove chloride from his diet. Unfortunately, we do not have access to his ChatGPT conversation log and we will never be able to know with certainty what exactly the output he received was, since individual responses are unique and build from previous inputs.
> However, when we asked ChatGPT 3.5 what chloride can be replaced with, we also produced a response that included bromide. Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do.
I’ve caught myself with this a few times when I sort of suggest a technical solution that, in hindsight, was the wrong way to approach a problem. The LLM will try to find a way to make that work without taking a step back and suggesting that I didn’t understand the problem I was looking at.
People are starting to use LLMs in a similar fashion -- to confirm and thus magnify whatever wrong little notion they have in their heads until it becomes an all-consuming life mission to save the world. And the LLMs will happily oblige, because they aren't really thinking about what they're saying, just choosing tokens on a mathematical "hey, this sounds nice" criterion. I've seen this happen with my sister, who, based on her conversations with it, is starting to take seriously the idea that ChatGPT was actually created 70 years ago at DARPA from technology recovered from the Roswell crash.
I can't really blame the LLMs entirely for this, as, like the raven, they're unwittingly being used to justify whatever little bit of madness people have in them. But we all have a little bit of madness in us, so this motivates me further to avoid LLMs entirely, except maybe for messing around.
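To make the "choosing tokens" point concrete, here is a toy sketch of picking a next token by weighted likelihood (the tokens and probabilities are invented; no real model works from a hand-written table like this):

  import random

  # Invented next-token distribution for illustration only.
  # Agreeable continuations get higher weight to mimic sycophancy.
  next_token_probs = {
      "right": 0.45,
      "brilliant": 0.30,
      "plausible": 0.20,
      "wrong": 0.05,
  }

  def sample_next_token(probs: dict[str, float]) -> str:
      """Pick a token in proportion to its weight: 'this sounds nice'."""
      tokens = list(probs)
      weights = [probs[t] for t in tokens]
      return random.choices(tokens, weights=weights, k=1)[0]

  print("Your theory is", sample_next_token(next_token_probs))

Nothing in that loop checks a claim against the world; it only checks which continuation scores highest, which is how a story like the Roswell one keeps getting reinforced.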
That's not to say this isn't rooted in mental illness, or perhaps a socially-motivated state of mind that causes a total absence of critical thinking, but some kind of noise needs to be made and I think public (ahem) recognition would be a good way to go.
A catchy name is essential - any suggestions?
incomingpain•1h ago
OK, so the article is blaming ChatGPT, but this is ridiculous.
Where do you buy this bromide? It's not like it's in the spice aisle. The dude had to go buy a hot tub cleaner like Spa Choice Bromine Booster Sodium Bromide and then sprinkle it on his food. I don't care what ChatGPT said... that dude is the problem.
bluefirebrand•56m ago
This reminds me of people who fall for phone scams or whatever. Some fraction of the general population is susceptible to being scammed, and they wind up giving away their life savings to someone claiming to be their grandkid.
There's always an asshole saying "well, that's their own fault if they fall for that stuff," as if they chose it on purpose instead of being manipulated into it by other people taking advantage of them.
You see it a lot on this site, too: how clever the founders are who exploit their workers and screw them out of their shares, how stupid the workers are who fell for it.
crazygringo•27m ago
I have literally never seen that expressed on HN.
In every case where workers are screwed out of shares, the sympathy among commenters seems to be 100% with the employees. HN is pretty anti-corporate overall, if you haven't noticed. Yes, it's pro-startup, but even more pro-employee.
beardyw•28m ago
People are born with certain attributes. Some are born tall, some left handed and some gullible. None of those is a reason to criticise them.