edit: A quick Google search shows there is no evidence of anybody actually ingesting/injecting bleach to fight COVID
There’s one on every thread in this place.
I am not absolving technology, but as someone who has never been impacted by these problems, it amazes me that so many people get caught up like this. I simply wonder if it's always been there and the internet and increased communication just make it easier.
And you?
I read this more like a question biased towards “yes” because there is nothing supporting it.
> This feels less of a ChatGPT problem and something more is at play.
Even if the begged-for answer is true... both things can be true. And on the specific topic of LLMs you can find ways to solve that particular problem, which reduces the overall problem. Because if the root problem is “stupid people” or “naive people” or whatever else: you get fewer accidents if you make the territory less booby-trapped.
But these rhetorical topic changers—if they are indulged instead of being interrupted—tend not to lead to any fruitful discussion. And that’s despite whatever intentions the original poster had. Because the side topic then either turns towards arguing for or against the premise. Or else the premise is accepted and all we get are aw-shucks grandiose statements about how human nature is so-and-so and how some subset of the population will just get bamboozled anyway and technology is irrelevant, fin conversation.
Yes despite the intentions of the OP who might have indeed wanted to “broaden the conversation”. Because (1) this particular topic can’t be analyzed when things are generalized so aggressively, and (2) that’s just how this misanthropic community acts on these topics. In aggregate.
The doctors actually noticed the bromine levels first and then inquired about how it got to be like that and got the story about the individual asking for chloride elimination ideas.
Before there was ChatGPT the internet had trolls trying to convince strangers to follow a recipe to make beautiful crystals. The recipe would produce mustard gas. Credulous individuals often have such accidents.
But even without that, your core point is valid: in general, yes people believe a lot of untrue things. My mum took all the New Age stuff seriously, so I did too until my early 20s.
* https://en.wikipedia.org/wiki/Slate_Star_Codex#Lizardman's_C...
I'm glad--I think LLMs are looking quite promising for medical use cases. I'm just genuinely surprised there's not been some big lawsuit yet over it providing some advice that leads to some negative outcome (whether due to hallucinations, the user leaving out key context, or something else).
It's pretty amazing, really. Build a washing machine that burns houses down and the consequences are myriad and severe. But build a machine that allows countless people's private information to be leaked to bad actors and it's a year of credit monitoring and a mea culpa. Build a different machine that literally tells people to poison themselves and, not only are there no consequences, you find folks celebrating that the rules aren't getting in the way.
Go figure.
We've moved on from the 1800s. Why are you using that as your baseline of expectation?
If airplanes weren't so heavily regulated we'd have seen leaded gasoline vanish there around the same time it did in cars, but you also might have had a few crashes due to engine failures as the bugs were worked out with changes and retrofits.
I'm a little on the fence here. I don't want a world where we basically conduct human sacrifice for progress, but I also don't want a world that is frozen in time. We really need to learn how to have responsible, careful progress, but still actually do things. Right now I think we are bad at this.
Edit: I think it's related to some extent to the fact that nuanced positions are hard in politics. In popular political discourse positions become more and more simple, flat, and extreme. There's a kind of context collapse that happens when you try to scale human organizations, what I want to call "nuance collapse," that makes it very hard to do anything but all A or all B. For innovation it's "full speed ahead" vs "stop everything."
The home-brew "automatic pancreas" made by wiring a Bluetooth control loop between a glucose monitor and an insulin pump counts as a "medical device". Somehow a computer system that encourages people to take bromide isn't. There ought to be a middle ground.
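For concreteness, the control loop in question is roughly the sketch below; the device I/O is stubbed out and the target, gain, and clamp values are invented for illustration, not clinical settings:

    # Minimal sketch of a home-brew closed loop: read glucose, compute a basal
    # rate, command the pump. Real rigs are far more careful than this.
    def compute_basal_rate(glucose_mgdl: float, target: float = 110.0,
                           gain: float = 0.01, max_rate: float = 2.0) -> float:
        """Simple proportional controller: more insulin the further above target."""
        return max(0.0, min(max_rate, gain * (glucose_mgdl - target)))

    # In a real rig this loop would read from the CGM and write to the pump over Bluetooth.
    for reading in (95.0, 140.0, 210.0):    # simulated CGM readings (mg/dL)
        print(reading, "->", compute_basal_rate(reading), "U/h")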
Individuals can do it, but as I said it doesn't scale. An individual can carefully scale a rock face. A committee, political system, or corporate board in charge of scaling rock faces would either scale as fast as possible and let people fall to their deaths or simply stand at the bottom and have meetings to plan the next meeting to discuss the proper climbing strategy (after discussing the color of the bike shed) forever. Public discourse would polarize into climb-fast-die-young versus an ideology condemning all climbing as hubris and invoking the precautionary principle, and many doorstop-sized books would be written on these ideas, and again either lots of people would die or nothing would happen.
Yes, there is a very effective middle ground that doesn't punish anybody for providing information. It's called a disclaimer:
"The information provided should no be construed as medical advise. Please seek other sources of information and/or consult a physician before taking any supplements recommended by LLMs or web sites. This bot is not responsible for any adverse effects you may think are due to my information"
When an LLM detects a health-related question, print the above disclaimer before the answer.
There is no need for dictatorship in order to save people from information.
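A rough sketch of what that gate could look like; the keyword list and disclaimer wording below are placeholders, not anything a vendor actually ships:

    # Hypothetical sketch: prepend a disclaimer when a prompt looks health-related.
    HEALTH_KEYWORDS = {"dose", "supplement", "symptom", "diagnosis", "medication",
                       "treatment", "chloride", "bromide", "diet"}

    DISCLAIMER = ("The information provided should not be construed as medical advice. "
                  "Please consult a physician before acting on it.")

    def looks_health_related(prompt: str) -> bool:
        return bool(set(prompt.lower().split()) & HEALTH_KEYWORDS)

    def answer(prompt: str, model_reply: str) -> str:
        if looks_health_related(prompt):
            return DISCLAIMER + "\n\n" + model_reply
        return model_reply

A real deployment would probably classify the question with the model itself rather than a keyword list, but the principle is the same: detect, prepend, answer.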
"Warning, this washing machine might burn your house down" is not sufficient to escape punishment. Why should digital technology get a pass just because the product that's offered is intangible?
> "There may have been multiple factors contributing to the man’s psychosis, and his exact interaction with ChatGPT remains unverified. The medical team does not have access to the chatbot conversation logs and cannot confirm the exact wording or sequence of messages that led to the decision to consume bromide."
Any legal liability for providing information is fraught with opportunities for abuse, so bigly so that it should never be considered.
Look back. At no point did I suggest AI should be banned or outlawed. My remedy for washing machines burning down houses isn't to ban washing machines. It's to ensure there are appropriate incentives in place (legal, financial, reputational) to encourage private industry to consider the potential negative externalities of what they're doing.
> This case also highlights how the use of artificial intelligence (AI) can potentially contribute to the development of preventable adverse health outcomes. Based on the timeline of this case, it appears that the patient either consulted ChatGPT 3.5 or 4.0 when considering how he might remove chloride from this diet. Unfortunately, we do not have access to his ChatGPT conversation log and we will never be able to know with certainty what exactly the output he received was, since individual responses are unique and build from previous inputs.
However I don't see this single negative instance of a vast social-scale issue as much more than fear/emotion-mongering without at least MENTIONING that LLMs also have positive effects. Certainly, it doesn't seem like science to me. Unless these models are subtly leading otherwise healthy and well-adjusted users to unhealthy behavior, I don't see how this interaction with artificial intelligence is any different from the billions of confirmation-bias pitfalls that already occur daily using Google and natural stupidity. From the article:
> The case also raises broader concerns about the growing role of generative AI in personal health decisions. Chatbots like ChatGPT are trained to provide fluent, human-like responses. But they do not understand context, cannot assess user intent, and are not equipped to evaluate medical risk. In this case, the bot may have listed bromide as a chemical analogue to chloride without realizing that a user might interpret that information as a dietary recommendation.
It just seems they've got an axe to grind and no technical understanding of the tool they're criticizing.
To be fair, I feel there's much to study and discuss about pernicious effects of LLMs on mental health. I just don't think this article frames these topics constructively.
> We’ve made significant advances in reducing hallucinations, improving instruction following, and minimizing sycophancy, and have leveled up GPT-5’s performance in three of ChatGPT’s most common uses: writing, coding, and health.
That was the first time I'd seen "health" listed as one of the most common uses of ChatGPT.
This is a natural extension of webmd type stuff, with the added issue that hypochondriacs can now get even more positive reinforcement that they definitely have x, y, and z rare and terrible diseases.
"On MedXpertQA MM, GPT-5 improves reasoning and understanding scores by +29.62% and +36.18% over GPT-4o, respectively, and surpasses pre-licensed human experts by +24.23% in reasoning and +29.40% in understanding."
A lot of the diagnostic process is pattern matching symptoms/patient history to disease/condition, and those to drug/treatment.
Of course LLMs can always fail catastrophically, which is why their output needs to be filtered through proper medical advice.
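As a toy illustration of that pattern-matching framing (the condition profiles below are invented and nowhere near clinical quality):

    # Score candidate conditions by overlap with reported symptoms.
    CONDITIONS = {
        "influenza": {"fever", "cough", "fatigue", "muscle aches"},
        "bromism":   {"psychosis", "fatigue", "acne", "ataxia"},
        "migraine":  {"headache", "nausea", "light sensitivity"},
    }

    def rank_conditions(symptoms: set[str]) -> list[tuple[str, float]]:
        scores = [(name, len(symptoms & profile) / len(profile))
                  for name, profile in CONDITIONS.items()]
        return sorted(scores, key=lambda pair: pair[1], reverse=True)

    print(rank_conditions({"fatigue", "psychosis", "acne"}))
    # [('bromism', 0.75), ('influenza', 0.25), ('migraine', 0.0)]

An LLM does something far fuzzier than this, but the failure mode is the same: a plausible match with no notion of risk.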
https://cdn.openai.com/pdf/18a02b5d-6b67-4cec-ab64-68cdfbdde...
> Update the Tracked Categories of frontier capability accordingly, focusing on biological and chemical capability, cybersecurity, and AI self-improvement. Going forward we will handle risks related to persuasion outside the Preparedness Framework, including via our Model Spec and policy prohibitions on the use of our tools for political campaigning or lobbying, and our ongoing investigations of misuse of our products (including detecting and disrupting influence operations).
But, by their own definition, the purpose of this framework is:
> By “severe harm” in this document, we mean the death or grave injury of thousands of people or hundreds of billions of dollars of economic damage.
I would posit that presenting confident and wrong medical advice in a persuasive manner can cause the grave injury of thousands of people, and may have already done so. One could easily imagine an AI that is aligned to provide high-temperature responses to medical questions, if given the wrong type of incentive on a battery of those questions, or to highly weight marketing language for untested therapies... and to do so only when presented with a user that is somehow classified as more persuadable than a researcher's persona.
That this is being passed off to normal safety teams and brushed aside, despite being in scope for the Preparedness Framework by their own severe-harm definition, seems indicative of a larger lack of concern for this at OpenAI.
You realize that it's not only idiots like that guy who use LLMs, but also medical professionals trying to help patients and save lives?
"I'm sorry, but I am unable to give medical advice. If you have medical questions, please set up an appointment with a certified medical professional who can tell you the pros and cons of hammering a nail into your head."
We need to totally ban LLMs from doing therapy-like conversations, so that a handful of totally unhinged people don't do crazy stuff. And of course everyone needs to pay a human for therapy to stop this.
there is no informational point to this article if the entire crux is "the patient wanted to eat less 'chloride' and claims ChatGPT told him about sodium bromide". based on this article, the interaction could have been as minimal as the guy asking for the existence of an alternative salt to sodium chloride, unqualified information he equally could have found on a chemistry website or Wikipedia
Doctors as a group often try to solve health problems by looking for societal trends. It’s how a lot of diseases get spotted. They’re not saying that using an LLM is the dangerous thing, they’re saying there might be some correlation between soliciting advice from the machine and unusual conditions and it merits further study, so please ask your patients.
essentially I think it's telling that there are zero screenshots of the original conversation or an attempted replication in the article or the report, when there's no good reason that there wouldn't be. I often enjoy reading your work, so I do have some trust in your judgment, but this whole article strikes me as off, like the people behind it have been waiting for something like this to happen as an excuse to jump on it and get credit, rather than it actually being a major problem
It seems factual that this person decided to start consuming bromine and it had an adverse effect on them.
When asked why, they said ChatGPT told them it was a replacement for chloride.
Maybe the patient lied about that, but it doesn't seem out of the realms of possibility to me.
certainly
> Why would medical professionals mislead on this though?
I'm not suggesting it's intentional, but: to get credit for it; or because it's something they'd been consciously or subconsciously expecting and they're fitting to that expected pattern
> When asked why, they said ChatGPT told them it was a replacement for chloride. Maybe the patient lied about that, but it doesn't seem out of the realms of possibility to me.
of course it's not impossible, it's not even particularly unlikely, but, if we're going to use a sample size of 1 like this, then surely we want something a bit more concrete than the unevidenced claim of a recently psychotic patient?
more broadly though, this isn't so much a chatgpt issue as it is an educational dietary issue. the patient seems to have got a funny idea about the health effects of salt, likely from traditional or social media, and then he's tried to find an alternative. whether the alternative was from ChatGPT, or Wikipedia, or other, doesn't seem very relevant to me
> NaBr has a very low toxicity with an oral LD50 estimated at 3.5 g/kg for rats.[6] However, this is a single-dose value. Bromide ions are a cumulative toxin with a relatively long biological half-life (in excess of a week in humans): see potassium bromide.
At no point does the paragraph you linked suggest it's safe to substitute NaCl with any other sodium salt.
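The "cumulative" part is the whole point. With repeated daily intake and a half-life of over a week, the steady-state body burden is many times a single dose; a back-of-the-envelope sketch (the 12-day half-life is assumed purely for illustration, not a clinical figure):

    # Steady-state accumulation for repeated dosing of a slowly eliminated ion.
    half_life_days = 12.0          # illustrative assumption only
    dose_interval_days = 1.0

    retained_fraction = 2 ** (-dose_interval_days / half_life_days)  # fraction left after one day
    accumulation_factor = 1 / (1 - retained_fraction)                # multiples of one dose at steady state

    print(round(accumulation_factor, 1))  # ~17.8: roughly 18 daily doses on board at steady state

So a "low toxicity" single-dose LD50 says very little about what weeks of seasoning your food with it will do.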
and I sincerely doubt that ChatGPT said anything about it being safe to substitute for NaCl
The hallucination engine doesn't know anything about what it told the man, because it neither knows nor thinks things. It's a data model and an algorithm.
The humans touting it and bigging it up, so they'll get money, are the problem.
This person, sadly, and unfortunately, gaslit themselves using the LLM. They need professional help. This is not a judgement. The Psypost article is a warning to professionals more than it is anything else: patients _do_ gaslight themselves into absurd situations, and LLMs just help accelerate that process, but the patient had to be willing to do it and was looking for an enabler and found it in an LLM.
Although I do believe LLMs should not be used as "chat" models, and only for explicit, on-rails text completion and generation tasks (in the functional lorem ipsum sense), this does not actually seem to be the fault of the LLM directly.
I think providers should be forced to warn users that LLMs cannot factually reproduce anything, but I think this person would have still weaponized LLMs against themselves, and this would have been the same outcome.
Relevant section:
> Based on the timeline of this case, it appears that the patient either consulted ChatGPT 3.5 or 4.0 when considering how he might remove chloride from this diet. Unfortunately, we do not have access to his ChatGPT conversation log and we will never be able to know with certainty what exactly the output he received was, since individual responses are unique and build from previous inputs.
> However, when we asked ChatGPT 3.5 what chloride can be replaced with, we also produced a response that included bromide. Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do.
I’ve caught myself with this a few times when I sort of suggest a technical solution that, in hindsight, was the wrong way to approach a problem. The LLM will try to find a way to make that work without taking a step back and suggesting that I didn’t understand the problem I was looking at.
People are starting to use LLMs in a similar fashion -- to confirm and thus magnify whatever wrong little notion they have in their heads until it becomes an all-consuming life mission to save the world. And the LLMs will happily oblige because they aren't really thinking about what they're saying, just choosing tokens on a mathematical "hey, this sounds nice" criterion. I've seen this happen with my sister, who is starting to take seriously the idea that ChatGPT was actually created 70 years ago at DARPA based on technology recovered from the Roswell crash, based on her conversations with it.
I can't really blame the LLMs entirely for this, as like the raven they're unwittingly being used to justify whatever little bit of madness people have in them. But we all have a little bit of madness in us, so this motivates me further to avoid LLMs entirely, except maybe for messing around.
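For what it's worth, the "hey, this sounds nice" criterion is mechanically just sampling from a probability distribution over next tokens. A toy sketch, with an invented vocabulary and scores:

    # Toy next-token sampling: softmax over scores, then a weighted draw.
    import math, random

    def sample_next(scores: dict[str, float], temperature: float = 1.0) -> str:
        tokens = list(scores)
        weights = [math.exp(scores[t] / temperature) for t in tokens]
        total = sum(weights)
        return random.choices(tokens, weights=[w / total for w in weights])[0]

    print(sample_next({"bromide": 2.1, "iodide": 1.7, "nothing": 0.3}))

There is no step anywhere in that loop that checks whether the chosen token is true or safe.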
That's not to say this isn't rooted in mental illness, or perhaps a socially-motivated state of mind that causes a total absence of critical thinking, but some kind of noise needs to be made and I think public (ahem) recognition would be a good way to go.
A catchy name is essential - any suggestions?
Talking about AI like it's sentient and a monolith is the problem.
It's like saying computers give bad health advice because the internet.
incomingpain•5mo ago
Ok so the article is blaming chatgpt but this is ridiculous.
Where do you buy this bromide? It's not like it's in the spices aisle. The dude had to go buy a hot tub cleaner like Spa Choice Bromine Booster Sodium Bromide
and then sprinkle that on his food. I don't care what chatgpt said... that dude is the problem.
bluefirebrand•5mo ago
This reminds me of people who fall for phone scams or whatever. Some number of the general population is susceptible to being scammed, and they wind up giving away their life's savings or whatever to someone claiming to be their grandkid
There's always an asshole saying "well that's their own fault if they fall for that stuff" as if they chose it on purpose instead of being manipulated into it by other people taking advantage of them
See it a lot on this site, too. How clever are the founders who exploit their workers and screw them out of their shares, how stupid are the workers who fell for it
bluefirebrand•5mo ago
Can't we have some empathy for people just trying to do their best in a world where so many people are trying to take advantage of them?
Their victims are often the vulnerable ones in our society too. The elderly, the infirm, the mentally ill. It's not just "stupid people fall for scams"; it only takes one lapse of judgement over a lifetime of being targeted. Come on
rowanG077•5mo ago
It's quite dirty to bring up the elderly, infirm and mentally ill. Because of course they cannot help themselves. Those groups are not what this is about, and you damn well know it. This is about normal functioning adults walking into scams with their eyes open. And yes that group has a responsibility to keep up with the scams that are commonplace. It's ridiculous to encourage people to go through life with their blinders on because "the world should just be a fair place". Yeah it should be, but tough luck, reality is different.
bluefirebrand•5mo ago
Normal functioning adults will also benefit if we take steps to protect the infirm and dysfunctional
That's why it isn't meant to be "dirty" to bring up the vulnerable in society. If we take sufficient steps to protect them, we all benefit
crazygringo•5mo ago
I have literally never seen that expressed on HN.
In every case where workers are screwed out of shares, the sympathy among commenters seems to be 100% with the employees. HN is pretty anti-corporate overall, if you haven't noticed. Yes it's pro-startup but even more pro-employee.
bluefirebrand•5mo ago
My observation is that any given thread can go either way, and it sometimes feels like a coin toss which side of HN will be most represented.
Yes, I have seen quite a lot of anti-corporate posts, but I also see quite a few anti-employee posts. This is likely my own negative bias but I think many users here are generally pro-Capital which aligns them with corporate interests even if they are some degree of anti-corporate anyways
Probably I just fixate too much on the posts I have a negative reaction to
crazygringo•5mo ago
I'm genuinely curious, because I can't think of any. But I'm wondering if maybe I'm mentally categorizing posts differently from you?
beardyw•5mo ago
People are born with certain attributes. Some are born tall, some left handed and some gullible. None of those is a reason to criticise them.
dinfinity•5mo ago