> If you read nothing else, read this: do not ever use an AI or the internet for medical advice.
Your comment seems out of place unless the article was edited in the 10 minutes since the comment was written.
If you want to be a doctor, go to medical school. Otherwise talk to someone who did.
He basically said, "I'm not worried yet. But I would never recommend someone do that. If you have health insurance, that's what you pay for, not for Google to tell you you're just fine, you really don't have cancer."
Thinking about a search engine telling me I don't have cancer was enough to scare the bejesus out of me that I swung to the completely opposite direction and for several years became a hypochondriac.
This was also fodder for a lot of stand up comedians. "Google told me I either have the flu, or Ebola, it could go either way, I don't know."
Except the author did it wrong. You don't just ignore a huge rash that every online resource will say is Lyme disease. If you really want to trust an LLM, at least prompt it a few different ways.
If you've seen multiple doctors, specialists, etc over the span of years and they're all stumped or being dismissive of your symptoms, then the only way to get to the bottom of it may be to take matters into your own hands. Specifically this would look like:
- carefully experimenting with your living systems, lifestyle, habits, etc., ideally with at least occasional check-ins with a professional. This requires discipline and can be hard to do well, but it also sometimes discovers the best solutions (a lifestyle change that solves the problem instead of a lifetime of suffering or dependence on speculative pharmaceuticals)
- doing thoughtful, emotionally detached research (reading published papers slowly over a long time, e.g. weeks or months). This is also very hard, but sometimes you can discover things doctors didn't consider. The key is to be patient and stay curious, to avoid an emotional rollercoaster and wasting doctor time. Not everyone is capable of this.
- going out of your way to gather data about your health (logging what you eat, what you do, your stress levels, etc.; testing your home for mold; checking vitals, heart rate variability, etc.)
- presenting any data you gathered and research you discovered that you think may be relevant to a doctor for interpretation
Again, I want to emphasize that taking your health matters into your own hands like this only makes sense after multiple professionals have been unhelpful AND if you're capable of doing so responsibly.
It's anything beyond that which I think needs medical attention.
If I had some weird symptoms that I didn't understand, or even well known warning signs for something, I'd go to a doctor. What is Google going to tell me that I can trust or even evaluate? I don't know anything about internal medicine, I'll ask someone who studied it for 8 years and works in the field professionally.
If you can afford that; many can't.
My flareups and their accompanying setbacks have been greatly reduced because I keep a megathread chat going with Gemini. I have pasted in a symptom diary, all my medications, and I check any alterations to my food or drink with it before they go anywhere near my mouth. I have thus avoided foods that are high FODMAP, slow digesting, or surprisingly high in fat or acidity.
This has really helped. I am trying to maintain my calories, so advice like “don’t risk X, increase Y instead” is immediate and actionable.
The presumption that asking an LLM is never a good choice assumes a health service where you can always get a doctor or dietician on the other end of the phone. In the UK, consultations with either for something non-urgent can take weeks, which is why people are usually pushed towards either asking a pharmacist or going to the local emergency department (which is often not so local these days).
So the _real_ choice is between the LLM and my best guess. And I haven’t ingested the open web, plus countless medical studies and journals.
I would always prefer a doctor’s advice over consulting an LLM. However, if I was stuck in Antarctica with no ability to consult a doctor, I would definitely use an LLM. The problem is there are people in society that are effectively isolated from medical care (cost, access, etc) so they might as well be in Antarctica, as far as medical care is concerned.
But I have to say that prompt is crazy bad. AI is VERY good at using your prompt as the basis for the response; if you say "I don't think it's an emergency", the AI will write a response that says "it's not an emergency".
I did a test with the first prompt and the immediate answer I got was "this looks like lyme disease".
At no point was I just going to commit to some irreversible decision it suggested without confirming it myself or elsewhere, like blindly replacing a part. At the same time, it really helped me because I'm too much of a noob to even know what to Google (every term above was new to me).
Llama said "syphilis" with 100% confidence, ChatGPT suggested several different random diseases, and Claude at least had the decency to respond "go to a fucking doctor, what are you stupid?", thereby proving to have more sense than many humans in this thread.
It's not a matter of bad prompting, it's a matter of this being an autocomplete with no notion of ground truth and RLHF'd to be a sycophant!
Just 100B more parameters bro, I swear, and we will replace doctors.
Both the ChatGPT o3 and 5.1 Pro models have helped me a lot in diagnosing illnesses, given the right queries. For medical questions I use lots of queries with different contexts / context lengths, because these questions are very serious.
They also give better answers if I use medical language, since they retrieve answers from higher-quality articles.
I still went to doctors and got more information from them.
I also get blood tests and an MRI before going to doctors, and the great doctors actually like that I arrive prepared but still open to their diagnosis.
The problem isn't getting medical advice from LLMs, it's blindly trusting the medical advice an LLM gives you.
You do not need to trust the LLM for it to be able to save your life; you do need to trust the LLM for it to be able to harm you.
Is this just a vibe thing or has anyone actually done some statistics on this?
Sounds like it's something you assume is true solely based on the fact that you personally feel like LLMs are good at medical advice.
There's extensive reporting on this from pretty decent news sources, and countless forum posts from seemingly real non-throwaway accounts which aren't otherwise spamming LLM content.
> based on the fact that you personally feel like LLMs are good at medical advice.
No, I don't think that's the case. LLMs sometimes give good advice, and the bad advice isn't harmful unless you blindly trust it.
People go to a doctor or two, their symptoms are dismissed as [insert very common problem here], ask LLM, receive advice suggesting that [uncommon condition] also fits the symptoms and then take that to a doctor.
This also happens using Google every day. LLMs aren't special, they just make searching the internet a bit easier.
> "You need to go to the emergency room right now."
> So, I drive myself to the emergency room
It is absolutely wild that a doctor can tell you "you need to go to the emergency room right now", and that getting there is an act left to someone who is obviously so unwell they need to go to the ER right now. With a neck so stiff, was OP even able to look around properly while driving?
Yeah, no shit, Sherlock? I'd be absolutely embarrassed to even admit to something like this, let alone share "pearls of wisdom" like "don't use a machine that guesses its outputs based on whatever text it has been fed" to freaking diagnose yourself. Who would have thought that an individual professional with decades of theoretical and practical training, AND actual human intelligence (or do we need to call it HGI now), plus tons of experience, is more trustworthy, reliable, and qualified to deal with something as serious as the human body? Plus there are hundreds of thousands of such individuals, and they don't need to boil an ocean every time they solve a problem in their domain of expertise. Compare that to a product of the enshittified tech industry, which in recent years has only ever given us irrelevant "apps" to live in, without addressing the really important issues of our time. Heck, even Peter Thiel agrees with this, at least he did in "Zero to One".
Blindly trusting medical info from LLMs is idiotic and can kill you.
Pretty much any tool will be dangerous if misused.
No it's not - LLMs are not medical experts. Nor are they experts in pretty much anything. They just statistically extrapolate the next token. If you fed them anti-vaxxer information, they'd start recommending you not get vaccinated so as to not get autism or something like that. We should not use them as experts on anything, much less for medical information.
On the other hand, if you want to use them to generate large amounts of text and images, sure go do that. They can do that I guess.
So what? That does not mean they're not very good at searching the internet and often provide useful information.
> If you fed them anti-vaxxer information, they'd start recommending you not get vaccinated so as to not get autism or something like that.
I specifically addressed this in the very short comment you're replying to, but I will repeat:
>Blindly trusting medical info from LLMs is idiotic and can kill you.
> I'll let slide the obvious issue of hallucinating URLs
That indeed was an issue, but I can't remember the last time I've encountered that now that agentic web browsing is everywhere. I guess the cheapest models might still be affected by that?
The real lesson here is "learn to use an LLM without asking leading questions". The author is correct, they're very good at picking up the subtext of what you are actually asking about and shaping their responses to match. That is, after all, the entire purpose of an LLM. If you can learn to query in such a way that you avoid introducing unintended bias, and you learn to recognize when you've "tainted" a conversation and start a new one, they're marvelous exploratory (and even diagnostic) tools. But you absolutely cannot stop with their outputs - primary sources and expert input remain supreme. This should be particularly obvious to any actual experts who do use these tools on a regular basis - such as developers.
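To make the "avoid leading questions" point concrete, here is a minimal sketch of the difference between a leading and a neutral query, assuming the OpenAI Python client; the prompts, and the model name, are placeholders rather than anything from the article:

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

# Leading prompt: smuggles the desired conclusion into the question.
leading = "I have a rash but it's not itchy or painful, so it's probably nothing serious, right?"

# Neutral prompt: states only observations, then asks for a differential and red flags.
neutral = (
    "I have an expanding circular rash on my torso. It is not itchy or painful. "
    "It appeared about a week after hiking in a wooded area. "
    "List plausible causes, which of them are time-sensitive, and what would rule each in or out."
)

for label, prompt in (("leading", leading), ("neutral", neutral)):
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(resp.choices[0].message.content)
```

The same applies in a chat UI: if an answer seems to be echoing your framing back at you, start a fresh conversation and restate the symptoms without your own hypothesis attached.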
I'm certainly not suggesting that you should ask LLM for medical diagnoses, but still, someone who actually understands the tool they're using would likely not have ended up in your situation.
Should they not have done so?
Like this guy for example, was he being stupid? https://www.thesun.co.uk/health/37561550/teen-saves-life-cha...
Or this guy? https://www.reddit.com/r/ChatGPT/comments/1krzu6t/chatgpt_an...
Or this woman? https://news.ycombinator.com/item?id=43171639
This is a real thing that's happening every day. Doctors are not very good at recognizing rare conditions.
They got lucky.
This is why I wrote this blog post. I'm sure some people got lucky when an LLM managed to give them the right answer, and they go and brag about it. How many people got the wrong answer? How many of them bragged about their bad decision? This is _selection bias_. I'm writing about my embarrassing lapse of judgment because I doubt anyone else will.
When AI saves lives, it's selection bias.
When AI gives bad advice after being asked leading questions by a user who clearly doesn't know how to use AI correctly, AI is terrible and nobody should ask it about medical stuff.
Or perhaps there's a more reasonable middle ground? "It can be very useful to ask AI medical questions, but you should not rely on it exclusively."
I'm certainly not suggesting that your story isn't a useful example of what can go wrong, but I insist that the conclusions you've reached are in fact mistaken.
The difference between your story and the stories of the people whose lives were saved by AI is that they generally did not blindly trust what the AI told them. It's not necessary to trust AI to receive helpful information; it is basically necessary to trust AI in order to hurt yourself using it.
I completely disagree. I think we should let this act as a form of natural selection, and once every pro-AI person is dead we can get back to doing normal things again.
But it was just a search tool. It could only tell you if someone else was thinking about it. Chatbots as they are presented are a pretty sophisticated generation tool. If you ground them, they function fantastically to produce tools. If you allow them to search, they function well at finding and summarizing what people have said.
But Earth is not a 4-corner 4-day simultaneous time cube. That's on you to figure out. Everyone I know these days has a story of a doctor searching for their symptoms on Gemini or whatever in front of them. But it reminds me of a famous old hacker koan:
> A newbie was trying to fix a broken Lisp machine by turning it off and on.
> Thomas Knight, seeing what the student was doing, reprimanded him: "You cannot fix a machine by just power-cycling it with no understanding of what is going wrong."
> Knight then power-cycled the machine.
> The machine worked.
You cannot ask an LLM without understanding the answer and expect it to be right. The doctor understands the answer. They ask the LLM. It is right.
Call in to 811 and get some pre-screening. Usually it's "go to the urgent care" or "sleep it off", but it's a good sanity check, and you usually get treated better when you say "811 told me to come in ASAP".
"Turns out it was Lyme disease (yes, the real one, not the fake one) and it (nearly) progressed to meningitis"
What does "not the fake one" mean? I must be missing something.
Lyme is a bacterial infection and can be cured with antibiotics. Once the bacteria are gone, you no longer have Lyme disease.
However, there is a lot of misinformation about Lyme online. Some people think Lyme is a chronic, incurable disease, which they call "chronic Lyme". Often, when a celebrity tells people they have Lyme disease, this is what they mean. Chronic Lyme is not a real thing - it is a diagnosis given to wealthy people by unqualified conmen or unscrupulous doctors in response to vague, hard-to-pin-down symptoms.
The late stage of Lyme disease is painful. Like "I think I'm dying" painful. It does have a range of symptoms, but those show up about 3 to 6 weeks after the initial infection.
A lot of people claiming chronic Lyme disease don't remember this stage.
Lyme disease does cause a range of problems if left untreated. But not before the "I think I'm dying" stage. It's basically impossible for someone, especially someone with a lot of wealth, to get Lyme disease and not have it caught early on.
Consider the OP's story. They tried to not treat it but ended up thinking "OMG, I think I have meningitis and I'm going to die!".
Lyme can kill, but it rarely does. Partially because before it gets to that point it drives people to seek medical attention.
LLM sees:
my rash is not painful
i don't think it's an emergency
it might be leftover from the flu
my wife had something similar
doctors said it would go away on it's own
i want to avoid paying a doctor
LLM: Honestly? It sounds like it's not serious and you should save your money.
YouTuber ChubbyEmu (who makes medical case reviews in a somewhat entertaining and accessible format) recently released a video about a man who suffered a case of bromism (which almost never happens anymore) after consulting an LLM. [0]
Note: I haven't updated this comment template recently, so the versions may be a bit outdated.
Now, I live in Germany, where over the last 20 years our healthcare system has fallen victim to neoliberal capitalism, and since I am publicly insured by choice I often have to wait weeks to see a specialist. So, more often than not, LLMs have helped me stay calm and help myself as best I can. However, I still view the output as less than that of a medical professional and try to stay skeptical along the way. I feel like they augment my guesswork and judgement, but do not replace it.
Fuck man if this is you go to the ER.
Also: Amoxicillin is better than its reputation. Three doctors might literally recommend four different antibiotic dosages and schedules. Double-check everything; your doctor might be at the end of a 12-hour shift and is just as human as you. Lyme is very common and best treated early.
Edit: Fixed formatting
Last time I was in for getting hundreds of tick bites on one hike (that was fun), I was also told to avoid eating red meat until the labs came back. Alpha-gal syndrome is getting more common in my area, and the first immune response is anaphylactic in 40% of cases, so best not to risk it.
If you wonder what one side of one leg looked like during the "hundreds of tick bites on a single hike" take a gander: https://www.dropbox.com/scl/fi/jekrgxa9fv14j28qga7xc/2025-08...
That was on both legs, both sides, all the way up to my knees.
Yeah, if you develop a rash from a tick bite that even remotely looks like it could be Lyme, just go to a pet store and buy amoxicillin (you can get exactly the same stuff they give to humans) if you can't quickly find a doctor who'll take it seriously enough to immediately write you a prescription (unless, of course, they have a very well reasoned explanation for not doing so).
The potential consequences of not getting fast treatment are indeed so, so much worse than the practically non-existent consequences of taking amoxicillin when you don't need it, unless you're a crazy hypochondriac who constantly thinks they might have Lyme.
But hey, also don't blindly trust medical advice from HN commenters telling you to go buy pet store antibiotics :)
Moral of the story kids: don't post on HN
Even if you absolutely despise LLMs, this is just silly. The problem here isn't "AI enthusiasts"; you're getting called out for the absolute lack of nuance in your article.
Yes, people shouldn't do what you did. Yes, people will unfortunately continue doing what you did until they get better advice. But the correct nuanced advice in a HN context is not "never ask LLMs for medical advice", you will rightfully get flamed for that. The correct advice is "never trust medical advice from LLMs, it could be helpful or it could kill you".
If you're not going to blindly trust it, why not ask? What could possibly go wrong? At best you receive useful suggestions to take to a doctor, or guidance on which kind of specialist you should try to talk to. At worst you receive useless advice and maybe waste a bit of time.
It is up to you to query them for the best output, and put the pieces together. If you bias them wrongly, it's your own fault.
For every example where an LLM misdiagnosed, a PCP could do much worse. People should think of them as idea generators, subjecting the generated ideas to diagnostic validation tests. If an idea doesn't pan out, keep querying until you hit upon the right idea.
> I have this rash on my body, but it's not itchy or painful, so I don't think it's an emergency?
If you cannot use punctuation correctly, of course you cannot diagnose yourself.
(Also, it is the fault of the LLM vendor too, for allowing medical questions to be answered.)
This should be a configurable option.