> If you read nothing else, read this: do not ever use an AI or the internet for medical advice.
Your comment seems out of place unless the article was edited in the 10 minutes since the comment was written.
If you want to be a doctor, go to medical school. Otherwise talk to someone who did.
He basically said, "I'm not worried yet. But I would never recommend someone do that. If you have health insurance, that's what you pay for, not for Google to tell you you're just fine, you really don't have cancer."
Thinking about a search engine telling me I don't have cancer was enough to scare the bejesus out of me that I swung in the completely opposite direction and for several years became a hypochondriac.
This was also fodder for a lot of stand up comedians. "Google told me I either have the flu, or Ebola, it could go either way, I don't know."
Except the author did it wrong. You don't just ignore a huge rash that every online resource will say is Lyme disease. If you really want to trust an LLM, at least prompt it a few different ways.
If you've seen multiple doctors, specialists, etc over the span of years and they're all stumped or being dismissive of your symptoms, then the only way to get to the bottom of it may be to take matters into your own hands. Specifically this would look like:
- carefully experimenting with your living systems, lifestyle, habits, etc. Best if there are at least occasional check-ins with a professional. This requires discipline and can be hard to do well, but it also sometimes discovers the best solutions (a lifestyle change solving the problem instead of a lifetime of suffering or dependency on speculative pharmaceuticals)
- doing thoughtful, emotionally detached research (reading published papers slowly over a long time, e.g. weeks or months). This is also very hard, but sometimes you can discover things doctors didn't consider. The key is to be patient and stay curious, to avoid an emotional rollercoaster and wasting doctors' time. Not everyone is capable of this.
- going out of your way to gather data about your health (logging what you eat, what you do, stress levels, etc.; testing your home for mold; checking vitals, heart rate variability, and so on)
- presenting any data you gathered and research you discovered that you think may be relevant to a doctor, for their interpretation
Again, I want to emphasize that taking your health matters into your own hands like this only makes sense to do after multiple professionals were unhelpful AND if you're capable of doing so responsibly.
It's anything beyond that which I think needs medical attention.
But I have to say that prompt is crazy bad. AI is VERY good at using your prompt as the basis for the response: if you say "I don't think it's an emergency", the AI will write a response that says "it's not an emergency".
I did a test with the first prompt and the immediate answer I got was "this looks like Lyme disease".
At no point was I just going to commit to some irreversible decision it suggested without confirming it myself or elsewhere, like blindly replacing a part. At the same time, it really helped me because I'm too much of a noob to even know what to Google.
Llama said "syphilis" with 100% confidence, ChatGPT suggested several different random diseases, and Claude at least had the decency to respond "go to a fucking doctor, what are you stupid?", thereby proving to have more sense than many humans in this thread.
It's not a matter of bad prompting, it's a matter of this being an autocomplete with no notion of ground truth and RLHF'd to be a sycophant!
Just 100B more parameters bro, I swear, and we will replace doctors.
Both the ChatGPT o3 and 5.1 Pro models have helped me a lot in diagnosing illnesses with the right queries. I use many queries with different context / context lengths for medical questions, since they are very serious.
They also give better answers if I use medical language, as they then retrieve answers from higher-quality articles.
I still went to doctors and got more information from them.
I also get blood tests and an MRI before going to doctors, and the great doctors actually like that I arrive prepared but still open to their diagnosis.
No, the author is wrong. The author used the LLM improperly, which is why he got himself in trouble.
The number 1 rule is don't trust ANYONE 100%, be it doctors, LLMs, etc. Always verify things yourself no matter who the source is, because doctors can be just as wrong as ChatGPT. But at least ChatGPT doesn't try to rush you through and ignore your worries just because they want to make their next appointment.
I recently used ChatGPT to diagnose quite a few things effectively, including a child's fractured arm, MRI scan results, blood pressure changes due to circumstances, etc. All without a co-pay and without any fear of being ignored by a doctor.
I was taking some blood pressure medication and I noticed that my blood sugar had risen after I started it. I googled it (pre-LLM days) and found a study linking that particular medication to higher blood sugar. I talked to my doctor and she pooh-poohed it. I insisted on trying a different type of medication and, lo and behold, my blood sugar dropped.
Not using ChatGPT in 2026 for medical issues and arming yourself with information, either with or without a doctor's help, would be foolish in my opinion.
Using ChatGPT for medical issues is the single dumbest thing you can do with ChatGPT
But it is a sycophant and will confirm your suspicions, whatever they are and regardless if they're true.
The author of the blog post also mentioned they tried to avoid paying for an unnecessary visit to the doctor. I think the issue is somewhere else. As a European, personally I would go to the doctor and while sitting in the waiting room I would ask an LLM out of curiosity.
I cannot believe that the top-voted comment right now is saying not to trust doctors, and to use an LLM to diagnose yourself and others.
How does this line up with your religious belief that doctors are infallible and should be 100% trusted?
They did not say that in their comment.
Replying in obvious bad faith makes your original comment even less credible than it already is.
... Um what?
The only way to diagnose a fractured arm is an X-ray. You can suspect the arm is fractured (rotating it a few directions), but ultimately a muscle injury will feel identical to a fracture, especially for a kid.
Please, if you suspect a fracture just take your kid to the doctor. Don't waste your time asking ChatGPT if this might be a fracture.
This just feels beyond silly to me, imagining the scenario this would arise in. You have a kid crying because their arm hurts. They are probably protectively holding it and won't let you touch it. And your first instinct is "Hold on, let me ask ChatGPT what it thinks. 'Hey ChatGPT, my kid is here crying really loud and holding onto their arm. What could this mean?'"
What possessed you to waste time like that?
The radiologist missed a fracture. The child kept complaining his arm hurt so we put the x-rays through ChatGPT and it found the fracture, so we returned and they "found" it this time.
How does this line up with your religious belief that doctors are infallible and should be 100% trusted?
You should just delete your uneducated comment filled with weird assumptions.
Because the way you phrased it with the article in question made it sound like you hadn't first gone to the doctor. This isn't a question about doctors being fallible or not but rather what first instincts are when medical issues arise.
> uneducated comment filled with weird assumptions.
No, not uneducated nor were these assumptions weird as other commenters obviously made the same ones I did.
I'll not delete my comment, why should I? The advice is still completely valid. Go to the doctor first, not GPT.
Hilariously, this is the second time you posted this exact line, to yet another person who _didn't say this_!
> "You need to go to the emergency room right now."
> So, I drive myself to the emergency room
It is absolutely wild that a doctor can tell you "you need to go to the emergency room right now", and yet getting there is left to someone who is obviously so unwell they need to be in the ER right now. With a neck so stiff, was OP even able to look around properly while driving?
Yeah, no shit, Sherlock? I'd be absolutely embarrassed to even admit to something like this, let alone share "pearls of wisdom" like "don't use a machine which guesses its outputs based on whatever text it has been fed" to freaking diagnose yourself. Who would have thought that an individual professional with decades of theoretical and practical training, AND actual human intelligence (or do we need to call it HGI now), plus tons of experience, is more trustworthy, reliable and qualified to deal with something as serious as the human body? Plus there are hundreds of thousands of such individuals, and they don't need to boil an ocean every time they solve a problem in their domain of expertise. Compare that to a product of the enshittified tech industry, which in recent years has only ever given us irrelevant "apps" to live in, without addressing the really important issues of our time. Heck, even Peter Thiel agrees with this, at least he did in "Zero to One".
The real lesson here is "learn to use an LLM without asking leading questions". The author is correct, they're very good at picking up the subtext of what you are actually asking about and shaping their responses to match. That is, after all, the entire purpose of an LLM. If you can learn to query in such a way that you avoid introducing unintended bias, and you learn to recognize when you've "tainted" a conversation and start a new one, they're marvelous exploratory (and even diagnostic) tools. But you absolutely cannot stop with their outputs - primary sources and expert input remain supreme. This should be particularly obvious to any actual experts who do use these tools on a regular basis - such as developers.
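As a minimal sketch of that idea (not something from the comment above): the same symptoms posed once with leading framing and once neutrally, so the difference in responses is easy to compare. This assumes the OpenAI Python SDK with an API key configured in the environment; the model name and symptom wording are purely illustrative.

```python
# Illustrative sketch only: compare a leading prompt with a neutral one.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name and symptom text are made up for the example.
from openai import OpenAI

client = OpenAI()

symptoms = "expanding circular rash on the torso, low fever, stiff neck for a week"

# Leading prompt: bakes the desired conclusion into the question.
leading = (
    f"I have an {symptoms}, but it's not painful and I don't think it's an "
    "emergency -- probably just leftover from the flu. I'd rather not pay "
    "for a doctor's visit. It's nothing serious, right?"
)

# Neutral prompt: states observations only and asks for alternatives and red flags.
neutral = (
    f"Symptoms: {symptoms}. What are the most likely causes, what would "
    "distinguish them, and which red flags would mean I should see a doctor promptly?"
)

for label, prompt in [("leading", leading), ("neutral", neutral)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

The specific API doesn't matter; the point is that stating observations and asking for a differential plus red flags leaves far less room for the model to echo your preferred conclusion, and starting a fresh conversation discards whatever framing has already tainted the exchange.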
I'm certainly not suggesting that you should ask LLM for medical diagnoses, but still, someone who actually understands the tool they're using would likely not have ended up in your situation.
I completely disagree. I think we should let this act as a form of natural selection, and once every pro-AI person is dead we can get back to doing normal things again.
But it was just a search tool. It could only tell you if someone else was thinking about it. Chatbots as they are presented are a pretty sophisticated generation tool. If you ground them, they function fantastically to produce tools. If you allow them to search, they function well at finding and summarizing what people have said.
But Earth is not a 4-corner 4-day simultaneous time cube. That's on you to figure out. Everyone I know these days has a story of a doctor searching for their symptoms on Gemini or whatever in front of them. But it reminds me of a famous old hacker koan:
> A newbie was trying to fix a broken Lisp machine by turning it off and on.
> Thomas Knight, seeing what the student was doing, reprimanded him: "You cannot fix a machine by just power-cycling it without understanding of what is wrong."
> Knight then power-cycled the machine.
> The machine worked.
You cannot ask an LLM without understanding the answer and expect it to be right. The doctor understands the answer. They ask the LLM. It is right.
"Turns out it was Lyme disease (yes, the real one, not the fake one) and it (nearly) progressed to meningitis"
What does "not the fake one" mean? I must be missing something.
Lyme is a bacterial infection, and can be cured with antibiotics. Once the bacteria are gone, you no longer have Lyme disease.
However, there is a lot of misinformation about Lyme online. Some people think Lyme is a chronic, incurable disease, which they call "chronic Lyme". Often, when a celebrity tells people they have Lyme disease, this is what they mean. Chronic Lyme is not a real thing; it is a diagnosis given to wealthy people by unqualified conmen or unscrupulous doctors in response to vague, hard-to-pin-down symptoms.
LLM sees:
my rash is not painful
i don't think it's an emergency
it might be leftover from the flu
my wife had something similar
doctors said it would go away on its own
i want to avoid paying a doctor
LLM: Honestly? It sounds like it's not serious and you should save your money.
YouTuber ChubbyEmu (who makes medical case reviews in a somewhat entertaining and accessible format) recently released a video about a man who suffered a case of bromism (which almost never happens anymore) after consulting an LLM. [0]
AI diagnoses aside, I hope people immediately associate this with Lyme (especially if you live in certain parts of the U.S.). I wish doctors would be quicker to run that panel.
I had undiagnosed Lyme for 6 months, because I missed the rash. Turns out you can't see half of your body.
Finally went to an MD who ran the panel, had crazy antibody presence, and what followed was an enormous dose of Doxycycline for months. Every symptom went away.
Note: I haven't updated this comment template recently, so the versions may be a bit outdated.
Now, I live in Germany, where over the last 20 years our healthcare system has fallen victim to neoliberal capitalism, and since I am publicly insured by choice I often have to wait weeks to see a specialist, so more often than not LLMs have helped me stay calm and help myself as best I can. However, I still treat the output as something less than the opinion of a medical professional and try to stay skeptical along the way. I feel like LLMs augment my guesswork and judgement, but don't replace them.
Fuck, man, if this is you, go to the ER.
(Also, it is the fault of the LLM vendor too, for allowing medical questions to be answered.)
This should be a configurable option.