
Show HN: Gemini Pro 3 hallucinates the HN front page 10 years from now

https://dosaygo-studio.github.io/hn-front-page-2035/news
1346•keepamovin•7h ago•519 comments

PeerTube is recognized as a digital public good by Digital Public Goods Alliance

https://www.digitalpublicgoods.net/r/peertube
292•fsflover•5h ago•43 comments

Django: what’s new in 6.0

https://adamj.eu/tech/2025/12/03/django-whats-new-6.0/
54•rbanffy•1h ago•13 comments

10 Years of Let's Encrypt

https://letsencrypt.org/2025/12/09/10-years
327•SGran•3h ago•119 comments

Mistral Releases Devstral 2 (72.2% SWE-Bench Verified) and Vibe CLI

https://mistral.ai/news/devstral-2-vibe-cli
407•pember•7h ago•194 comments

If you're going to vibe code, why not do it in C?

https://stephenramsay.net/posts/vibe-coding.html
214•sramsay•4h ago•261 comments

Qt, Linux and everything: Debugging Qt WebAssembly

http://qtandeverything.blogspot.com/2025/12/debugging-qt-webassembly-dwarf.html
12•speckx•49m ago•0 comments

Hands down one of the coolest 3D websites

https://bruno-simon.com/
300•razzmataks•6h ago•79 comments

Pebble Index 01 – External memory for your brain

https://repebble.com/blog/meet-pebble-index-01-external-memory-for-your-brain
309•freshrap6•7h ago•309 comments

Donating the Model Context Protocol and Establishing the Agentic AI Foundation

https://www.anthropic.com/news/donating-the-model-context-protocol-and-establishing-of-the-agenti...
100•meetpateltech•5h ago•46 comments

So you want to speak at software conferences?

https://dylanbeattie.net/2025/12/08/so-you-want-to-speak-at-software-conferences.html
73•speckx•3h ago•21 comments

Agentic AI Foundation

https://block.xyz/inside/block-anthropic-and-openai-launch-the-agentic-ai-foundation
28•thinkingkong•2h ago•3 comments

ULID: Universally Unique Lexicographically Sortable Identifier

https://packagemain.tech/p/ulid-identifier-golang-postgres
20•der_gopher•1w ago•5 comments

LLM from scratch, part 28 – training a base model from scratch on an RTX 3090

https://www.gilesthomas.com/2025/12/llm-from-scratch-28-training-a-base-model-from-scratch
428•gpjt•1w ago•97 comments

The stack circuitry of the Intel 8087 floating point chip, reverse-engineered

https://www.righto.com/2025/12/8087-stack-circuitry.html
48•elpocko•3h ago•21 comments

Clearspace (YC W23) Is Hiring a Founding Designer

https://www.ycombinator.com/companies/clearspace/jobs/yamWTLr-founding-designer-at-clearspace
1•roycebranning•5h ago

Kaiju – General purpose 3D/2D game engine in Go and Vulkan with built in editor

https://github.com/KaijuEngine/kaiju
130•discomrobertul8•7h ago•53 comments

"The Matilda Effect": Pioneering Women Scientists Written Out of Science History

https://www.openculture.com/2025/12/matilda-effect.html
66•binning•4h ago•14 comments

My favourite small hash table

https://www.corsix.org/content/my-favourite-small-hash-table
102•speckx•7h ago•21 comments

Joyboard is a balance board peripheral for the Atari 2600

https://en.wikipedia.org/wiki/Joyboard
4•doener•6d ago•0 comments

Agentic QA – Open-source middleware to fuzz-test agents for loops

24•Saurabh_Kumar_•6d ago•5 comments

Launch HN: Mentat (YC F24) – Controlling LLMs with Runtime Intervention

29•cgorlla•5h ago•22 comments

30 Year Anniversary of WarCraft II: Tides of Darkness

https://www.jorsys.org/archive/december_2025.html#newsitem_2025-12-09T07:42:19Z
153•sjoblomj•12h ago•102 comments

Apple's slow AI pace becomes a strength as market grows weary of spending

https://finance.yahoo.com/news/apple-slow-ai-pace-becomes-104658095.html
147•bgwalter•7h ago•169 comments

The Joy of Playing Grandia, on Sega Saturn

https://www.segasaturnshiro.com/2025/11/27/the-joy-of-playing-grandia-on-sega-saturn/
164•tosh•12h ago•107 comments

Constructing the World's First JPEG XL MD5 Hash Quine

https://stackchk.fail/blog/jxl_hashquine_writeup
99•luispa•1w ago•18 comments

OpenAI economist quits, alleging that they are verging into AI Advocacy

https://www.wired.com/story/openai-economic-research-team-ai-jobs/
22•gsf_emergency_6•1h ago•4 comments

AWS Trainium3 Deep Dive – A Potential Challenger Approaching

https://newsletter.semianalysis.com/p/aws-trainium3-deep-dive-a-potential
57•Symmetry•5d ago•19 comments

Show HN: AlgoDrill – Interactive drills to stop forgetting LeetCode patterns

https://algodrill.io
144•henwfan•11h ago•88 comments

Transformers know more than they can tell: Learning the Collatz sequence

https://www.arxiv.org/pdf/2511.10811
97•Xcelerate•6d ago•39 comments

I misused LLMs to diagnose myself and ended up bedridden for a week

https://blog.shortround.space/blog/how-i-misused-llms-to-diagnose-myself-and-ended-up-bedridden-for-a-week/
25•shortrounddev2•1h ago

Comments

buellerbueller•49m ago
Play stupid games; win stupid prizes.

(Also, it is the fault of the LLM vendor too, for allowing medical questions to be answered.)

morshu9001•39m ago
If the LLM starts telling me it can't answer my question, I switch LLMs
robrain•37m ago
And when you’ve finally switched to the last LLM and it tells you it can’t answer you, and you decide not to use LLMs for important life-changing questions anymore, your world will be a better place.
monerozcash•11m ago
Not really, no. I know I can't trust what the LLM tells me, but I still want the answer.

This should be a configurable option.

morshu9001•8m ago
Exactly. Fine if they have to put a warning label on it.
measurablefunc•38m ago
The answers will soon have ads for vitamins & minerals.
buellerbueller•24m ago
Dr. Nigel West's Medical Elixir
only-one1701•48m ago
Oh my God dude it’s ok to say “don’t use LLMs to diagnose yourself, that’s not what they’re for.” You don’t need to apologize for using LLMs wrong.
arjie•38m ago
The first sentence of the article says:

> If you read nothing else, read this: do not ever use an AI or the internet for medical advice.

Your comment seems out of place unless the article was edited in the 10 minutes since the comment was written.

only-one1701•31m ago
I was referring to the word “misused” in the title.
SoftTalker•48m ago
More generally don't try to be your own doctor. Whether you're using LLMs or just searching the web for symptoms, it's way too easy for an untrained person to get way off track.

If you want to be a doctor, go to medical school. Otherwise talk to someone who did.

burningChrome•41m ago
When the internet was completely booming and Google was taking over the search engine world, I asked my Doctor if he was afraid that people were going to start getting their medical advice from Google.

He basically said, "I'm not worried yet. But I would never recommend someone do that. If you have health insurance, that's what you pay for, not for Google to tell you you're just fine, you really don't have cancer."

Thinking about a search engine telling me I don't have cancer scared the bejesus out of me so badly that I swung in the completely opposite direction and for several years became a hypochondriac.

This was also fodder for a lot of stand up comedians. "Google told me I either have the flu, or Ebola, it could go either way, I don't know."

herpdyderp•34m ago
I have health insurance. I don't feel like I get anything out of paying for it, I still get massive bills whenever I use it.
morshu9001•26m ago
That whole thing is so scammy that I totally see why people would rather self-diagnose.

Except the author did it wrong. You don't just ignore a huge rash that every online resource will say is Lyme disease. If you really want to trust an LLM, at least prompt it a few different ways.

FloorEgg•35m ago
I agree generally with what you're saying as a good rule, I would just add one exception.

If you've seen multiple doctors, specialists, etc over the span of years and they're all stumped or being dismissive of your symptoms, then the only way to get to the bottom of it may be to take matters into your own hands. Specifically this would look like:

- carefully experimenting with your living environment, lifestyle, habits, etc., ideally with at least occasional check-ins with a professional. This requires discipline and can be hard to do well, but it also sometimes discovers the best solutions (a lifestyle change solves the problem instead of a lifetime of suffering or dependency on speculative pharmaceuticals).

- doing thoughtful, emotionally detached research (reading published papers slowly over a long time, e.g. weeks or months). Also very hard, but sometimes you can discover things doctors didn't consider. The key is to be patient and stay curious, to avoid an emotional rollercoaster and wasting doctors' time. Not everyone is capable of this.

- going out of your way to gather data about your health (logging what you eat, what you do, stress levels, etc. test home for mold, check vitals, heart rate variability, etc.)

- presenting any data you gathered and research you discovered that you think may be relevant to a doctor for interpretation

Again, I want to emphasize that taking your health matters into your own hands like this only makes sense to do after multiple professionals were unhelpful AND if you're capable of doing so responsibly.

cogman10•30m ago
Eh, depends a little. I think most of the population has experienced and can diagnose/treat the flu or a minor cut.

It's anything beyond that which I think needs medical attention.

cowlby•40m ago
This is also about “don’t avoid going to the doctor”. Whether it was an LLM or a friend that “had that and it was nothing”, confirming that with a doctor is the sane approach, no?
daveguy•1m ago
Which essentially means ignore both the LLM and your rando friend saying "don't worry about it". You shouldn't try to substitute licensed medical evaluation with either.
jeffbee•39m ago
Imagine reaching this conclusion but going on to suggest that one should read pop psychology books by Ezra Klein and Jonathan Haidt to understand human cognition.
cogman10•39m ago
Yikes.

But I have to say that prompt is crazy bad. AI is VERY good at using your prompt as the basis for the response: if you say "I don't think it's an emergency", the AI will write a response that says "it's not an emergency".

I did a test with the first prompt and the immediate answer I got was "this looks like Lyme disease".

morshu9001•34m ago
I figured this out diagnosing car trouble. Tried a few separate chats, and my natural response patterns were always leading it down the path to "your car is totaled and will also explode at any moment." Going about it a different way, I got it to suggest a simple culprit that I was able to confirm pretty thoroughly (fuel pressure sensor), and fixed it.
unyttigfjelltol•23m ago
The problem is, once you start down that sequence of the AI telling you what you want to hear, it disables normal critical reasoning. It's the "yes man" problem: you're even less able to solve the problem effectively than with no information. I really enjoy LLMs, but it is a bit of a trap.
morshu9001•19m ago
I hit that too. If I asked it about the O2 sensor, it was the O2 sensor. IIRC I had to ask it what PIDs to monitor, give all of that to it at once, then try a few experiments it suggested. It also helped that it told me how to self-confirm by watching that the fuel trim doesn't go too high, which was also my cue to shut off the engine if it did.

At no point was I just going to commit to some irreversible decision it suggested, like blindly replacing a part, without confirming it myself or elsewhere. At the same time, it really helped me because I'm too noob to even know what to Google.

observationist•19m ago
The solution is really easy. Make sure you have web search enabled and that you're not using the free version of some AI, then just ask it to research the best way to prompt and write a tutorial for you to use in the future. Or have it write some exercises and do a practice chat.
andrepd•13m ago
Well I tested the first prompt on ChatGPT and Llama and Claude and not one of them suggested Lyme disease. Goes to show how much these piece of shit clankers are good for.

Llama said "syphilis" with 100% confidence, ChatGPT suggested several different random diseases, and Claude at least had the decency to respond "go to a fucking doctor, what are you stupid?", thereby proving to have more sense than many humans in this thread.

It's not a matter of bad prompting, it's a matter of this being an autocomplete with no notion of ground truth and RLHF'd to be a sycophant!

Just 100B more parameters bro, I swear, and we will replace doctors.

cogman10•6m ago
FTR I used qwen3-coder:30b to get Lyme. Mostly because I already have it set up for local LLM fun. I didn't test this on other models.
xiphias2•38m ago
Another poorly written article that doesn't even specify the LLM being used.

Both the ChatGPT o3 and 5.1 Pro models helped me a lot in diagnosing illnesses with the right queries. I use lots of queries with different contexts / context lengths for medical questions, as they are very serious.

Also, they give better answers if I use medical language, as they retrieve answers from higher-quality articles.

I still went to doctors and got more information from them.

Also, I do blood tests and an MRI before going to doctors, and the great doctors actually like that I arrive prepared but still open to their diagnosis.

shortrounddev2•36m ago
Read to the bottom. I didn't specify the LLM because it doesn't matter. It's not the fault of the LLM; it's the fault of the user.
only-one1701•32m ago
It’s the fault of the tool!
jfindper•33m ago
The lack of awareness it takes to advocate using LLMs for medical advice on an article about how bad an idea that is... It's astounding.
reenorap•36m ago
>If you read nothing else, read this: do not ever use an AI or the internet for medical advice.

No, the author is wrong. The author used the LLM improperly, which is why he got himself in trouble.

The number 1 rule is don't trust ANYONE 100%, be it doctors, LLMs, etc. Always verify things yourself no matter who the source is, because doctors can be just as wrong as ChatGPT. But at least ChatGPT doesn't try to rush you through and ignore your worries just because they want to get to their next appointment.

I recently used ChatGPT to diagnose quite a few things effectively, including a child's fractured arm, MRI scan results, blood pressure changes due to circumstances, etc. All without a co-pay and without any fear of being ignored by a doctor.

I was taking some blood pressure medication and I noticed that my blood sugar had risen after I started it. I googled it (pre-LLM days) and found a study linking that particular medication to higher blood sugar. I talked to my doctor and she pooh-poohed it. I insisted on trying a different type of medication and, lo and behold, my blood sugar dropped.

Not using ChatGPT in 2026 for medical issues and arming yourself with information, either with or without a doctor's help, would be foolish in my opinion.

shortrounddev2•34m ago
> Not using ChatGPT in 2026 for medical issues and arming yourself with information [...] would be foolish in my opinion.

Using ChatGPT for medical issues is the single dumbest thing you can do with ChatGPT

sofixa•34m ago
> But at least ChatGPT doesn't try to rush you through and ignore your worries just because they want to make their next appoint.

But it is a sycophant and will confirm your suspicions, whatever they are and regardless of whether they're true.

marcellus23•30m ago
Hence his emphasis on not trusting it, which was right before the sentence you're quoting here.
avra•24m ago
> All without a co-pay

The author of the blog post also mentioned they tried to avoid paying for an unnecessary visit to the doctor. I think the issue is somewhere else. As a European, personally I would go to the doctor and while sitting in the waiting room I would ask an LLM out of curiosity.

WhyOhWhyQ•23m ago
I think it's morally wrong to trust a child's well-being to an LLM over a trained medical professional, and I feel strongly enough about this to express that here.
jfindper•20m ago
I agree with you, and reading some of the takes in this thread is actually blowing my mind.

I cannot believe that the top-voted comment right now is saying not to trust doctors and to use an LLM to diagnose yourself and others.

reenorap•17m ago
The radiologist missed a fracture. The child kept complaining his arm hurt so we put the x-rays through ChatGPT and it found the fracture, so we returned and they "found" it this time.

How does this line up with your religious belief that doctors are infallible and should be 100% trusted?

jfindper•16m ago
>How does this line up with your religious belief that doctors are infallible and should be 100% trusted?

They did not say that in their comment.

Replying in obvious bad faith makes your original comment even less credible than it already is.

andrepd•22m ago
I swear on all that is holy, I cannot tell if this is sophisticated satire or you are actually for real. 100% not being facetious.
cogman10•20m ago
> a child's fractured arm

... Um what?

The only way to diagnose a fractured arm is an X-ray. You can suspect the arm is fractured (rotating it in a few directions), but ultimately a muscle injury will feel identical to a fracture, especially for a kid.

Please, if you suspect a fracture just take your kid to the doctor. Don't waste your time asking ChatGPT if this might be a fracture.

This just feels beyond silly to me imagining the scenario this would arise in. You have a kid crying because their arm hurts. They are probably protectively holding it and won't let you touch it. And your first instinct is "Hold on, let me ask ChatGPT what it thinks. 'Hey ChatGPT, my kid is here crying really loud and holding onto their arm. What could this mean?'"

What possessed you to waste time like that?

reenorap•16m ago
As I responded above:

The radiologist missed a fracture. The child kept complaining his arm hurt so we put the x-rays through ChatGPT and it found the fracture, so we returned and they "found" it this time.

How does this line up with your religious belief that doctors are infallible and should be 100% trusted?

You should just delete your uneducated comment filled with weird assumptions.

cogman10•14m ago
> How does this line up with your religious belief that doctors are infallible and should be 100% trusted?

Because the way you phrased it with the article in question made it sound like you hadn't first gone to the doctor. This isn't a question about doctors being fallible or not but rather what first instincts are when medical issues arise.

> uneducated comment filled with weird assumptions.

No, not uneducated, nor were these assumptions weird, as other commenters obviously made the same ones I did.

I'll not delete my comment, why should I? The advice is still completely valid. Go to the doctor first, not GPT.

jfindper•12m ago
>How does this line up with your religious belief that doctors are infallible and should be 100% trusted?

Hilariously, this is the second time you posted this exact line, to yet another person who _didn't say this_!

nataliste•6m ago
Just want to chime in amidst the ensuing dog-pile to say that my experiences match yours and you're not crazy, but I'm also an empirically-minded arch-skeptic. Curious: you're not left-handed by any chance, are you?
sofixa•35m ago
I'm guessing this is the USA with the absurd healthcare system, because otherwise this part is wild:

> You need to go to the emergency room right now".

> So, I drive myself to the emergency room

It is absolutely wild that a doctor can tell you "you need to go to the emergency room right now", and that getting there is an act left to someone who is obviously so unwell they need to go to the ER right now. With a neck so stiff, was OP even able to look around properly while driving?

morshu9001•23m ago
Uh, standard in America is to take an Uber Ambulance
hansmayer•33m ago
> "If you read nothing else, read this: do not ever use an AI or the internet for medical advice. Go to a doctor."

Yeah, no shit, Sherlock? I'd be absolutely embarrassed to even admit to something like this, let alone share "pearls of wisdom" like "don't use a machine which guesses its outputs based on whatever text it has been fed to freaking diagnose yourself". Who would have thought that an individual professional with decades of theoretical and practical training, AND actual human intelligence (or do we need to call it HGI now), plus tons of experience, is more trustworthy, reliable and qualified to deal with something as serious as the human body. Plus, there are hundreds of thousands of such individuals, and they don't need to boil an ocean every time they solve a problem in their domain of expertise. Compare that to a product of the enshittified tech industry, which in recent years has only ever given us irrelevant "apps" to live in, without addressing the really important issues of our time. Heck, even Peter Thiel agrees with this; at least in his "Zero to One" he did.

shortrounddev2•30m ago
To be honest, I am pretty embarrassed about the whole thing, but I figured I'd post my story because of that. There are lots of people who misdiagnose themselves doing something stupid on the internet (or teenagers who kill themselves because they fell in love with some Waifu LLM), but you never hear about it because they either died or were too embarrassed to talk about it. Better to be transparent that I did something stupid so that hopefully someone else reads about it and doesn't do the same thing I did
foobarbecue•30m ago
That's my feeling, but I have a friend who is an MD and an enthusiastic supporter of people getting medical info from LLMs...
cheald•32m ago
I think the author took the wrong lesson here. I've had doctors misdiagnose me just as readily as I've had LLMs misdiagnose me - but I can sit there and plug at an LLM in separate unrelated contexts for hours if I'd like, and follow up assertions with checks to primary sources. That's not to say that LLMs replace doctors, but that neither is perfect and that at the end of the day you have to have your brain turned on.

The real lesson here is "learn to use an LLM without asking leading questions". The author is correct, they're very good at picking up the subtext of what you are actually asking about and shaping their responses to match. That is, after all, the entire purpose of an LLM. If you can learn to query in such a way that you avoid introducing unintended bias, and you learn to recognize when you've "tainted" a conversation and start a new one, they're marvelous exploratory (and even diagnostic) tools. But you absolutely cannot stop with their outputs - primary sources and expert input remain supreme. This should be particularly obvious to any actual experts who do use these tools on a regular basis - such as developers.

shortrounddev2•29m ago
No, the lesson here is never use an LLM to diagnose you, full stop. See a real doctor. Do not make the same mistake as me
monerozcash•16m ago
"Don't ask LLMs leading questions" is a perfectly valid lesson here too. If you're going to ask an LLM for a medical diagnosis, you should at the very least know how to use LLMs properly.

I'm certainly not suggesting that you should ask LLM for medical diagnoses, but still, someone who actually understands the tool they're using would likely not have ended up in your situation.

shortrounddev2•12m ago
If you're going to ask an LLM for a medical diagnosis, stop what you're doing and ask a doctor instead. There is no good advice downstream of the decision to ask an LLM for a medical diagnosis
Starlevel004•31m ago
> If you read nothing else, read this: do not ever use an AI or the internet for medical advice.

I completely disagree. I think we should let this act as a form of natural selection, and once every pro-AI person is dead we can get back to doing normal things again.

arjie•28m ago
This is the Google Search problem all over again. When Google first came out, it was so much better than other search engines that people were finding websites (including obscure ones) that would answer the questions they had. Others at the time would get upset that these people were concluding things from the search. Imagine you asked if Earth was a 4-corner 4-day simultaneous time cube. You'd find a website where someone explained that it was. Many people would then conclude that Earth was indeed a 4-corner 4-day simultaneous time cube where Jesus, Socrates, the Clintons, and Einstein lived in different parts.

But it was just a search tool. It could only tell you if someone else was thinking about it. Chatbots as they are presented are a pretty sophisticated generation tool. If you ground them, they function fantastically to produce tools. If you allow them to search, they function well at finding and summarizing what people have said.

But Earth is not a 4-corner 4-day simultaneous time cube. That's on you to figure out. Everyone I know these days has a story of a doctor searching for their symptoms on Gemini or whatever in front of them. But it reminds me of a famous old hacker koan:

> A newbie was trying to fix a broken Lisp machine by turning it off and on.

> Thomas Knight, seeing what the student was doing, reprimanded him: "You cannot fix a machine by just power-cycling it without understanding of what is wrong."

> Knight then power-cycled the machine.

> The machine worked.

You cannot ask an LLM without understanding the answer and expect it to be right. The doctor understands the answer. They ask the LLM. It is right.

mttch•26m ago
In the UK we have the 111 NHS non-emergency telephone service - they don't give medical advice but triage you based on your symptoms. Either a doctor will call you back, or they will tell you to go to a non-urgent care centre or to A&E (the ER) immediately.
avra•14m ago
In the EU we have 116117, which is not (yet?) implemented in all countries. It's part of the "harmonised service of social value", which uses 116 as a prefix and also has other helpline numbers, like hotlines for missing children or emotional support.
blakesterz•23m ago
They wrote...

  "Turns out it was Lyme disease (yes, the real one, not the fake one) and it (nearly) progressed to meningitis"
What does "not the fake one" mean? I must be missing something.
shortrounddev2•9m ago
Disclaimer: not a doctor (obviously), ask someone who is qualified, but this is what the ID doctor told me:

Lyme is a bacterial infection, and can be cured with antibiotics. Once the bacteria is gone, you no longer have Lyme disease.

However, there is a lot of misinformation about Lyme online. Some people think Lyme is a chronic, incurable disease, which they call "chronic Lyme". Often, when a celebrity tells people they have Lyme disease, this is what they mean. Chronic Lyme is not a real thing - it is a diagnosis given to wealthy people by unqualified conmen or unscrupulous doctors in response to vague, hard-to-pin-down symptoms.

looknee•23m ago
@shortrounddev2 can you please post the response ChatGPT gave in response to your prompt? That seems pertinent.
monerozcash•12m ago
It'd be interesting to see the entire chat; lots of people seem to just keep using the same chat window and end up poisoning the LLM with massive amounts of unhelpful context.
labrador•23m ago
Prompt: "I have this rash on my body, but it's not itchy or painful, so I don't think it's an emergency? I just want to know what it might be. I think I had the flu last week so it might just be some kind of immune reaction to having been sick recently. My wife had pityriasis once, and the doctor told her they couldn't do anything about it, it would go away on its own eventually. I want to avoid paying a doctor to tell me it's nothing. Does this sound right?"

LLM sees:

    my rash is not painful
    i don't think it's an emergency
    it might be leftover from the flu
    my wife had something similar
    doctors said it would go away on its own
    i want to avoid paying a doctor
LLM: Honestly? It sounds like it's not serious and you should save your money
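
That framing effect can also be checked mechanically. Below is a minimal sketch (not from the article or this thread) that sends the leading prompt and a neutral, symptom-only rewrite to the same model and prints both answers for comparison; it assumes the OpenAI Python SDK, an OPENAI_API_KEY in the environment, and a placeholder model name rather than anything the commenters actually used.

    # Sketch: compare a leading prompt against a neutral, symptom-only prompt.
    # The model name is a placeholder, not a recommendation.
    from openai import OpenAI

    client = OpenAI()

    leading = (
        "I have this rash on my body, but it's not itchy or painful, so I "
        "don't think it's an emergency? I want to avoid paying a doctor to "
        "tell me it's nothing. Does this sound right?"
    )
    neutral = (
        "Flat, circular, non-itchy, non-painful red rash with a ring, diffuse "
        "throughout the trunk, following a week of chills, night sweats and "
        "fatigue. What conditions should be ruled out, and which would need "
        "urgent care?"
    )

    for label, prompt in [("leading", leading), ("neutral", neutral)]:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {label} ---\n{resp.choices[0].message.content}\n")
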
ikrenji•22m ago
bro described the perfect LLM prompt that would have got him the right diagnosis instantly. instead he put in some garbage and got garbage back
lenerdenator•21m ago
I think you'll see this happen a lot more. Not just in the US where docs cost money, but anywhere there's a shortage of docs and/or it's a pain in the butt to go to one.

YouTuber ChubbyEmu (who makes medical case reviews in a somewhat entertaining and accessible format) recently released a video about a man who suffered a case of bromism (which almost never happens anymore) after consulting an LLM. [0]

[0] https://www.youtube.com/watch?v=yftBiNu0ZNU

monerozcash•19m ago
Don't ask LLMs leading questions if you don't want terrible answers?
kylecazar•19m ago
"but then I developed a small, flat, circular rash"

AI diagnoses aside, I hope people immediately associate this with Lyme (especially if you live in certain parts of the U.S.). I wish doctors would be quicker to run that panel.

I had undiagnosed Lyme for 6 months, because I missed the rash. Turns out you can't see half of your body.

Finally went to an MD who ran the panel, had crazy antibody presence, and what followed was an enormous dose of Doxycycline for months. Every symptom went away.

jtsiskin•18m ago
I gave their example “correct” prompt (“Flat, circular, non-itchy, non-painful red rash with a ring, diffuse throughout trunk. Follows week of chills and intense night sweats, plus fatigue and general malaise”) to both ChatGPT and Gemini. And both said Lyme disease as their #1 diagnosis. So maybe it is okay to diagnose yourself with LLMs, just do it correctly!
maplethorpe•12m ago
They didn't say what model they used. The difference between GPT 3.5 and GPT 4 is night and day. This is exactly what I'd expect from 3.5, but 4 wouldn't make this mistake.

Note: I haven't updated this comment template recently, so the versions may be a bit outdated.

Escapado•11m ago
Interesting story. I want to agree with the general advice not to use it for that - especially if that is how you use it. And I want to preface this with: don't take this as advice, I just want to share my experience here. I tend to do it anyway and have had fairly large success so far, but I use the LLM differently if I have a health issue that bothers me. First I open Gemini, Claude and ChatGPT in their latest, highest-thinking-budget installments. Then I tell them about my symptoms and give a fairly detailed description of myself and my medical history. I prompt them specifically to ask detailed questions like a physician would, and ask them to have me perform tests to rule out or zoom in on different hypotheses about what I might have. After going back and forth, if they all agree on a similar thing or a set of similar things, I usually take this as a good sign that I might be on the right track and check whether I should talk to a professional or not (erring on the side of caution). If they can't agree, I would usually try to get an appointment to see a professional, sooner rather than later if anything potentially dangerous popped up during the back and forth or if I feel sufficiently bad.

Now, I live in Germany, where in the last 20 years our healthcare system has fallen victim to neoliberal capitalism, and since I am publicly insured by choice I often have to wait weeks to see a specialist, so more often than not LLMs have helped me stay calm and help myself as best I can. However, I still treat the output as less than the opinion of a medical professional and try to stay skeptical along the way. I feel like they augment my guesswork and judgement, but don't replace it.
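
The multi-model cross-check described above can be sketched as a tiny harness: fan the same neutral symptom description out to several independent models and only treat the result as a lead worth raising with a professional when they converge. This is a hypothetical illustration, not anything from the thread; the callables passed in are placeholders for whichever client libraries you actually use.

    # Sketch of the cross-check loop. The callables in `askers` are hypothetical
    # stand-ins for real clients (Gemini, Claude, ChatGPT, a local model, ...).
    from collections import Counter
    from typing import Callable, Dict

    def cross_check(question: str, askers: Dict[str, Callable[[str], str]]) -> str:
        answers = {name: ask(question).strip().lower() for name, ask in askers.items()}
        for name, answer in answers.items():
            print(f"{name}: {answer}")
        top_answer, count = Counter(answers.values()).most_common(1)[0]
        if count == len(answers):
            return f"All models agree on '{top_answer}' -- still confirm it with a doctor."
        return "Models disagree -- treat the output as noise and see a professional."

    # Stubbed usage; replace the lambdas with real API calls.
    if __name__ == "__main__":
        stubs = {
            "model_a": lambda q: "Lyme disease",
            "model_b": lambda q: "Lyme disease",
            "model_c": lambda q: "pityriasis rosea",
        }
        print(cross_check("Flat, circular, non-itchy red rash with a ring ...", stubs))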