
My Mom and Dr. DeepSeek (2025)

https://restofworld.org/2025/ai-chatbot-china-sick/
70•kieto•1h ago

Comments

wnissen•1h ago
This was not what I was expecting. The doctors I know are mostly miserable; stuck between the independence but also the burden of running their own practice, or else working for a giant health system and having no control over their own days. You can see how an LLM might be preferable, especially when managing a chronic, degenerative condition. I have a family member with stage 3 kidney disease who sees a nephrologist, and there's nothing you can actually do. No one in their right mind would recommend a kidney transplant, let alone dialysis, for someone with moderately impaired kidneys. All you can do is treat the symptoms as they come up and monitor for significant drops in function.
mhl47•1h ago
Worrisome for sure.

However, I would say that the cited studies are already somewhat outdated compared with, e.g., GPT-5-Thinking doing two minutes of reasoning/search about a medical question. As far as I know, DeepSeek's search capabilities are not comparable, and none of the models in the study spent a comparable amount of compute answering your specific question.

nsjdkdkdk•57m ago
How much nvidia u holding bro
candiddevmike•1h ago
I think the article can basically be summed up as "GenAI sycophancy should have a health warning similar to social media". It's a helluva drug to be constantly rewarded and flattered by an algorithm.
snitzr•1h ago
A sick family member told me something along the lines of, "I know how to work with AI to get the answer." I interpret that to mean he asks it questions until it tells him what he wants to hear.
Joel_Mckay•49m ago
Indeed, real doctors have the advantage of understanding how to treat humans that are incapacitated. =3

https://www.youtube.com/watch?v=yftBiNu0ZNU

wvbdmp•10m ago
Just seeing that guy’s face and hearing his voice makes me uneasy. That channel is total body-horror. Glad that guy ended up okay, unlike this poor soul:

https://youtube.com/watch?v=NJ7M01jV058

blacksmith_tb•1h ago
The dangers are obvious (and also there are some fascinating insights into how healthcare works practically in China). I wonder if some kind of "second opinion" antagonistic approach might reduce the risks.
torstenvl•24m ago
Medical Advice Generative Adversarial Networks would be a good idea.

I see some of this adversarial second-guessing introspection from Claude sometimes. ("But wait. I just said x y and z, but that's inconsistent with this other thing. Let me rethink that.")

Sometimes when I get the sense that an LLM is too sycophantic, I'll instruct it to steelman the counter-argument, then assess the persuasiveness of that counter-argument. It helps.
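
For illustration, a minimal sketch of that steelman pass as a two-step prompt chain; `ask()` here is a hypothetical stand-in for whatever chat API is in use, not any particular vendor's client:

    # Hypothetical helper: send a prompt to whatever chat model you use and
    # return its text reply. Stands in for any vendor's SDK.
    def ask(prompt: str) -> str:
        raise NotImplementedError("wire this up to your chat API of choice")

    def steelman_check(question: str) -> dict:
        """Two-pass pattern: get an answer, force a steelmanned rebuttal,
        then have the model weigh the rebuttal against its own answer."""
        answer = ask(question)
        rebuttal = ask(
            "Steelman the strongest counter-argument to this answer:\n" + answer
        )
        verdict = ask(
            "How persuasive is this counter-argument compared to the original "
            "answer? Be blunt.\n\nAnswer:\n" + answer
            + "\n\nCounter-argument:\n" + rebuttal
        )
        return {"answer": answer, "rebuttal": rebuttal, "verdict": verdict}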

philipwhiuk•49m ago
This almost certainly isn't only a China problem. I've observed UK users asking questions about diabetes and seeking other health advice. We also have an inexpensive (free at the point of use for most things) but stretched healthcare system. Doubtless there are US users looking at the cost of their healthcare and resorting to ChatGPT instead, too.

In companies, people talk about Shadow-IT happening when IT doesn't cover users' needs. We should probably label this stuff Shadow-Health.

To some extent, deploying a publicly funded AI health chatbot, where the responses can be analysed by healthcare professionals to at least prevent future harm, is probably significantly less bad than telling people not to ask AI questions and to consult the existing stretched infrastructure instead. Because people will ask the questions regardless.

threetonesun•23m ago
The joke of looking symptoms up on WebMD and determining you have cancer has been around for... geez over 20 years now. Anti-vaccine sentiment mostly derived from Facebook. Google any symptom today and there are about 10 million Quora-esque websites of "doctors" answering questions. I'm not sure that funneling all of this into the singular UI of an AI interface is really better or worse or even all that different.

But I do agree that some focused and well-funded public health bot would be ideal, although we'll need the WHO to do it; it's certainly not coming from the US any time soon.

margorczynski•44m ago
The problem is not reliance on AI, but that the AI is not ready yet and that people are using general-purpose models.

There simply aren't enough doctors to go around, and the average one isn't as knowledgeable as you would want. Everything suggests that, when it comes to diagnosis, ML systems should be better on average in the long run.

Especially with a quickly aging population, there is no alternative if we want people to have healthcare at a sensible level.

guywithahat•42m ago
> At the bot’s suggestion, she reduced the daily intake of immunosuppressant medication her doctor prescribed her and started drinking green tea extract. She was enthusiastic about the chatbot

I don't know enough about medicine to say whether or not this is correct, but it sounds suspect. I wouldn't be surprised if chatbots, in an effort to make people happy, start recommending more and more nonsense natural remedies as time goes on. AI is great for injuries and illnesses, but I wonder if this is just the answer she wants, and not the best answer.

cube00•6m ago
As soon as the model detects user pleasure at not needing a scary surgery (especially if you've already confided you're scared), it'll double down on that line of thinking to please the user.
kingstnap•40m ago
> she said she was aware that DeepSeek had given her contradictory advice. She understood that chatbots were trained on data from across the internet, she told me, and did not represent an absolute truth or superhuman authority

With highly lucid people like the author's mom, I'm not too worried about Dr. DeepSeek. I'm actually incredibly bullish on the fact that AI models are, as the article describes, superhumanly empathetic. They are infinitely patient, infinitely available, and unbelievably knowledgeable; it really is miraculous.

We don't want to throw the baby out with the bathwater, but there are obviously a lot of people who really cannot handle the seductiveness of things that agree with them like this.

I do think there is pretty good potential for making progress on this front, though. Especially given the level of care and effort being put into making chatbots better for medical uses and the sheer number of smart people working on the problem.

atoav•23m ago
Well yes, but as an extremely patient person I can tell you that infinite patience doesn't come without its own problems. In certain social situations the ethically better thing to do is actually to lose your patience, whether to shake up the person talking to you, or to signal that they are going down the wrong path, or whatnot.

I have experience with building systems to remove that infinite patience from chatbots and it does make interactions much more realistic.
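
A minimal sketch of the general shape (not the exact system, and `same_ground()` is a hypothetical placeholder): keep a small patience budget and switch to a firmer system prompt once the user has retrod the same ground too many times.

    PATIENT = "Answer helpfully and gently."
    IMPATIENT = ("The user has gone over the same ground several times. "
                 "Be direct, name the repetition, and push back if they "
                 "are heading down a wrong path.")

    def same_ground(new_msg: str, old_msg: str) -> bool:
        # Hypothetical placeholder: in practice an embedding-similarity or
        # classifier check for "is the user re-asking the same thing?".
        return new_msg.strip().lower() == old_msg.strip().lower()

    def pick_system_prompt(new_msg: str, history: list[str], budget: int = 3) -> str:
        """Stay with the patient persona until the same ground has been
        retrod `budget` times, then switch to the impatient one."""
        repeats = sum(same_ground(new_msg, old) for old in history)
        return IMPATIENT if repeats >= budget else PATIENT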

renewiltord•33m ago
Access trumps everything else. A doctor is fine with you dying while you wait on his backlog. The machine will give you some wrong answers. The mother in the story seems to be balancing the concerns. She has become the agent of her own life empowered by a supernatural machine.

> She understood that chatbots were trained on data from across the internet, she told me, and did not represent an absolute truth or superhuman authority. She had stopped eating the lotus seed starch it had recommended.

The “there’s wrong stuff there” fear has existed for the Internet, Google, StackOverflow. Each time people adapted. They will adapt again. Human beings have remarkable ability to use tools.

reenorap•22m ago
I and many of my friends have used ChatGPT extremely effectively to diagnose medical issues. In fact, I would say that ChatGPT is better than most doctors because most doctors don't actually listen to you. ChatGPT took the time to ask me questions and based on my answers, narrowed down a particularly scary diagnosis and gave excellent instructions on how to get to a local hospital in a foreign country, what to ask for, and that I didn't have to worry very much because it sounded very typical for what I had. The level of reassurance that I was doing everything right actually made me feel less scared, because it was a pretty serious problem. Everything it told me was 100% correct and it guided me perfectly.

I was taking one high blood pressure medication but then noticed my blood sugar jumped. I did some research with ChatGPT, and it found a paper indicating that the medication could raise blood sugar levels and gave me a recommendation for an alternative. I asked my doctor about it and she said I was wrong, but I gently pushed her to switch to the recommended medication. She obliged, which is why I have kept her for almost 30 years now, and lo and behold, my blood sugar did drop.

Most people have a hard time pushing back against doctors and doctors mostly work with blinders on and don't listen. ChatGPT gives you the ability to keep asking questions without thinking you are bothering them.

I think ChatGPT is a great advance in terms of medical help, and I recommend it to everyone. Yes, it might make mistakes, and I caution everyone to be careful and not to trust it 100%, but I say that about human doctors as well.

ZhadruOmjar•12m ago
ChatGPT for health questions is the best use case I have found (Claude wins for code). Having a scratch pad where I can ask about any symptom I might feel, using project memory to cross-reference things, and having someone actually listen is very helpful. I asked about Crohn's disease since my grandfather suffered from it, and I got a few tests I could do, stats on likelihood based on genetics, diet ideas to try, and questions to ask the doctor. Much better than the current doc experience, which is: get the quickest review of my bloods, get told to exercise and eat healthy, and "see you in six months."
jamespo•7m ago
And why should anyone trust you?
cal_dent•5m ago
I've heard many people say the same (specifically that chatbots are better than doctors because they listen), and I find it odd and wonder if this is a country-specific thing?

I've been lucky enough to not need much beyond relatively minor medical help, but in the places I've lived I've always found that when I do see a GP they're generally helpful.

There's also something here about medical issues making people feel vulnerable by default, so feeling heard can overcompensate in the relationship? Not sure I'm articulating this last point well, but it comes up so frequently (it listened, it guided me through it step by step, etc.) that I wonder if that has an effect. You feel more in control than with a doctor who has other patients and time constraints and just says "it's x" or "do this."

Thoreandan•21m ago
Reminds me of an excellent paper I just read by a former Google DeepMind Ethics Research Team member

https://www.mdpi.com/2504-3900/114/1/4 - Reinecke, Madeline G., et al. "The Double-Edged Sword of Anthropomorphism in LLMs." Proceedings, Vol. 114, No. 1, MDPI, 2025. Author: https://www.mgreinecke.com/

adzm•20m ago
Considering how difficult it is to get patients to talk to doctors, using AI can be a great way to get some suggestions and insight _and then present that to your actual doctor_
delichon•11m ago
I prepared for a new patient appointment with a medical specialist last week by role-playing the conversation with a chatbot. The bot turned out to be much more responsive, inquisitive, and helpful. The doctor was passive, making no suggestions, just answering questions. I had to prompt him explicitly to get to therapy recommendations, unlike with the AI. I was glad that I had learned enough from the bot to ask useful questions. It would have been redundant at best had the doctor been active and interested, but that can't be depended on. This is standard procedure for me now.
heisenbit•2m ago
For major medical issues it may well be best practice to use the four-eyes principle, as we do for all safety-related systems. Access is key, and at this time getting a second pair of eyes in close, timely proximity is a luxury few have, and even fewer will have, looking at the demographics in the developed world. Human doctors are fallible, as is AI. For the time being, having a multitude of perspectives may well be the best option in most cases.
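
As a rough software analogue of that four-eyes idea (a sketch only; `ask_model()` is a hypothetical stand-in for two independent providers' APIs): ask two unrelated models the same question and have a third pass list any contradictions before anything is acted on.

    def ask_model(model: str, prompt: str) -> str:
        # Hypothetical stand-in for calling two unrelated providers' chat APIs.
        raise NotImplementedError

    def four_eyes(question: str) -> dict:
        """Ask two independent models, then audit the pair of answers for
        contradictions before anything is shown to the user."""
        first = ask_model("model-a", question)
        second = ask_model("model-b", question)
        audit = ask_model(
            "model-a",
            "Do these two answers to the same medical question contradict "
            "each other on anything important? List the conflicts.\n\n"
            "1) " + first + "\n\n2) " + second,
        )
        return {"first": first, "second": second, "conflicts": audit}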
