frontpage.

Emails sent to spam folder lead to licence suspension for 20 teachers says union

https://www.cbc.ca/news/canada/british-columbia/surrey-teachers-suspension-emails-spam-9.7107633
1•barbazoo•2m ago•0 comments

The future is unimaginably long

https://thinkingpensando.substack.com/p/the-future-is-unimaginably-long
1•badphilosopher•2m ago•0 comments

Show HN: Find like-minded developers from your GitHub activity

https://mates.symploke.dev/
1•thomasfromcdnjs•2m ago•0 comments

Google Antigravity Unbans Accounts and Clarifies Third-Party Tool Rules

https://twitter.com/antigravity/status/2027435365275967591
1•meetpateltech•2m ago•0 comments

(mostly) successful cases of software rewrites with agents in 2026

https://suriya.cc/essays/agents/
1•suriya-ganesh•3m ago•0 comments

Tax Planning for Irregular Freelance Income

https://longviewy.com/quarterly-tax-planner-worksheet-for-irregular-freelance-income/
1•josephcs•4m ago•0 comments

Ask HN: How to best use time before starting a SWE job?

1•taniquetil•4m ago•0 comments

Show HN: Brainrot messed up my kid's attention span, so I built a tool to fix it

https://www.agentkite.com/
1•aadivar•4m ago•0 comments

DualPath: Breaking the Storage Bandwidth Bottleneck in Agentic LLM Inference

https://arxiv.org/abs/2602.21548
1•gmays•4m ago•0 comments

Perplexity's new tool deploys teams of AI agents

https://www.pcworld.com/article/3071595/perplexitys-new-tool-deploys-teams-of-ai-agents.html
2•gnabgib•6m ago•0 comments

Memory Without Attention: Short‑Context Language from an Algebraic Operator

https://www.blogosvet.cz/@/quantumangle/clanek/memory-without-attention.RFvlu
2•amthorn•6m ago•0 comments

Sudo-rs echoes * for every character typed, breaking security measures

https://bugs.launchpad.net/ubuntu/+source/rust-sudo-rs/+bug/2142721
2•josephcsible•7m ago•0 comments

Academic journal AI policies aren't going to last

https://muddy.jprs.me/notes/2026-02-26-these-academic-journal-ai-policies-aren-t-going-to-last/
2•jprs•8m ago•0 comments

Show HN: A CLI tool for agentic code review and auto-fixing

https://github.com/kenryu42/ralph-review
2•kenryu•8m ago•0 comments

rm -rf Salesforce; I replaced our CRM with a Git repo and an AI agent

https://twitter.com/doppenhe/status/2027430646382317857
2•doppenhe•8m ago•0 comments

Open Source IRS: Tax Withholding Estimator

https://github.com/IRS-Public/tax-withholding-estimator
4•recursivedoubts•8m ago•1 comment

Meta tried to block lawyers from asking Zuckerberg about $231B fortune at trial

https://nypost.com/2026/02/26/business/meta-tried-to-block-lawyers-from-asking-mark-zuckerberg-ab...
4•1vuio0pswjnm7•9m ago•0 comments

The Unreasonable Effectiveness of External Feedback Loops

https://bernste.in/writings/the-unreasonable-effectiveness-of-external-feedback-loops/
2•mbernstein•10m ago•0 comments

Show HN: Vibe Code your 3D Models

https://github.com/ierror/synaps-cad
2•burrnii•11m ago•0 comments

Burger King testing AI headsets to track if employees say 'please', 'thank you'

https://thehill.com/policy/technology/5757413-burger-king-artificial-intelligence-headsets/
5•type0•11m ago•1 comment

Qwen3.5-35B-A3B

https://huggingface.co/Qwen/Qwen3.5-35B-A3B
3•throwaway2027•12m ago•0 comments

Uniquely Modern Weapons

https://greysands.org/weapons
3•globalstatic•12m ago•0 comments

Show HN: DAAF – Reproducible AI-assisted data analysis for researchers

https://github.com/DAAF-Contribution-Community/daaf
2•brhkim•14m ago•1 comment

Is the World Cup bump real? MLS is going to find out

https://www.theguardian.com/football/2026/feb/19/world-cup-bump-mls
2•PaulHoule•15m ago•0 comments

Tech people keep falling for the same scam

https://explodingcomma.com/posts/tech-people-keep-falling-for-the-same-scam
5•speckx•15m ago•0 comments

Saaspocalypse Survival Scanner

https://deathbyclawd.com/
3•iacguy•15m ago•1 comment

Java's Cover (2001)

https://www.paulgraham.com/javacover.html
2•Spide_r•16m ago•0 comments

Daniel Joseph Simmons Passed Away

https://www.dignitymemorial.com/obituaries/longmont-co/daniel-simmons-12758871
3•markus_zhang•16m ago•0 comments

747s and Coding Agents

https://carlkolon.com/2026/02/27/engineering-747-coding-agents/
2•cckolon•16m ago•0 comments

Show HN: Patterns for coordinating AI agents on real software projects

https://github.com/timothyjrainwater-lab/multi-agent-coordination-framework
2•Thunderstomp•17m ago•0 comments

Experts sound alarm after ChatGPT Health fails to recognise medical emergencies

https://www.theguardian.com/technology/2026/feb/26/chatgpt-health-fails-recognise-medical-emergencies
77•simonebrunozzi•1h ago

Comments

ml_giant•1h ago
I’m not surprised.
josefritzishere•1h ago
It continues to amaze me how recklessly some people cram AI into spaces where it performs poorly and the consequences include death.
rectang•1h ago
If the AI gets attached to a health insurer (not the case here as far as I know), I would expect it to make decisions that are aligned with the company’s incentive to weed out unprofitable patients. AI is not a human who takes a Hippocratic oath; it can be more easily manipulated to perform unethical acts.
stvltvs•1h ago
AI is an overloaded term, so I'm not sure whether insurers are using LLMs or more traditional ML, but they are already using "AI" to deny claims.

https://www.liveinsurancenews.com/health-insurance-claims-de...

PUSH_AX•1h ago
I don't think anyone would use an AI with such a severe conflict of interest, unless it was completely hidden from the user.
TZubiri•1h ago
But it doesn't perform poorly actually, it's just that the stakes are very high and it's a highly regulated environment.

Most physicians I know use ChatGPT. Although of course its usage is guided by an expert, not by the patient, nor fully autonomous.

SoftTalker•1h ago
I really only use ChatGPT as a better search engine. But it's often wrong, which has actually ended up costing me money. I don't put a lot of trust in it. Certainly would not try to use it as a doctor.
WarmWash•1h ago
I'd greatly prefer a blind study comparing doctors to AI, rather than a study of doctors feeding AI scenarios and seeing if it matches their predetermined outcome.

Edit: People seem confused here. The study fed the AI structured clinical scenarios and scored its results. It was not a live analysis of AI being used in the field to treat patients.

qsera•1h ago
Yea, that is exactly why I don't like this.

These "experts" have no problem touting anecdotes when it serves them.

riskassessment•1h ago
I don't understand this reasoning. Randomizing people to AI vs standard of care is expensive and risky. Checking whether the AI can pass hypothetical scenarios seems like a perfectly reasonable approach to researching the safety of these models before running a clinical trial.
nick49488171•1h ago
You can start by comparing "doctor" care vs "doctor who also uses AI" care
WarmWash•54m ago
You would pass those hypothetical scenarios to doctors too, and then the analysis of results would be done by doctors who don't know whether it's an AI or a doctor result.
riskassessment•49m ago
From the paper

> Three physicians independently assigned gold-standard triage levels based on cited clinical guidelines and clinical expertise, with high inter-rater agreement
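
For context, the "high inter-rater agreement" among the three physicians is usually quantified with a chance-corrected statistic; Fleiss' kappa is one common choice (an assumption here — the excerpt doesn't name the measure, and the triage labels below are hypothetical). A minimal sketch:

```python
from typing import Sequence

def fleiss_kappa(ratings: Sequence[Sequence[str]], categories: Sequence[str]) -> float:
    """Fleiss' kappa: chance-corrected agreement for N items,
    each labeled by the same fixed number of raters."""
    n_items = len(ratings)
    n_raters = len(ratings[0])

    # n_ij: how many raters assigned category j to item i
    counts = [[list(row).count(c) for c in categories] for row in ratings]

    # P_i: observed agreement on item i (fraction of rater pairs that agree)
    p_i = [(sum(n * n for n in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]
    p_bar = sum(p_i) / n_items

    # p_j: marginal proportion of all assignments falling in category j
    p_j = [sum(row[j] for row in counts) / (n_items * n_raters)
           for j in range(len(categories))]
    p_e = sum(p * p for p in p_j)  # expected agreement by chance

    return (p_bar - p_e) / (1 - p_e)

# Hypothetical example: three physicians triaging four vignettes
levels = ["self-care", "non-emergent", "emergent"]
ratings = [
    ["emergent", "emergent", "emergent"],
    ["non-emergent", "non-emergent", "emergent"],
    ["self-care", "self-care", "self-care"],
    ["non-emergent", "non-emergent", "non-emergent"],
]
print(round(fleiss_kappa(ratings, levels), 3))  # → 0.745
```

Values near 1 indicate agreement well above chance; a kappa around 0.745 would already count as substantial for a three-level triage task.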

nradov•1h ago
I don't understand what you're proposing. How would you design such a study in a way that would pass IRB?
SoftTalker•1h ago
Feed it randomly selected case histories? See if it came up with the same diagnosis as the doctors?
nradov•52m ago
I don't think that would tell us anything useful. The data quality in most patient charts is shockingly bad. I've seen a lot of them while working on clinical systems interoperability. Garbage in / garbage out. When human physicians make a diagnosis they typically rely on a lot of inputs that never appear in the patient chart.

And in most cases the diagnosis is the easy part. I mean we see occasional horror stories about misdiagnosis but those are rare. The harder and more important part is coming up with an effective treatment plan which the patient will actually follow, and then monitoring progress while making adjustments as needed. So a focus on the diagnosis portion of clinical decision support seems fundamentally misguided.

qsera•41m ago
> When human physicians make a diagnosis they typically rely on a lot of inputs that never appear in the patient chart.

Yea, like how rich the patient is, or whether they're on insurance, etc. I wish I was kidding.

dyauspitr•52m ago
It’s all case histories and text; no real person is affected by this.
hwillis•57m ago
We have standards of care for a reason. They are the most basic requirements of testing. Ignoring them is not just being a bad doctor, it's unethical treatment. It's the absolute bare minimum of a medical system.
lmkg•57m ago
That type of experimental set-up is forbidden due to ethical concerns. It goes against medical ethics to give patients treatment that you think might be worse.
spicyusername•1h ago
And how often are we reviewing doctors' performance?

I suspect many, many doctors also regularly fail to recognize medical emergencies.

jerlam•1h ago
Isn't this what malpractice is?
ThrowawayTestr•1h ago
It's only malpractice when there's negligence. If other doctors agree that they could have made the same mistake, it's not malpractice.
lkbm•57m ago
You also don't sue for malpractice unless something goes catastrophically wrong. I've had doctors make ludicrously bad diagnoses, and while it sucked until I found a competent doctor and got proper treatment, it wasn't something I was going to go to court over.
MostlyStable•1h ago
A friend of mine had such a bad experience with _multiple_ American doctors missing a major issue that nearly ended up killing her that she decided that, were she to have kids, she would go back to Russia rather than be pregnant in the American medical system.

Now, I don't agree that this is a good decision, but the point is, human doctors also often miss major problems.

emp17344•1h ago
Amazing how you can just deflect any criticism of LLMs here by going “but humans suck too!” And the misanthropic HN userbase eats it up every time.

We live during the healthiest period in human history due to the fact that doctors are highly reliable and well-trained. You simply would not be able to replace a real doctor with an LLM and get desirable results.

boondongle•48m ago
Even in medicine, the difference between drug A and drug B is often only a statistical one. If drugs were held to the standard of "works 100% of the time", no drug would ever be cleared for use. Feelings about AI and this administration are influencing this conversation far too much.

It's like people want to remove the physician and the current standard of care from the discussion. It's weird, because care is already too expensive and too error prone for what it costs.

KronisLV•43m ago
> Amazing how you can just deflect any criticism of LLMs here by going “but humans suck too!” And the misanthropic HN userbase eats it up every time.

I think it's rather people trying to keep grounded and suggest that it's not just the hallucination machine that's bad, but also that many doctors in real life also suck - in part because of the domain being complex, but also due to a plethora of human reasons, such as not listening to your patients properly or disregarding their experiences and being dismissive (seems to happen to women more for some reason), or sometimes just being overworked.

> You simply would not be able to replace a real doctor with an LLM and get desirable results.

I don't think people should be replaced with LLMs, but we should benchmark the relative performance of various approaches:

  A) the performance of doctors alone, no LLMs
  B) the performance of LLMs alone, no human in the loop
  C) the performance of doctors, using LLMs
The problem is that benchmarking on historical cases where humans resolved the issue, rather than the ones where the patient died (or suffered as a consequence of wrong calls), would pre-select for the things humans might be good at; some of those failures wouldn't even be properly recorded, since they amount to straight-up malpractice by humans. But benchmarking just LLMs against cases like that wouldn't give enough visibility into the failings of humans either.

Ideally you'd assess the weaknesses and utility of both at a meaningfully large scale, in search of blind spots and systemic issues, the problem being that benchmarking that in a vacuum without involving real cases might prove to be difficult and doing that on real cases would be unethical and a non-starter. And you'd also get issues with finding the truly shitty doctors to include in the sample set, sometimes even ones with good intentions but really overworked (other times because their results would suggest they shouldn't be practicing healthcare), otherwise you're skewing towards only the competent ones which is a misrepresentation of reality.

Reminds me of an article that got linked on HN a while back: https://restofworld.org/2025/ai-chatbot-china-sick/

The fact that someone would say something like "Doctors are more like machines" implies failure before we even get to basic medical competency. People willingly misdirect themselves and risk getting horrible advice, because humans will not give better advice and the sycophantic machine is just nicer.

SoftTalker•1h ago
Medical errors are one of the leading causes of death. It's a real catch-22. If you're under medical care for something serious, there's a real chance that someone will make a mistake that kills you.
nradov•59m ago
In the general case it's usually not possible to accurately review an individual physician's performance. The software developers here on HN like to think in simplistic binary terms but in the real world of clinical care there is usually no reliable source of truth to evaluate against. Occasionally we see egregious cases of malpractice or failure to follow established clinical practice guidelines but below that there's a huge gray area.

If you look at online reviews, doctors are mostly rated based on being "nice" but that has little bearing on patient outcomes.

unstyledcontent•1h ago
I have had some incredible medical advice from ChatGPT. It has saved me from small mystery issues, like a rash on my face. Small enough issues that I probably wouldn't have bothered to go into a doctor. BUT it also failed to diagnose me with a medical issue that ended up with a trip to the ER and emergency surgery.

A few weeks before the ER, I was having stomach pain. I went to the doctor with theories from ChatGPT in hand; they checked me for those things and then didn't check me for what ended up being a pretty obvious issue. What's interesting is that I mentioned to the doctor that I had used ChatGPT, and the doctor even seemed to value that opinion and did not consider other options (what it ultimately turned out to be was rare, but really obvious in retrospect; I think most doctors would have checked for it). I do feel I actually biased the first doctor's opinion with my "research."

soco•1h ago
Which is exactly why the AI, at least the ones of today, should never be used beyond the level of (trusted or not) advisor. Yet not only many CxOs and boards, but even certain governments which shall not be named, are stubbornly trying, for cost or whatever other reasons, to throw entire populations (employees or nations) under the AI bus. And I sincerely don't believe anything short of an uprising will be able to stop them. Change my mind.
qalmakka•1h ago
I agree. AI right now is at the level of a "knowledgeable friend", not a "professional with years of real-world experience". You'd listen to what your friend has to say, but taking pills on one of their suggestions? Dumb idea. It's great for brainstorming, but just like your knowledgeable friend who reads Wikipedia a bit too much, you really need to check that it isn't jumping to conclusions too quickly.
simonebrunozzi•1h ago
> but even certain governments which shall not be named

Why can't you name them, and give us some context? Is this based on public info, or not?

_dwt•19m ago
Not the original commenter, but you may have noticed a wee kerfuffle between a large nation-state's "Secretary of War" and a frontier model provider over whether the model's licensing would permit autonomous lethal weapon systems operated by said - and I cannot emphasize the middle word enough - large _language_ model.
SoftTalker•1h ago
> what it ultimately ended up being was rare but really obvious in retrospect, I think most doctors would have checked for it

I'm not so sure. Doctors are trained to check for the most common things that explain the symptoms. "When you hear hoofbeats, think horses not zebras" is a saying that is often heard in medicine.

ChatGPT was trained on the same medical textbooks and research papers that doctors are.

giraffe_lady•55m ago
> ChatGPT was trained on the same medical textbooks and research papers that doctors are.

Yeah hm I wonder what the difference could possibly be.

hwillis•1h ago
> I do feel I actually biased the first doctor's opinion with my "research."

It may feel easy to say doctors should just consider all the options. But telling them an option does more than just bias their thinking; they are going to interpret it as information about your symptoms.

If you feel pain in your abdomen but are only talking about your appendix, they are rightfully going to think the pain is in the region of your appendix. They are not going to treat you like you have kidney pain. How could they? If they have to treat all of your descriptions as all the things that you could be relating them to, then that information is practically useless.

boondongle•39m ago
This is ultimately the same difference between a search engine and a professional. 10 years before this, Googling the symptoms was a thing.

I have a family member who had a "rare but obvious" one but it took 5 doctors to get to the diagnosis. What we really need to see are attempts to blind studies and real statistical rigor. It's funny to paint a tunnel on a canvas and get a Tesla to drive into it, but there's a reason studies (and the more blind the better) are the standard.

Aurornis•34m ago
> I do feel I actually biased the first doctors opinion with my "research."

This has been a big problem in medicine since the early days of WebMD: Each appointment has a limited time due to the limited supply of doctors and high demand for appointments.

When someone arrives with their own research, the doctor has to make a choice: Do they work with what the patient brought and try to confirm or rule it out, or do they try to walk back their research and start from the beginning?

When doctors appear to disregard the research patients arrive with, many patients get very angry. It leads to negative reviews or even formal complaints being filed (usually with encouragement from some Facebook group or TikTok community they were in). There can be even bigger problems if the patient turns out to be correct and the doctor did not embrace the research, which can prompt lawsuits.

So many doctors will err on the side of focusing on patient-provided theories first. Given the finite time available to see each patient (with waiting lists already extending months out in some places) this can crowd out time for getting a big picture discussion through the doctor's own diagnostic process.

When I visit a doctor I try to ground myself to starting with symptoms first and try to avoid biasing toward my thoughts about what it might be. Only if the conversation is going nowhere do I bring out my research, and then only as questions rather than suggestions. This seems to be more helpful than what I did when I was younger, which is research everything for hours and then show up with an idea that I wanted them to confirm or disprove.

bryanlarsen•11m ago
> Each appointment has a limited time

A doctor is typically scheduled at 6 patients/hour. In that time they also have to chart, walk between rooms, make up time for the other patients that inevitably went over time, et cetera. The doctor you're seeing probably has a goal of only talking to you for 3 minutes.

tokai•4m ago
My aunt died from this (my opinion). She spent two years confusing her diagnosis and treatment, and borderline harassing her doctors, by thinking her own research was on point and interpreting all her symptoms through that lens. In the end it wasn't borrelia, parasites, 5G, or any of the other fancies, but just lung cancer that was only diagnosed when it was very far advanced.
BloondAndDoom•19m ago
The real story here is that your doctor actually listened to you. I appreciate what a lot of doctors do, but the majority of them are fucking irritating and don’t even listen to your issues. I’m glad we have AI and are less reliant on them.
nerdjon•1h ago
Even though these tools are showing time and time again that they have serious reliability issues, somehow people still think it is a good idea to use them for critical decisions.

Still regularly get wrong information from google’s search AI.

Really starting to wonder if common sense is ever going to come back with new tech, but I fear it is going to require something truly catastrophic to happen.

duskdozer•1h ago
It's really "common sense" that's the issue, i.e. believing things without thinking because they "sound right", because your parents told you them a lot growing up, or because you watched an ad say them a hundred times. People don't want "the truth" or uncomfortable realities; they want comfortable, easily digestible bullshit. Smooth talkers filled that role before, and LLMs are filling it now.
lkbm•1h ago
> Still regularly get wrong information from google’s search AI.

The fact that the model most hyper-optimized for cheap+fast makes mistakes is not a particularly compelling argument.

raddan•13m ago
You are mistaken. ChatGPT Health [1] is a model specifically designed for health applications and was co-developed with a benchmark suite, HealthBench [2], for testing against health conditions. This study suggests that the people working on HealthBench have some concerning external validity problems.

[1] https://openai.com/index/introducing-chatgpt-health/

[2] https://cdn.openai.com/pdf/bd7a39d5-9e9f-47b3-903c-8b847ca65...

yodsanklai•29m ago
It's a strange paradigm shift, where the tool is right and useful more often than not, but also makes expensive mistakes that an expert would have spotted easily.
varispeed•1h ago
I find that 5.2 has been completely dumbed down. Feels more like talking to early versions of Gemini when it quickly enters into loop state.
nashashmi•1h ago
Has anyone tried it on sudoku puzzles? In the middle of a hard game I'll submit a screenshot to Copilot or Gemini, and it hallucinates suggestions for the next move.
TZubiri•1h ago
How about we allow ChatGPT to be used alongside human MD diagnosis?

Win win right?

nradov•57m ago
What do you mean "allow"? From a public policy perspective there's nothing prohibiting that today, as long as the human MD follows the HIPAA privacy rule.
nerevarthelame•6m ago
That would need to be tested. If doctors get lazy, complacent, or overworked (!), a "doctor with access to ChatGPT Health" may be functionally equivalent to "just ChatGPT Health" in some cases.
jbverschoor•52m ago
Sounds exactly like a GP in the Netherlands
dyauspitr•51m ago
I feel like these need to be run against case histories from already-determined cases, not cases where the doctors set up the scenarios knowing they’re going to be run against ChatGPT.
Scoundreller•49m ago
Search engines and Dr. Google must be feeling like they’ve dodged some artillery-level bullets in this debate.
hayleox•44m ago
I think there is so much potential for AI in healthcare, but we absolutely HAVE to go through the existing ruleset of conducting years of research and trials and approvals before pushing anything out to patients. Move fast and break things is simply not an option in healthcare.
weatherlite•4m ago
It depends; people actually get sicker and even die due to endless backlog and lack of doctors (in most developed countries). It's not as if everyone gets optimal care now. A.I can at least expedite things hopefully.
WalterBright•34m ago
Doctors also miss things.

A friend of mine had an accident. He was taken to the emergency room, but the doctors there thought his injuries were minor. My friend insisted that he was bleeding out internally. They finally checked for that, and it turns out he was minutes from dying.

AI wasn't involved in this case, but it's good to have both AI and a trained doctor in the decision loop.

dipflow•31m ago
Adding normal lab results made the suicide crisis banner disappear? That's a weird failure mode. You'd expect unrelated context to be ignored, not to override the risk signal.