frontpage.

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•6m ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
1•o8vm•8m ago•0 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•9m ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•22m ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•25m ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
1•helloplanets•27m ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•35m ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•37m ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•38m ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•39m ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
1•basilikum•41m ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•42m ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•46m ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
3•throwaw12•48m ago•1 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•48m ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•49m ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•51m ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•54m ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•57m ago•1 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
2•mgh2•1h ago•0 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•1h ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•1h ago•1 comments

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•1h ago•0 comments

Study of 150 developers shows AI generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
2•lifeisstillgood•1h ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
1•bundie•1h ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•1h ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
2•birdculture•1h ago•1 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•1h ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
2•ramenbytes•1h ago•0 comments

So what's the next word, then? Almost-no-math intro to transformer models

https://matthias-kainer.de/blog/posts/so-whats-the-next-word-then-/
1•oesimania•1h ago•0 comments

AI fares better than doctors at predicting deadly complications after surgery

https://hub.jhu.edu/2025/09/17/artificial-intelligence-predicts-post-surgery-complications/
26•_p2zi•4mo ago

Comments

whyandgrowth•4mo ago
The strange thing is that such articles always evoke the idea that AI is replacing humans even in serious work, which is frightening.
siva7•4mo ago
The strange thing is that this potentially life-saving tech will only collect dust, because AI in medicine is only good for papers, not for real-world usage. See all other AI medicine advancements. Same pattern. Medicine has a problem of being unwilling to use modern tech to save lives.
catigula•4mo ago
I can understand reluctance to base conclusions about people's health on minimal evidence.
toss1•4mo ago
The headline definitely evokes such an idea, but the detail in the article simply shows the machine learning system doing a better job of augmenting the doctors' work.
hooverd•4mo ago
Eh, this is about Luddite statistical models and not real AI (chatbots).
deciduously•4mo ago
Let's see how well it draws an SVG of a pelican on a bicycle
reify•4mo ago
"Fares Better" sounds unscientific and very much like click bait

In cases where the numbers suggest that the average treated person "Fares better" than barely over 50% of the control group, or when effects are inconsistent, readers may not interpret the effects as profound.

Providing real numbers that are easily understandable, rather than evocative descriptions, allows readers to form their own conclusions about the results.

philipallstar•4mo ago
It says that doctors could accurately predict whether a patient would die after surgery 60% of the time, versus 85% of the time for the AI.
datadrivenangel•4mo ago
If a surgery is extremely risky, the doctors probably won't do it... so there's a systemic bias here in the data.
BrokenCogs•4mo ago
Human doctors have a tendency to underestimate their own complication rate, often because they are too delusional about their own capabilities. I've heard the same doctor say "this has never happened to me in my 20 years of doing surgery" twice, when a complication occurred during a surgical procedure.
catigula•4mo ago
AI seems to explain this better than the article's framing does:

>...the body of the article doesn’t describe a panel of physicians making predictions at all. The headline says “AI fares better than doctors,” but the text says the model outperformed “risk scores currently relied upon by doctors,” i.e., standard scoring tools clinicians use—not the judgments of the surgeons on the case or an outside panel.

Ekaros•4mo ago
This in general should be expected. An ML model does a certain amount of fitting, so the end results are probably better than fitting done by humans. The trade-off, to my understanding, is that you might not understand the algorithm used.

You need to ask whether you prefer a better black box or a weaker white box that you can understand and reason about. For many tasks a black box is fine. For this one, I wonder which I would prefer...
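For concreteness, a minimal sketch of that trade-off (entirely synthetic data and made-up feature names, not the study's model or variables): fit an interpretable logistic regression and a black-box gradient-boosted model on the same toy "surgical risk" data; only the former exposes weights you can reason about directly.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000
    # Hypothetical features: age, creatinine, ejection fraction, emergency-case flag
    X = np.column_stack([
        rng.normal(65, 12, n),       # age (years)
        rng.lognormal(0.1, 0.4, n),  # creatinine (mg/dL)
        rng.normal(55, 10, n),       # ejection fraction (%)
        rng.binomial(1, 0.2, n),     # emergency surgery (0/1)
    ])
    # Synthetic outcome: risk rises with age, creatinine, emergency; falls with EF
    logit = 0.04*(X[:, 0]-65) + 0.8*(X[:, 1]-1) - 0.05*(X[:, 2]-55) + 1.2*X[:, 3] - 2.5
    y = rng.binomial(1, 1/(1 + np.exp(-logit)))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    white_box = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # inspectable coefficients
    black_box = GradientBoostingClassifier().fit(X_tr, y_tr)        # opaque ensemble of trees

    print("logistic AUC:", roc_auc_score(y_te, white_box.predict_proba(X_te)[:, 1]))
    print("boosting AUC:", roc_auc_score(y_te, black_box.predict_proba(X_te)[:, 1]))
    # Only the white box gives per-feature weights you can reason about directly:
    print(dict(zip(["age", "creatinine", "ejection_fraction", "emergency"],
                   white_box.coef_[0].round(3))))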

datavirtue•4mo ago
Until we build in the same financial bias...
bitwize•4mo ago
I get the feeling that this is one of those things where you s/AI/statistics/g. Doctors using a predictive statistical model trained on thousands of patients' worth of data faring better than doctors using the seat of their pants makes total sense.
more_corn•4mo ago
We need better words. This isn’t a chatbot.

Most people think ChatGPT == AI, whereas this is a specially trained model tuned to this exact use case.

estimator7292•4mo ago
I actually think ML models would excel here. Humans are famously bad at estimating and weighing risks and there's really only so much data a single human brain can store and draw conclusions from. Not to mention bias like female patients being chronically under-diagnosed by male doctors.

If you fed a mountain of surgery outcome data into an ML model, I imagine it'd be shockingly effective and (hopefully) less biased on sex and race.

It'd probably be helpful for initial diagnosis, but I'm less confident in that. Postop risk assessment is mostly straight statistics, and statistical inference is what ML models do. Diagnosis is a bit more subjective and complex, though it is in the same general domain.

The real trick is going to be conditioning doctors not to blindly trust the risk-assessment model, though I would hope it'd be accurate enough for that anyway.
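For what it's worth, a minimal sketch (toy data only, nothing from the study) of the kind of per-subgroup check that would catch the sex or race bias worry above: score the model's discrimination and calibration separately within each group.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    n = 2000
    sex = rng.choice(["F", "M"], size=n)
    risk = np.clip(rng.beta(2, 15, size=n) + 0.05 * (sex == "M"), 0, 1)  # toy underlying risk
    y_true = rng.binomial(1, risk)                                        # observed complication
    y_pred = np.clip(risk + rng.normal(0, 0.05, size=n), 0, 1)            # toy model's risk estimate

    # Check discrimination and calibration separately within each group:
    for group in ["F", "M"]:
        m = sex == group
        auc = roc_auc_score(y_true[m], y_pred[m])
        ratio = y_pred[m].mean() / y_true[m].mean()   # predicted vs. observed event rate
        print(f"{group}: AUC={auc:.2f}, predicted/observed rate={ratio:.2f}")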

nitwit005•4mo ago
The article mentions a previously existing risk scoring system, which was presumably already trying to deal with the problem of humans not being great at evaluating the risk.
dogmatism•4mo ago
1) No, machine learning performs better than typical "risk scores" such as the RCRI (it was not tested against doctors' clinical judgement)

2) Even so... so what? What we don't have is any reliable way to reduce surgical complications when the risk is elevated but the benefit still outweighs it.
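For anyone unfamiliar with it, the RCRI is just a handful of yes/no criteria added into a point total, which is exactly the kind of hand-built score a model fitted on far more variables would be expected to beat. A rough sketch, with the criteria paraphrased from memory rather than taken from the article:

    # RCRI-style additive score (criteria paraphrased from memory; the published
    # index has precise definitions and validated complication rates per point
    # total, which are omitted here).
    RCRI_CRITERIA = [
        "high_risk_surgery",          # intraperitoneal, intrathoracic, or suprainguinal vascular
        "ischemic_heart_disease",
        "congestive_heart_failure",
        "cerebrovascular_disease",
        "insulin_treated_diabetes",
        "creatinine_over_2_mg_dl",
    ]

    def rcri_points(patient: dict) -> int:
        """One point per criterion present; higher totals mean higher cardiac risk."""
        return sum(1 for c in RCRI_CRITERIA if patient.get(c, False))

    print(rcri_points({"ischemic_heart_disease": True, "creatinine_over_2_mg_dl": True}))  # 2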

Agingcoder•4mo ago
For 2), I guess that if you know you will die with high probability, you will search (if at all possible) for an alternative treatment (which might have side effects, but at least you're alive)?
dogmatism•4mo ago
It doesn't really work like that for the most part.

If you actually need a really high-risk surgery, you probably have a terrible prognosis without it.

For instance, in the pivotal trial of transcatheter aortic valve replacement (TAVR) for aortic stenosis, the patients were deemed too high risk for surgery, so they got either nothing (well, medicine only, which doesn't really change anything for this condition) or TAVR. The medicine arm had 50% mortality (at 1 year, I think?) whereas the TAVR arm was "only" 30%!

Now that didn't mean all those 30% of deaths were due to the procedure or even the aortic stenosis. I think that ran 10% or so (going off memory here). They just had so many other problems. For comparison, TAVR is now done in low-risk people, and I think the 1-year mortality is <3%.

The things that go into making someone "high risk" in the STS (cardiac surgery) risk score are for the most part pretty obvious: your heart muscle is super weak (or you need a machine to keep going before surgery), you have kidney failure, prior strokes, combined heart problems, bad liver or lung disease, etc. You can calculate a score, but you can probably guess it from the door of the room.