
Introduction – Agent Client Protocol

https://agentclientprotocol.com/overview/introduction
1•sync•2m ago•0 comments

I Spent over $31,900 on Whiteout Survival – Here's Why I Regret It

https://old.reddit.com/r/whiteoutsurvival/comments/1hki2e9/i_spent_over_31900_on_whiteout_surviva...
1•Ralfp•3m ago•0 comments

Clang 21.1.0 Release Notes

https://releases.llvm.org/21.1.0/tools/clang/docs/ReleaseNotes.html
1•pjmlp•3m ago•0 comments

Show HN: I built TabX – A Chrome extension that makes sense of your tab chaos

https://www.tabx.dev/
1•yashxsagar•5m ago•0 comments

Built a TradingView alternative that creates indicators with AI

https://www.aulico.com/
1•lollobrigo•6m ago•0 comments

Lidar measures the toll of climate disasters

https://www.beautifulpublicdata.com/how-lidar-measures-the-toll-of-climate-disasters/
1•speckx•8m ago•0 comments

What a difference 2 years makes: MariaDB buys back SkySQL

https://www.theregister.com/2025/08/27/mariadb/
1•rntn•9m ago•0 comments

Learning Perl in one day and the importance of building strong foundations

https://guilhermenl.dev/articles/9096ed7725d387606d713e7964e2b3ac06f9bebd2650080b9ca070f0106f5c70
1•henriquegodoy•9m ago•0 comments

Bring Your Own Agent to Zed – Featuring Gemini CLI

https://zed.dev/blog/bring-your-own-agent-to-zed
2•meetpateltech•12m ago•0 comments

Show HN: Test Your Reaction Time

https://trickle.so/apps/traffic
2•Bob_Chen•13m ago•0 comments

Grok Code Fast 1 is rolling out in public preview for GitHub Copilot

https://github.blog/changelog/2025-08-26-grok-code-fast-1-is-rolling-out-in-public-preview-for-gi...
1•MuitoSall•18m ago•0 comments

Nx compromised: malware uses Claude code CLI to explore the filesystem

https://semgrep.dev/blog/2025/security-alert-nx-compromised-to-steal-wallets-and-credentials/
5•neuroo•18m ago•0 comments

The hill where the idea of a Palestinian state may die – CNN

https://www.cnn.com/2025/08/26/middleeast/west-bank-israel-e1-palestinians-latam-intl
1•vinnyglennon•20m ago•0 comments

Tearing Down a Bistable Cholesteric Display [video]

https://www.youtube.com/watch?v=P8mYTyhyB2w
1•iamflimflam1•21m ago•0 comments

Show HN: BYO-database website analytics built for indie hackers

https://berrylog.app/
1•lakshikag•23m ago•0 comments

The A.I. Spending Frenzy Is Propping Up the Real Economy, Too

https://www.nytimes.com/2025/08/27/business/economy/ai-investment-economic-growth.html
1•ryan_j_naughton•24m ago•0 comments

Show HN: A Discord bot that helps users keep their problem-solving streaks

https://github.com/mohyware/streak-punisher-bot
1•mohyware•26m ago•0 comments

What Are Traces and Spans in OpenTelemetry?

https://oneuptime.com/blog/post/2025-08-27-traces-and-spans-in-opentelemetry/view
1•ndhandala•26m ago•0 comments

Ask HN: How does Bret Taylor not have a conflict of interest?

1•gkolli•27m ago•0 comments

Running our Docker registry on-prem with Harbor

https://dev.37signals.com/running-our-docker-registry-on-prem-with-harbor/
1•airblade•27m ago•0 comments

ASCIIFlow

https://asciiflow.com/
2•marcodiego•27m ago•0 comments

The Quiet Dance Between Knowing and Doing

https://lightcapai.medium.com/the-quiet-dance-between-knowing-and-doing-4b87c0b65665
1•WASDAai•28m ago•0 comments

Jet-Nemotron

https://github.com/NVlabs/Jet-Nemotron
1•pilooch•30m ago•0 comments

Money Can't Buy You Love: The Story Behind Elon Musk's Berghain Rejection

https://berlinguide.de/money-cant-buy-you-love-the-story-behind-elon-musks-berghain-rejection/
7•speckx•30m ago•1 comments

Show HN: A database of 200 trusted directories to boost your domain rating

https://www.boostdr.xyz/
1•mohitvaswani•31m ago•0 comments

Simple framework to measure product-market fit pre-revenue

https://www.doctormarket.fit/p/the-startup-thermometer
1•coelen•32m ago•0 comments

C# 15 Union proposals overview

https://github.com/dotnet/csharplang/blob/c3325533e57dec6aec3266e066e39abf7260e87a/meetings/worki...
2•Vake93•32m ago•1 comments

Delivering a Robust Power Grid Tech Stack

https://www.smpnet.tech/post/built-for-those-who-code-the-grid-the-grid-tech-stack-that-actually-...
1•gtzi•34m ago•0 comments

CISA warns of actively exploited Git code execution flaw

https://www.bleepingcomputer.com/news/security/cisa-warns-of-actively-exploited-git-code-executio...
1•akyuu•36m ago•1 comments

Ask HN: What's your 2025 "quality stack"?

1•fazlerocks•37m ago•0 comments

The Therac-25 Incident (2021)

https://thedailywtf.com/articles/the-therac-25-incident
198•lemper•5h ago

Comments

rokkamokka•4h ago
I was taught this incident in university many years ago. It's undeniably an important lesson that shouldn't be forgotten
napolux•4h ago
The most deadly bug in history. If you know any other deadly bug, please share! I love these stories!
NitpickLawyer•4h ago
The MCAS related bugs @ Boeing led to 300+ deaths, so it's probably a contender.
solids•4h ago
Was that a bug or a failure to inform pilots about a new system?
AdamN•4h ago
Both - and really MCAS was fine but the issue was the metering systems (Pitot tubes) and the handling of conflicting data. That part of the puzzle was definitely a bug in the logic/software.
kijin•3h ago
Remember the Airbus that crashed in the middle of the Atlantic because one of the pilots kept pulling on his yoke, and the computer decided to average his input with normal input from the other pilot?

Conflict resolution in redundant systems seems to be one of the weakest spots in modern aircraft software.

sgerenser•1h ago
Air France 447: https://en.m.wikipedia.org/wiki/Air_France_Flight_447

Inputs were averaged, but supposedly there’s at least a warning: Confused, Bonin exclaimed, "I don't have control of the airplane any more now", and two seconds later, "I don't have control of the airplane at all!"[42] Robert responded to this by saying, "controls to the left", and took over control of the aircraft.[84][44] He pushed his side-stick forward to lower the nose and recover from the stall; however, Bonin was still pulling his side-stick back. The inputs cancelled each other out and triggered an audible "dual input" warning.

phire•3h ago
That wasn't a bug.

They deliberately designed it to only look at one of the Pitot tubes, because if they had designed it to look at both, then they would have had to implement a warning message for conflicting data.

And if they had implemented a warning message, they would have had to tell the pilots about the new system, and train them how to deal with it.

It wasn't a mistake in logic either. This design went through their internal safety certification, and passed.

As far as I'm aware, MCAS functioned exactly as designed, zero bugs. It's just that the design was very bad.

mnw21cam•1h ago
It wasn't pitot tubes that had the hardware problem, it was the angle of attack sensor. The software was poorly designed to believe the input from just one fallible angle of attack sensor.
thyristan•4h ago
In the same vein one could argue that Therac-25 was not actually a software bug but a hardware problem. Interlocks that could have prevented the accidents, and that were present in earlier Therac models, were missing. The software was written with those interlocks in mind. Greedy management/hardware engineers skipped them for the -25 version.

It's almost never just software. It's almost never just one cause.

actionfromafar•3h ago
Just to put it even more plainly: there's almost never a single root cause.
NitpickLawyer•3h ago
I would say plenty of both. They obviously had to inform the pilots, but the way the system didn't stay off after 2-3 (whatever) rounds of "oh, the pilot trimmed manually; after 10 seconds we do the same thing again" was a major, major logic blunder. Failure all across the board, if only from the perspective of end-to-end / integration testing if nothing else.

Worryingly, inadequate e2e / full integration testing was also the main cause of other Boeing blunders, like the Starliner capsule.

fuckaj•3h ago
Not a bug. A non-airworthy plane they tried to patch up with software.
reorder9695•2h ago
The plane was perfectly airworthy without MCAS; that was never the issue. The issue was that it handled differently enough at high angles of attack from the 737NG that pilots would've needed additional training or possibly a new type rating without MCAS changing the trim in this situation. The competition (the Airbus NEO family) did not need this kind of new training for existing pilots, so airlines being required to do this for new Boeing but not Airbus planes would've been a huge commercial disadvantage.

[edit as I can't reply to the child comment]: The FAA and EASA both looked into the stall characteristics afterwards and concluded that the plane was stable enough to be certified without MCAS, and while it did have more of a tendency to pitch up at high angles of attack, it was still an acceptable amount.

fuckaj•2h ago
I may have understood wrong, but I thought it was possible to get into an unrecoverable stall?
echelon•4h ago
The 737 Max MCAS is arguably a bug. That killed 346 people.

Not a "bug" per se, but texting while driving kills ~400 people per year in the US. It's a bug at some level of granularity.

To be tongue in cheek a bit, buggy JIRA latency has probably wasted 10,000 human years. Those are many whole human lives if you count them up.

b_e_n_t_o_n•3h ago
> To be tongue in cheek a bit, buggy JIRA latency has probably wasted 10,000 human years. Those are many whole human lives if you count them up.

These kinds of calculations always make me wonder... say someone wasted one minute of everybody's life, is the cost ~250 lives? One minute? Somewhere in between?

benrutter•4h ago
Probably many bugs rather than a single one, but the botched London Ambulance dispatch software from the 90s is probably one of the most deadly software issues of all time, although there aren't any estimates I know of that try to quantify the number of lives lost as a result.

http://www0.cs.ucl.ac.uk/staff/a.finkelstein/papers/lascase....

kgwgk•3h ago
Several people killed themselves over this: https://www.wikipedia.org/wiki/British_Post_Office_scandal

https://www.theguardian.com/uk-news/2024/jan/09/how-the-post...

One member of the development team, David McDonnell, who had worked on the Epos system side of the project, told the inquiry that “of eight [people] in the development team, two were very good, another two were mediocre but we could work with them, and then there were probably three or four who just weren’t up to it and weren’t capable of producing professional code”.

What sort of bugs resulted?

As early as 2001, McDonnell’s team had found “hundreds” of bugs. A full list has never been produced, but successive vindications of post office operators have revealed the sort of problems that arose. One, named the “Dalmellington Bug”, after the village in Scotland where a post office operator first fell prey to it, would see the screen freeze as the user was attempting to confirm receipt of cash. Each time the user pressed “enter” on the frozen screen, it would silently update the record. In Dalmellington, that bug created a £24,000 discrepancy, which the Post Office tried to hold the post office operator responsible for.

Another bug, called the Callendar Square bug – again named after the first branch found to have been affected by it – created duplicate transactions due to an error in the database underpinning the system: despite being clear duplicates, the post office operator was again held responsible for the errors.
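The Dalmellington failure mode described above (a frozen screen where every retry silently re-applies the update) is the classic argument for idempotent submission. Below is a minimal sketch of one common guard, an idempotency key generated once per confirmation screen; every name and number is illustrative and not taken from Horizon:

    import uuid

    # Ledger of already-applied receipts, keyed by an idempotency key that the
    # UI generates once when the confirmation screen is opened.
    applied = {}

    def confirm_cash_receipt(branch_id, amount, idempotency_key):
        """Apply a cash receipt exactly once, however many times the
        confirmation is retried (e.g. repeated Enter on a frozen screen)."""
        if idempotency_key in applied:
            # Retry of an already-applied confirmation: return the original
            # result instead of silently booking the cash a second time.
            return applied[idempotency_key]
        record = {"branch": branch_id, "amount": amount}
        # ... persist `record` to the ledger here ...
        applied[idempotency_key] = record
        return record

    key = str(uuid.uuid4())                            # one key per confirmation screen
    confirm_cash_receipt("dalmellington", 24000, key)
    confirm_cash_receipt("dalmellington", 24000, key)  # retry: no duplicate entry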

BoxOfRain•3h ago
More heads should have rolled over this in my opinion. It's absolutely despicable that they cheerfully threw innocent people in prison rather than admit their software was a heap of crap. It makes me so angry that this injustice was allowed to prevail for so long because nobody cared about the people being mistreated and tarred as thieves, as long as they were 'little people' of no consequence, while senior management gleefully covered themselves in criminality to cover for their own uselessness.

It's an archetypal example of 'one law for the connected, another law for the proles'.

A1kmm•3h ago
Not even close. Israel apparently has AI bombing target intel & selection systems called Gospel and Lavender - https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai.... Claims are these systems have a selectivity of 90% per bombing, and they were willing to bomb up to 20 civilians per person classified by the system as a Hamas member. So assuming that is true, 90% of the time they kill one Hamas member and up to 20 innocents. 10% of the time, they kill up to 21 innocents and no Hamas members.

Killing 20 innocents and one Hamas member is not a bug - it is callous, but that's a policy decision and the software working as intended. But when it is a false positive (10% of the time), due to inadequate / outdated data and inadequate models, that could reasonably be classified as a bug - so all 21 deaths for each of those bombings would count as deaths caused by a bug. Apparently (at least earlier versions of) Gospel were trained on positive examples that mean someone is a member of Hamas, but not on negative examples; other problems could be due to, for example, insufficient data, and interpolation outside the valid range (e.g. using pre-war data about, say, how quickly cell phones are traded, or people's movements, when behaviour is different post-war).

I'd therefore estimate that deaths due to classification errors from those systems are likely in the thousands (out of the 60k+ Palestinian deaths in the conflict). Therac-25's bugs caused 6 deaths, for comparison.

danadam•3h ago
Some Google Pixel phones couldn't dial emergency number (still can't?). I don't know if there were any deadly consequences of that.

https://www.androidauthority.com/psa-google-pixel-911-emerge...

throwaway0261•1h ago
There was a news story from Norway last year where a car allegedly accelerated by itself, causing the car to fall off the second floor of a parking garage and kill the driver.
mnw21cam•1h ago
There are plenty of "car allegedly accelerated by itself" incidents, and usually the root cause is the driver mistakenly pressing the accelerator pedal when they think they're pressing the brake pedal. And then swearing blind afterwards that they were braking as hard as they possibly could but the car kept surging forwards.
bobmcnamara•20m ago
Time and time again the introduction of electronic throttle control has spiked the number of unintended acceleration incidents.

There's a chart here that shows it clearly for Toyota's rollout:

https://www.embedded.com/unintended-acceleration-and-other-e...

bobmcnamara•30m ago
In Dhahran, Saudi Arabia, on February 25, 1991, a Patriot missile failed to intercept an Iraqi Scud, causing the deaths of 28 American soldiers.

The Patriot missile system used floating point for time, so as uptime extended the clock became more and more granular, eventually to the point where time skipped so far that the range gate was tripped.

The fix was being deployed earlier that year, but this unit hadn't been updated yet.

https://www.cs.unc.edu/~smp/COMP205/LECTURES/ERROR/lec23/nod...
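A small sketch of the general failure mode described in that comment: a 32-bit floating-point "seconds since boot" clock whose representable step grows with uptime, so a fixed tick loses more and more precision. The numbers are purely illustrative and not taken from the Patriot report (the deployed system's arithmetic was more involved):

    import numpy as np

    # How coarse a float32 "seconds since boot" clock becomes as uptime grows.
    # np.spacing(t) is the gap to the next representable float32 value (one ULP).
    TICK = 0.1  # a 100 ms scheduler tick, for comparison

    for hours in (1, 10, 100, 1000):
        t = np.float32(hours * 3600.0)
        ulp = np.spacing(t)
        print(f"uptime {hours:5d} h: clock step ~ {ulp:.6f} s "
              f"({ulp / TICK:.1%} of a {TICK} s tick)")

By a few hundred hours of uptime the clock can no longer represent changes much smaller than a few hundredths of a second, which is the kind of creeping error the comment describes tripping the range gate.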

benrutter•4h ago
> software quality doesn't appear because you have good developers. It's the end result of a process, and that process informs both your software development practices, but also your testing. Your management. Even your sales and servicing.

If you only take one thing away from this article, it should be this one! The Therac-25 incident is a horrifying and important part of software history. It's really easy to think type systems, unit testing and defensive coding can solve all software problems. They definitely can help a lot, but the real failure in the story of the Therac-25, from my understanding, is that it took far too long for incidents to be reported, investigated and fixed.

There was a great Cautionary Tales podcast about the device recently[0]. One thing mentioned was that, even aside from the catastrophic accidents, Therac-25 machines were routinely seen by users to show unexplained errors, but these issues never made it to the desk of someone who might fix them.

[0] https://timharford.com/2025/07/cautionary-tales-captain-kirk...

AdamN•4h ago
This is true, but there also need to be good developers. It can't just be great process and low-quality developer practices. There need to be: 1/ high quality individual processes (development being one of them), 2/ high quality delivery mechanisms, 3/ feedback loops to improve that quality, 4/ out of band mechanisms to inspect and improve the quality.
Fr3dd1•3h ago
I would argue that a good process always has a good self-correction mechanism built in. This way, the work done by a "low quality" software developer (this includes almost all of us at some point in time) is always taken into account by the process.
quietbritishjim•3h ago
Right, but if everyone is low quality then there's no one to do that correction.

That may seem a bit hypothetical but it can easily happen if you have a company that systematically underpays, which I'm sure many of us don't need to think hard to imagine, in which case they will systematically hire poor developers (because those are the only ones that ever applied).

ZaoLahma•2h ago
Replace the "hire poor developers" with "use LLM driven development", and you have the rough outline for a perfect Software Engineering horror movie.

It used to be that the poor performers (dangerous hip-shootin' code commitin' cowpokes) were limited in the amount of code that they could produce per time unit, leaving enough time for others to correct course. Now the cowpokes are producing ridiculous amount of code that you just can't keep up with.

anal_reactor•2h ago
Sad truth is that the average dev is average, but it's not polite to say this out loud. This is particularly important at scale - when you are big tech, at some point you hit a wall and no matter how much you pay you can't attract any more good devs, simply because all the good devs are already hired. This means that corporate processes must be tailored for the average dev, and exceptional devs can only exist in start-ups (or hermetically closed departments). The side effect is that the whole job market promotes the skill of fitting into a corporate environment over the skill of programming. So as a junior dev, for me it makes much more sense to learn how to promote my visibility during useless meetings rather than learn a new technology. And that's how the bar keeps getting lower.
pjmlp•40m ago
The correction is done by the "lucky" souls doing the onsite, customer-facing roles for the offshored delivery. Experience from a friend....
varjag•1h ago
My takeaway from observing different teams over the years is that talent is by a huge margin the most important component. Throw a team of A performers together and it really doesn't matter what process you make them jump through. This is how a waterfall team got mankind to the Moon with hand-woven core memory, while an agile team 10x the size can't fix the software for a family car.
scott_w•27m ago
You conflated, misrepresented and simply ignored so many things in your statement that I really don’t know where to start rebutting it. I’d say at least compare SpaceX to NASA with space exploration but, even then, I doubt you have anywhere near enough knowledge of both programmes to be able to properly analyse, compare and contrast to back up your claim. Hell, do you even know if SpaceX or Tesla are even using an agile methodology for their system development? I know I don’t.

That's not to say talent is unimportant; however, I'd need to see some real examples of high-talent, no-process teams compared to low-talent, high-process teams, then some mixture of the groups, to make a fair statement. Even then, how do you measure talent? I think I'm talented but I wouldn't be surprised to learn others think I'm an imbecile who only knows Python!

rcxdude•1h ago
This only works with enough good developers involved in the process. I've seen how the sausage is made, and code quality is often shockingly low in these applications, just in ways that don't set off the metrics (or they do, but they can bend the process to wave them away). Also, the process often makes it very hard to fix latent problems in the software, so it rarely gets better over time, either.
vorgol•3h ago
I was going to recommend that exact podcast episode but you beat me to it. Totally worth listening to, especially if you're interested in software bugs.

Another interesting fact mentioned in the podcast is that the earlier (manually operated) version of the machine did have the same fault. But it also had a failsafe fuse that blew, so the fault never materialized. Excellent demonstration of the Swiss Cheese Model: https://en.wikipedia.org/wiki/Swiss_cheese_model

bell-cot•39m ago
>> the real failure in the story of the Therac-25 from my understanding, is that it took far too long for incidents to be reported, investigated and fixed.

> the earlier (manually operated) version of the machine did have the same fault. But it also had a failsafe fuse that blew so the fault never materialized.

#1 virtue of electromechanical failsafes is that their conception, design, implementation, and failure modes tend to be orthogonal to those of the software. One of the biggest shortcomings of Swiss Cheese safety thinking is that you too-often end up using "neighbor slices from the same wheel of cheese".

#2 virtue of electromechanical failsafes is that running into them (the fuse blew, or whatever) is usually more difficult for humans to ignore. Or at least it's easier to create processes and do training that actually gets the errors reported up the chain. (Compared to software - where the worker bees all know you gotta "ignore, click 'OK', retry, reboot" all the time, if you actually want to get anything done.)

But, sadly, electromechanical failsafes are far more expensive than "we'll just add some code to check that" optimism. And PHBs all know that picking up nickels in front of the steamroller is how you get to the C-suite.

sonicggg•1h ago
Not sure why the article is focusing so much on software development. That was just a piece of the problem. The entire product had design flaws. When the FDA got involved, the company wasn't just told to make software updates.
speed_spread•57m ago
Yet it doesn't take much to swamp a team of good developers. A poorly defined project, mismatched requirements, sent to production too early and then put in support mode with no time planned to plug the holes... There's only so much smart technicians can do when the organization is broken.
0xDEAFBEAD•47m ago
Honestly I wish instead of the Therac-25, we were discussing a system which made use of unit testing and defensive coding, yet still failed. That would be more educational. It's too easy to look at the Therac-25 and think "I would never write a mess like that".
wat10000•20m ago
The lesson is not to write a mess like that. It might seem obvious, but it has to be learned.
pjmlp•37m ago
The worst part is that many developers think that by not working with high-integrity systems, such quality levels don't apply to them.

Wrong: any software failure can have huge consequences for someone's life, or for a company, by preventing some critical flow from taking place, corrupting data in someone's personal, professional or medical record, preventing a payment for some specific goods that had to be acquired at that moment or never, ...

michaelt•4h ago
I'd be interested in knowing how many of y'all are being taught about this sort of thing in college ethics/safety/reliability classes.

I was taught about this in engineering school, as part of a general engineering course also covering things like bathtub reliability curves and how to calculate the number of redundant cooling pumps a nuclear power plant needs. But it's a long time since I was in college.

Is this sort of thing still taught to engineers and developers in college these days?

BoxOfRain•3h ago
I was taught about it in university as a computer science undergrad, and I've thought about it often since, as I ended up working in medtech.
wocram•3h ago
This was part of our Systems Engineering class, something like this: https://web.mit.edu/6.033/2014/wwwdocs/assignments/therac25....
aDyslecticCrow•3h ago
I'm too curious, so I made a poll. I for sure wasn't taught it in computer science uni; I only heard about it vaguely online.

https://strawpoll.com/NMnQNX9aAg6

lgeek•3h ago
It was taught in a first-year software ethics class on my Computer Science programme, back in 2010. I'm wondering if they still do.
firesteelrain•17m ago
I was taught Computer Ethics back in the early 2000s as part of my CS degree.
3D30497420•3h ago
I studied design and I wish we'd had a design ethics class, which would have covered instances like this.
rvz•4h ago
We're more likely to get a similar incident like this very quickly if we continue with the cult of 'vibe-coding' and throw basic software engineering principles out of the window, as I said before. [0]

Take this post-mortem here [1] as a great warning; it highlights exactly what could go horribly wrong if the LLM misreads comments.

What's even scarier is that each time I stumble across a freshly minted project on GitHub with a considerable amount of attention, not only is it 99% vibe-coded (very easy to detect), it also completely lacks any tests.

Makes me question whether the user prompting the code in the first place even understands how to write robust and battle-tested software.

[0] https://news.ycombinator.com/item?id=44764689

[1] https://sketch.dev/blog/our-first-outage-from-llm-written-co...

voxadam•2h ago
The idea of 'vibe-coding' safety critical software is beyond terrifying. Timing and safety critical software is hard enough to talk about intelligently, even harder to code, harder yet to audit, and damn near impossible to debug, and all that's without neophyte code monkeys introducing massive black boxes full of poorly understood voodoo to the process.
isopede•4h ago
I strongly believe that we will see an incident akin to Therac-25 in the near future. With as many people running YOLO mode on their agents as there are, Claude or Gemini is going to be hooked up to some real hardware that will end up killing someone.

Personally, I've found even the latest batch of agents fairly poor at embedded systems, and I shudder at the thought of giving them the keys to the kingdom to say... a radiation machine.

the-grump•4h ago
The 737 MAX MCAS debacle was one such failure, albeit involving a wider system failure and not purely software.

Agreed on the future but I think we were headed there regardless.

jonplackett•3h ago
Yeah reading this reminded me a lot of MCAS. Though MCAS was intentionally implemented and intentionally kept secret.
Maxion•3h ago
> Personally, I've found even the latest batch of agents fairly poor at embedded systems

I mean, even in simple CRUD web apps where the data models are more complex and where the same data has multiple structures, the LLMs get confused after the second data transformation (at most).

E.g. You take in data with field created_at, store it as created_on, and send it out to another system as last_modified.
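One way to keep that sort of pipeline honest is a single canonical internal name plus one explicit mapping per boundary, instead of ad-hoc renames scattered through the code. A minimal sketch using the field names from the comment (illustrative only, not any particular framework):

    from dataclasses import dataclass
    from datetime import datetime

    # Internal model: one canonical field name, whatever the peers call it.
    @dataclass
    class Record:
        created_on: datetime

    def from_inbound(payload: dict) -> Record:
        # Boundary 1: the inbound payload calls it `created_at`.
        return Record(created_on=datetime.fromisoformat(payload["created_at"]))

    def to_downstream(record: Record) -> dict:
        # Boundary 2: the downstream system expects `last_modified`.
        return {"last_modified": record.created_on.isoformat()}

    incoming = {"created_at": "2025-08-27T10:00:00"}
    print(to_downstream(from_inbound(incoming)))
    # {'last_modified': '2025-08-27T10:00:00'}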

SCdF•3h ago
The Horizon (UK Post Office accounting software) incident killed multiple subpostmasters through suicide, and bankrupted and destroyed the lives of dozens or hundreds more.

The core takeaway developers should have from Therac-25 is not that this happens just on "really important" software, but that all software is important, and all software can kill, and you need to always care.

hahn-kev•3h ago
From what I've read about that incident I don't know what the devs could have done. The company sure was a problem, but so were the laws basically saying a computer can't be wrong. No dev can solve that problem.
sim7c00•3h ago
As you point out, this was a mess-up on a lot of levels. It's an interesting effect though, not to be dismissed: how your software works, and how it's perceived and trusted, can impact people psychologically.
fuckaj•3h ago
Given whole truth testimony?
V__•1h ago
> Engineers are legally obligated to report unsafe conduct, activities or behaviours of others that could pose a risk to the public or the environment. [1]

If software "engineers" want to be taken seriously, then they should also have the obligation to report unsafe/broken software and refuse to ship unsafe/broken software. The developers are just as much to blame as the post office:

> Fujitsu was aware that Horizon contained software bugs as early as 1999 [2]

[1] https://engineerscanada.ca/news-and-events/news/the-duty-to-...

[2] https://en.wikipedia.org/wiki/British_Post_Office_scandal

maweki•2h ago
But there is still a difference here. Provenance and proper traceability would have allowed the subpostmasters to show their innocence and prove the system fallible.

In the Therac-25 case, the killing was quite immediate, and it would have happened even if the correct radiation dose had been recorded.

scott_w•23m ago
I’m not sure it would. Remember that the prosecutors in this case were outright lying to the courts about the system! When you hit that point, it’s really hard to even get a clean audit trail out in the open any more!
sim7c00•3h ago
Talk to anyone in those industries about 'automation' on medical or critical-infrastructure devices and they will tell you NO. No touching our devices with your rubbish.

I am pretty confident they won't let Claude touch it if they don't even let deterministic automations run...

That being said, maybe there are places. But this is always the sentiment I got: no automating, no scanning, no patching. The device is delivered certified, and any modifications will invalidate that. Any changes need to be validated and certified.

It's a different world than making apps, that's for sure.

Not to say mistakes aren't made and change doesn't happen, but I don't think people designing medical devices will be going yolo mode on their dev cycle anytime soon... give the folks in safety-critical system engineering some credit.

throwaway0261•2h ago
> but i dont think people designing medical devices will be going yolo mode on their dev cycle anytime soon

I don't have the same faith in corporate leadership as you, at least not when they see potentially huge savings by firing some of the expensive developers and using AI to write more of the code.

grues-dinner•2h ago
Non-agentic AI is already "killing" people by some definitions. There's a post about someone being talked into suicide on the front page right now, and they are 100% going to get used for something like health insurance and benefits where avoidable death is a very possible outcome. Self-driving cars are also full of "AI" and definitely have killed people already.

Which is not to say that software hasn't killed people before (Horizon, Boeing, probably loads of industrial accidents and indirect process control failures leading to dangerous products, etc, etc). Hell, there's a suspicion that austerity is at least partly predicated on a buggy Excel spreadsheet, and with about 200k excess deaths in a decade (a decade not including Covid) in one country, even a small fraction of those being laid at the door of software is a lot of Theracs.

AI will probably often skate away from responsibility in the same way that Horizon does: by being far enough removed and with enough murky causality that they can say "well, sure, it was a bug, but them killing themselves isn't our fault"

I also find AI copilot things do not work well with embedded software. Again, people YOLOing embedded isn't new, but it might be about to get worse.

autonomousErwin•4h ago
This reminds me of the 2003 Belgian election that was impossibly skewed by a supernova light years away sending charged particles which (allegedly) managed to get through our atmosphere and flip a bit. Not the only case where it's happened.
jve•3h ago
On the bright side, wow, those computers are really sturdy: takes a whole supernova to just flip a bit :)
kijin•3h ago
Well the thing is, millions of stars go supernova in the observable universe every single day. Throw in the daily gamma ray burst as well, and you've got bit flips all over the place.
haddonist•4h ago
Well There's Your Problem podcast, Episode 121: Therac-25

https://www.youtube.com/watch?v=7EQT1gVsE6I

dpacmittal•2h ago
There's also this video from Kyle Hill which is pretty good (I think it's a different incident though, not sure) - https://www.youtube.com/watch?v=Ap0orGCiou8
voidUpdate•27m ago
My go-tos are usually Fascinating Horror https://www.youtube.com/watch?v=nU5HbUOtyqk and Plainly Difficult https://www.youtube.com/watch?v=-7gVqBY52MY.

I've gone off Kyle Hill after a lot of people pointed out that he was promoting a scam (BetterHelp) on his video about fraud and his response was just to tell people to deal with it

auggierose•4h ago
Wondering if that "one developer" is here on HN.
Forgret•3h ago
Hahaha, it would be interesting; maybe he has even commented on the post here?
mellosouls•3h ago
TIL TheDailyWTF is still active. I'd thought it had settled to greatest hits only some years ago.
greatgib•3h ago
This story is kind of old. But I'm also suspicious that this was AI-generated content, due to this weird paragraph (one developer becoming "they"):

   It's worth noting that there was one developer who wrote all of this code. They left AECL in 1986, and thankfully for them, no one has ever revealed their identity. And while it may be tempting to lay the blame at their feet—they made every technical choice, they coded every bug—it would be wildly unfair to do that.
semv3r•2h ago
Singular "they" has been used since at least the 14th century—was generative AI commonly available then? https://en.wikipedia.org/wiki/Singular_they
edot•2h ago
Isn’t that the pronoun to use when you’re unsure of gender? This article didn’t feel AI-y to me.
pie_flavor•2h ago
'They' is a correct singular form for a person of unknown gender. Modern writing overwhelmingly uses it instead of 'he or she', but it has always been correct, has been predominant for a long time, and furthermore it doesn't have anything to do with AI, nor was AI viable as an authoring tool when this article was written, nor is Remy ever going to sell out. What a bizarre comment.
tbossanova•41m ago
That is 100% standard English, dude. I feel like I might have read that exact sentence 20 years ago...
vemv•3h ago
My (tragically) favorite part is, from wikipedia:

> A commission attributed the primary cause to generally poor software design and development practices, rather than singling out specific coding errors.

Which to me reads as "this entire codebase was so awful that it was bound to fail in some way or other".

rgoulter•2h ago
Hmm. "poor software design" suggests a high risk that something might go wrong; "poor development practice" suggests that mistakes won't get caught/remedied.

By focusing on particular errors, there's the possibility you'll think "problem solved".

By focusing on process, you hope to catch mistakes as early as possible.

rossant•3h ago
The first commenter on this site introduces himself as "a physician who did a computer science degree before medical school." He is now president of the Ray Helfer Society [1], "an honorary society of physicians seeking to provide medical leadership regarding the prevention, diagnosis, treatment and research concerning child abuse and neglect."

While the cause is noble, the medical detection of child abuse faces serious issues with undetected and unacknowledged false positives [2], since ground truth is almost never knowable. The prevailing idea is that certain medical findings are considered proof beyond reasonable doubt of violent abuse, even without witnesses or confessions (denials are extremely common). These beliefs rest on decades of medical literature regarded by many as low quality because of methodological flaws, especially circular reasoning (patients are classified as abuse victims because they show certain medical findings, and then the same findings are found in nearly all those patients—which hardly proves anything [3]).

I raise this point because, while not exactly software bugs, we are now seeing black-box AIs claiming to detect child abuse with supposedly very high accuracy, trained on decades of this flawed data [4, 5]. Flawed data can only produce flawed predictions (garbage in, garbage out). I am deeply concerned that misplaced confidence in medical software will reinforce wrongful determinations of child abuse, including both false positives (unjust allegations potentially leading to termination of parental rights, foster care placements, imprisonment of parents and caretakers) and false negatives (children who remain unprotected from ongoing abuse).

[1] https://hs.memberclicks.net/executive-committee

[2] https://news.ycombinator.com/item?id=37650402

[3] https://pubmed.ncbi.nlm.nih.gov/30146789/

[4] https://rdcu.be/eCE3l

[5] https://www.sciencedirect.com/science/article/pii/S002234682...

elric•3h ago
One of the commenters on the article wrote this:

> Throughout the 80s and 90s there was just a feeling in medicine that computers were dangerous <snip> This is why, when I was a resident in 2002-2006 we still were writing all of our orders and notes on paper.

I was briefly part of an experiment with electronic patient records in an ICU in the early 2000s. My job was to basically babysit the server processing the records in the ICU.

The entire staff hated the system. They hated having to switch to computers (this was many years pre-iPad and similarly sleek tablets) to check and update records. They were very much used to writing medications (what, when, which dose, etc.) onto bedside charts, which were very easy to consult and very easy to update. Any kind of data loss in those records could have fatal consequences. Any delay in getting to the information could be bad.

This was *not* just a case of doctors having unfounded "feelings" that computers were dangerous. Computers were very much more dangerous than pen and paper.

I haven't been involved in that industry since then, and I imagine things have gotten better since, but still worth keeping in mind.

jacquesm•3h ago
Now we have Chipsoft, arguably one of the worst players in the entire IT space that has a near monopoly (around me, anyway) on IT for hospitals. They charge a fortune, produce crap software and the larger they get the less choice there is for the remainder. It is baffling to me that we should be enabling such hostile players.
skinwill•2h ago
Around here we have Epic. If you want a good scare, look up their corporate Willy Wonka-esque jail/campus and their policy of zero remote work.
misja111•2h ago
I worked for them in the early 2000s. There was nothing wrong with the people working there, except for the two founders, a father and son. They were absolutely ruthless. And as is so often the case, that ruthless mentality was what enabled them to gain dominance over the market. I could tell some crazy stories about how they ran the company, but better not, because it might get me sued. But if you understand Dutch, you can read more about them e.g. here: https://www.quotenet.nl/zakelijk/a41239366/chipsoft-gerrit-h...
greazy•2h ago
It's still an issue. I've heard stories of EMR systems going down, forcing staff to use pen and paper. It boggles my mind that such systems don't have redundancy.

These are commercial products being deployed.

elric•2m ago
I have a few pet theories of why software in the medical space is so often shitty and insanely expensive. One of them is that working with doctors is often very unpleasant, which makes building software for them unpleasant, which drives up the price. I mean, some of the ones I worked with were terribly nice, especially the ICU docs and neurologists, but a large majority of them were major aholes.

The other theory is that there are so many bureaucratic hoops to jump through in order to make anything in the medical space that no one does it willingly.

amelius•3h ago
> The Therac-25 was the first entirely software-controlled radiotherapy device.

This says it all.

mdavid626•3h ago
Some sanity checks are always a good idea before running such a destructive action (IF beam_strength > REASONABLY_HIGH_NUMBER THEN error). Of course the UI bug is hard to catch, but the sanity check would have prevented this completely and the machine would just end up in an error state, rather than killing patients.
b_e_n_t_o_n•3h ago
Invariants are so useful to enforce, even for toy projects. They should never be triggered outside of dev, but if they are, sometimes it's better to just let it crash.
bzzzt•3h ago
Making sure the beam is off before crashing would be better though.
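A minimal sketch of that kind of last-line limit check, folding in the point about turning the beam off before bailing out. All names and thresholds here are made up for illustration; nothing below is from the actual Therac-25 code:

    MAX_DOSE_RADS = 200  # illustrative ceiling, not a real clinical value

    class DoseLimitExceeded(Exception):
        pass

    class BeamController:
        """Stub standing in for the real hardware interface."""
        def beam_off(self):
            print("beam off")
        def deliver(self, dose_rads):
            print(f"delivering {dose_rads} rads")

    def fire_beam(requested_dose_rads, hw):
        # Independent sanity check, regardless of what the UI computed upstream.
        if requested_dose_rads > MAX_DOSE_RADS:
            hw.beam_off()             # fail safe first...
            raise DoseLimitExceeded(  # ...then refuse loudly instead of firing
                f"requested {requested_dose_rads} rads exceeds limit {MAX_DOSE_RADS}")
        hw.deliver(requested_dose_rads)

    fire_beam(180, BeamController())      # a normal treatment dose goes through
    # fire_beam(25000, BeamController())  # would shut the beam off and raise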
linohh•3h ago
At my university this case was (and probably still is) the subject of the first lecture in the first semester. A lot to learn here, and one of the prime examples of how the DEPOSE model [Perrow 1984] works for software engineering.
Forgret•3h ago
What surprised me most was that only one developer was working on such an unpredictable technology, whereas I think I'd need at least 5 developers just to be able to discuss options.
throwaway0261•2h ago
One of the benefits of regulations in these areas is that they require proper tests and documentation. This often requires more than one person to handle the load. We don't want to go back to the 80s YOLO mode just because we need to "move faster".

BTW: Relevant XKCD: https://xkcd.com/2347/

mnw21cam•1h ago
Though this XKCD might be even more relevant:

https://xkcd.com/2030/

voxadam•2h ago
(2021)
haunter•2h ago
My "favorite" part:

>One failure occurred when a particular sequence of keystrokes was entered on the VT100 terminal that controlled the PDP-11 computer: If the operator were to press "X" to (erroneously) select 25 MeV photon mode, then use "cursor up" to edit the input to "E" to (correctly) select 25 MeV Electron mode, then "Enter", all within eight seconds of the first keypress and well within the capability of an experienced user of the machine, the edit would not be processed and an overdose could be administered. These edits were not noticed as it would take 8 seconds for startup, so it would go with the default setup

Kinda reminds me how everything is touchscreen nowadays from car interfaces to industry critical software
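The pattern in that quote is a time-of-check/time-of-use race: the slow setup task keeps working from the parameters it read before the operator's edit. A toy sketch of the same shape follows; this is pure illustration and has nothing to do with the real PDP-11 code:

    import threading
    import time

    selected_mode = "X"   # operator's first (erroneous) keystroke: photon mode

    def setup_beam():
        mode = selected_mode   # snapshot taken once, when setup starts
        time.sleep(0.5)        # stands in for the ~8 s magnet setup
        # Bug: the operator's edit during setup is never re-read before firing.
        print(f"firing in mode {mode!r} (current selection is {selected_mode!r})")

    t = threading.Thread(target=setup_beam)
    t.start()
    time.sleep(0.1)
    selected_mode = "E"        # operator corrects to electron mode within 8 s
    t.join()
    # Prints: firing in mode 'X' (current selection is 'E')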

OskarS•2h ago
It's interesting to compare this with the Post Office Scandal in the UK. Very different incidents, but reading this, there is arguably a root assumption in both cases that people made, which is that "the software can't be wrong". For developers, this is a hilariously silly thing, but for non-developers looking at it from the outside, they don't have the capability or training to understand that software can be this fragile. And they look at a situation like the post office scandal and think "Either this piece of software we paid millions for and was developed by a bunch of highly trained engineers is wrong, or these people are just ripping us off". Same thing with Therac-25, this software had worked on previous models and the rest of the company just had this unspoken assumption that it simply wasn't possible that there was anything wrong with it, so testing it specifically wasn't needed.
jwr•2h ago
No, this is not a "hilariously silly thing" for developers. In fact, I'd say that most developers place way too much trust in software.

I am a developer and whatever software system I touch breaks horribly. When my family wants to use an ATM, they tell me to stand at a distance, so that my aura doesn't break things. This is why I will not get into a self-driving car in the foreseeable future — I think we place far too much confidence in these complex software systems. And yet I see that the overwhelming majority of HN readers are not only happy to be beta-testers for this software as participants in road traffic, but also are happy to get in those cars. They are OK with trusting their life to new, complex, poorly understood and poorly tested software systems, in spite of every other software system breaking and falling apart around them.

[anticipating immediate common responses: 1) yes, I know that self-driving car companies claim that their cars are statistically safer than human drivers; this is beside the point here. One, they are "safer" largely because they drive so badly that other road participants pay extra attention and accommodate their weirdness, and two, they are still new, complex and poorly understood systems. 2) "you already trust your life to software systems" — again, beside the point, and not quite true, as many software systems are built to have human supervision and override capability (think airplanes), and others are built to strict engineering requirements (think brakes in cars) while self-driving cars are not built that way.]

pfdietz•27m ago
I wonder if this is a desired outcome of fuzzing, the puncturing of the idea that software doesn't have bugs. This goes all the way back to the very start of fuzzing with Barton Miller's work from ~1990.
brazzy•1h ago
> there is arguably a root assumption in both cases that people made, which is that "the software can't be wrong"

I think in this case, the thought process was based on the experience with older, electro-mechanical machines, where the most common failure mode was parts wearing out.

Since software cannot, indeed, "wear out", someone made the assumption that it was therefore inherently more reliable.

balamatom•20m ago
I think the "software doesn't wear out" assumption is just a conceivable excuse for the underlying "we do not question" assumption. A piece of software can be like a beautiful poem, but the kind of software most people are familiar with is more like a whole lot of small automated bureaucracies.

Bureaucracy being (per Graeber 2006) something like the ritual where by means of a set of pre-fashioned artifacts for each other's sake we all operate at 2% of our normal mental capacities and that's how modern data-driven, conflict-averse societies organize work and distribute resources without anyone being able to have any complaints listened to.

>Bureaucracies public and private appear—for whatever historical reasons—to be organized in such a way as to guarantee that a significant proportion of actors will not be able to perform their tasks as expected. It also exemplifies what I have come to think of the defining feature of a utopian form of practice, in that, on discovering this, those maintaining the system conclude that the problem is not with the system itself but with the inadequacy of the human beings involved.

Most places where a computer system is involved in the administration of a public service or something of that caliber, has that been a grassroots effort, hey computers are cool and awesome let's see what they change? No, it's something that's been imposed in the definitive top-down manner of XX century bureaucracies. Remember the cohort of people who used to become stupid the moment a "thinking machine" was powered within line of sight (before the last uncomputed generation retired and got their excuse to act dumb for the rest of it)? Consider them in view of the literally incomprehensible number of layers that any "serious" piece of software consists of; layers which we're stuck producing more of, when any software professional knows the best kind of software is less of it.

But at least it saves time and the forest, right? Ironically, getting things done in a bureaucratic context with less overhead than filling out paper forms or speaking to human beings, makes them even easier to fuck up. And then there's the useful fiction of "the software did it" that e.g. "AI agents" thing is trying to productize. How about they just give people a liability slider in the spinup form, eh, but nah.

Wanna see a miracle? A miracle is when people hype each other into pretending something impossible happened. To the extent user-operated software is involved in most big-time human activities, the daily miracle is how it seems to work well enough, for people to be able to pretend it works any good at all. Many more than 3 such cases. But of course remembering the catastrophal mistakes of the past can be turned into a quaint fun-time activity. Building things that empower people to make less mistakes, meanwhile, is a little different from building artifacts for non-stop "2% time".

throwaway0261•1h ago
One of the comments said this:

> That standard [IEC 62304] is surrounded by other technical reports and guidances recognized by the FDA, on software risk management, safety cases, software validation. And I can tell you that the FDA is very picky, when they review your software design and testing documentation. For the first version and for every design change.

> That’s good news for all of us. An adverse event like the Therac 25 is very unlikely today.

This is a case where regulation is a good thing. Unfortunately I see a trend lately where almost any regulation is seen as something stopping innovation and business growth. There is room for improvement and some areas are over-regulated, but we don't want a "DOGE" chainsaw taken to regulations without knowing what the consequences are.

softwaredoug•1h ago
Safety problems are almost never about one evil / dumb person, and they frequently involve confusing lines of responsibility.

Which makes me very nervous about AI-generated code and people who don't claim human authorship. A bug that creeps in and gets blamed on the AI as a scapegoat isn't gonna cut it in a safety situation.

0xDEAFBEAD•56m ago
>any bugs we see would have to be transient bugs caused by radiation or hardware errors.

Can't imagine that radiation might be a factor here...

tedggh•28m ago
TL;DR

The Therac-25 was a radiation therapy machine built by Atomic Energy Canada Limited in the 1980s. It was the first to rely entirely on software for safety controls, with no hardware interlocks. Between 1985 and 1987, at least six patients received massive overdoses of radiation, some fatally, due to software flaws.

One major case in March 1986 at the East Texas Cancer Center involved a technician who mistyped the treatment type, corrected it quickly, and started the beam. Because of a race condition, the correction didn’t fully register. Instead of the prescribed 180 rads, the patient was hit with up to 25,000 rads. The machine reported an underdose, so staff didn’t realize the harm until later.

Other hospitals reported similar incidents, but AECL denied overdoses were possible. Their safety analysis assumed software could not fail. When the FDA investigated, AECL couldn’t produce proper test plans and issued crude fixes like telling hospitals to disable the “up arrow” key.

The root problem was not a single bug but the absence of a rigorous process for safety-critical software. AECL relied on old code written by one developer and never built proper testing practices. The scandal eventually pushed regulators to tighten standards. The Therac-25 remains a case study of how poor software processes and organizational blind spots can kill—a warning echoed decades later by failures like the Boeing 737 MAX.

Tenemo•28m ago
The full 1993 report linked in the article has an interesting statement regarding software developer certification in the "Lessons learned" chapter:

> Taking a couple of programming courses or programming a home computer does not qualify anyone to produce safety-critical software. Although certification of software engineers is not yet required, more events like those associated with the Therac-25 will make such certification inevitable. There is activity in Britain to specify required courses for those working on critical software. Any engineer is not automatically qualified to be a software engineer — an extensive program of study and experience is required. Safety-critical software engineering requires training and experience in addition to that required for noncritical software.

After 32 years, this didn't go the way the report's authors expected, right?

firesteelrain•19m ago
To add. Safety-critical software is not something you pick up in a classroom, it is something built over years of disciplined practice. There are standards like DO-178 for avionics and IEC 61508 for industrial systems, but how rigorously they are applied often depends on cost and project constraints. That said, when failures happen, the audit trail will not matter to the people harmed. The history of safety engineering shows that almost every rule exists because someone was hurt first.
armcat•26m ago
Therac-25 was part of the mandatory "computer ethics" course at my uni, as part of the Computer Science programme, circa early 2000s.