AI uncovers 38 vulnerabilities in largest open source medical record software

https://aisle.com/blog/aisle-discovers-38-critical-security-vulnerabilities-in-healthcare-software-used-by-100000-providers
58•mmsc•1h ago

Comments

simonw•52m ago
"The values passed to _sort were concatenated directly into SQL ORDER BY clauses with no validation" - sounds to me like this project had some low-hanging fruit!

Looks like every single one of the 38 vulnerabilities was either SQL injection, XSS, path traversal, or "Insecure Direct Object Reference", aka failing to check the caller was allowed to access the record.

This is actually a pretty good example of the value of AI security scanners: even really strong development teams still occasionally let bugs like this slip through, and having an AI scanner that can spot them feels worthwhile to me.
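
To picture the _sort pattern quoted above, here is a minimal sketch in Python (OpenEMR itself is PHP; the table and column names here are made up). ORDER BY identifiers can't be bound as query parameters in most drivers, so the usual fix is an allow-list:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE patients (id INTEGER, name TEXT, dob TEXT)")

    # Vulnerable: the caller-supplied sort key is spliced into the SQL text.
    # An attacker can smuggle in arbitrary expressions, e.g. a CASE or a
    # subquery that leaks data through the row ordering (blind injection).
    def list_patients_unsafe(sort):
        return conn.execute(f"SELECT id, name FROM patients ORDER BY {sort}").fetchall()

    # Safer: since ORDER BY identifiers can't be bound as parameters,
    # validate the input against an allow-list of real column names.
    ALLOWED_SORT = {"id", "name", "dob"}

    def list_patients(sort):
        if sort not in ALLOWED_SORT:
            raise ValueError(f"invalid sort column: {sort!r}")
        return conn.execute(f"SELECT id, name FROM patients ORDER BY {sort}").fetchall()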

Taters91•45m ago
These kinds of checks were available without AI.
sheikhnbake•32m ago
Math is doable without a calculator
happytoexplain•27m ago
The headline is "AI uncovers...", implying that the standard static analyzers used by basically everybody didn't catch them.
serf•8m ago
isn't this just sort of a chicken-or-egg question?

if an AI uses static analyzers to do the work, is it the tool or the AI?

if AI is using grep to do the work, is it the AI or grep?

I mean essentially all agent work boils down to "cat or grep?"

positron26•16m ago
Was the human labor available?
RA_Fisher•10m ago
AI gives us a means of leverage. We can do more with less. production = f(labor, capital, technology) + eps
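
One standard concrete instance of that production function (a textbook choice on my part, not something specified above) is Cobb-Douglas, where technology enters as a multiplier:

    % Cobb-Douglas instance of production = f(labor, capital, technology) + eps
    % A = total factor productivity (the technology term),
    % K = capital, L = labor, alpha/beta = output elasticities
    Y = A \, K^{\alpha} L^{\beta} + \varepsilon

Raising A lifts output for the same K and L, which is the "do more with less" leverage being described.
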
hilariously•44m ago
Honestly, those all sound like things common linters could find, like string concatenation.
gowld•43m ago
I think SQL injection detectors were pretty mature even before the "AI" version?
EGreg•39m ago
“even really strong development teams”

One would think a single really strong developer, let alone a team, would look for interpolation in strings fed to an RDBMS?

srveale•33m ago
And yet here we are
tmoertel•39m ago
> Looks like every single one of the 38 vulnerabilities was either SQL injection, XSS, path traversal, or "Insecure Direct Object Reference", aka failing to check the caller was allowed to access the record.

Seems like code review against a checklist of the most common vulnerabilities would have prevented these problems. So I guess there are two takeaways here:

First, AI scanners are useful for catching security problems your team has overlooked.

Second, maintaining a checklist of the most-common vulnerabilities and using it during code review is likely not only to prevent most of the problems that AI is likely to catch, but also to show your development team many of their security blind spots at review time and teach them how to shine light on those areas. That is, the team learns how to avoid creating those security mistakes in the first place.

dgb23•34m ago
But by not having a checklist, you avoid having your blind spots exposed.
tmoertel•27m ago
Why would you want to prevent your development team from learning about their blind spots?
capiki•28m ago
What about having the checklist and having an AI tool use it to catch things at review time (or even development time)?
tmoertel•22m ago
Having AI tools do the review against the checklist would probably prevent the problems. However, it would probably be substantially inferior as a teaching tool for your team. The exercise of having reviewers hunt the checklisted vulnerabilities for themselves is what develops the mental muscles needed to understand the vulnerabilities in depth and avoid them when designing and writing future code.

But, yes, I'd augment any manual review with a checklist and AI review as a final step. If the AI catches any problems then, your reviewers will be primed to think about why they overlooked them.

simonw•28m ago
Yes, absolutely. A team with a strong code review culture that incorporates security review against common exploits ideally wouldn't end up with holes like this.
ulimn•9m ago
Checking for OWASP Top 10 items during code review is usually a mid-level dev interview question IME. It's nothing new. Teams don't have to come up with these; these things exist.
gchamonlive•33m ago
Isn't this something SonarQube catches?
webXL•20m ago
Yes. Isn't this something code review catches? :)
positron26•13m ago
Presuming there is an infinite pool of programmers who tirelessly work for a low price?
gchamonlive•4m ago
[delayed]
camdenreslink•32m ago
I don't think strong development teams are still letting SQL injection vulnerabilities through by manually concatenating strings to build queries with user-provided data. Not in the year 2026.
simonw•31m ago
Good frameworks can protect against SQL injection and XSS (through default escaping of output variables) but protecting against insecure direct object access is a lot harder.
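
To make that contrast concrete, a minimal sketch (hypothetical names throughout, nothing from OpenEMR): a framework can escape output and bind SQL parameters on your behalf, but only the application knows which caller may see which record.

    from dataclasses import dataclass, field

    @dataclass
    class Record:
        id: int
        patient_id: int
        body: str

    @dataclass
    class User:
        id: int
        accessible_patients: set = field(default_factory=set)

    RECORDS = {1: Record(id=1, patient_id=42, body="chart notes")}

    # The IDOR failure mode: the handler trusts the id from the request and
    # never asks whether this caller may see that patient's record.
    def get_record_unsafe(record_id):
        return RECORDS[record_id]

    # The fix is an explicit authorization check tied to the caller; this is
    # business logic no framework can infer for you.
    def get_record(user, record_id):
        record = RECORDS.get(record_id)
        if record is None or record.patient_id not in user.accessible_patients:
            raise PermissionError("caller may not access this record")
        return record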
voxic11•11m ago
Keep in mind this project is a 25-year-old PHP application.
gostsamo•24m ago
> This is actually a pretty good example of the value of AI security scanners

Are you fuckin' serious? This would have been caught by any self-respecting scanner even 5 years ago, and by most educated juniors even earlier.

I use AI every day, but I'm not deep enough in the delulu to believe that everything above two brain cells should be a transformer.

dflock•48m ago
No one knows how many vulnerabilities there are in closed source medical record software - because we can't check. There are _probably_ loads though, because that medical software is super terrible in every way that we _can_ check.
oatmeal1•26m ago
Or voting machines.
0xdeadbeefbabe•23m ago
SQL injection and XSS come up in dynamic analysis too.
0xdeadbeefbabe•45m ago
Also the attackers may become hypochondriacs after reading too much medical stuff.
0123456789ABCDE•43m ago
something i am missing in this area is education and services.

if, during an automated code review, claude finds a vulnerability in a dependency, where should i direct it to share the findings?

who would be willing to take the slop-report, and validate it?

i've never done vulnerability disclosure, yet, with opus at max effort, i have found some security issues in popular frameworks/libraries i depend on.

a proper report can't be one pass, it has to validate it's a real problem, but ask opus to do that and you run the risk of the api refusing the request, endangering your account status. you ask it to do it anyway and write a report, and now you're burning tokens on a report that's likely to be ignored, because slop.

so i sit on this, and hope it doesn't hit me.

0123456789ABCDE•34m ago
i'd be happy to use an official skill for vulnerability reporting

the skill would be manually triggered when vulnerabilities are found; do another pass for details (version, files, lines), then write a lightweight report and submit it somewhere. anthropic could host this, or work with h1 to do that. when the models have extra capacity, a process comes around and picks up these reports one by one, does another check, maybe with a proof-of-concept, and reports through proper channels.

hedgehog•32m ago
It often takes a strong understanding of the upstream codebase and roadmap to write a good patch. It's easy enough to write a rough PoC and draft patch, but getting all the way through the cycle takes up a bunch of time both from you and from the maintainers (who are often already overloaded). My advice would be to draft a bunch privately, take one of the highest-impact ones all the way through to a deployed fix, and then plan based on what you learn. Some people's answer is to maintain private forks with automated fixes applied, with a periodic rebase on upstream.
0123456789ABCDE•13m ago
i'm well aware that a pull-request with a fix is a lot of work. i don't pretend to have the capacity to do this, with all the rest i have to attend to.

it just doesn't sit well with me that i'm aware of something being broken and am not telling someone who would otherwise want to know about it.

giancarlostoro•42m ago
I've said it a few times, and I will keep saying it, especially for the anti-AI crowd. Sure, you don't want it to write your code; fine, not bothered at all. But reviewing your code for serious security flaws, and enhancing security audits? You definitely want AI there. I foresee that over the next few years we will see all sorts of companies, sites, and critical infrastructure being hacked. Heck, we're already seeing more and more of this. It's not going to end very well. If your company is sleeping on its cyber security, tomorrow isn't when you want to deal with it; get on it before you have to.

I say this purely as a Software Engineer, not a security expert, but you have to consider hackers can, are, and will use AI against you.

The Mexican government was hacked by people using Claude[0]; this apparently hit many government systems and services, with PII for everyone in the country in those systems. Even if Claude somehow "patches" this, there are so many open source models out there, and they get better every day. I've seen people fully reverse engineer programs, disassembling the original binaries into compilable code in the original programming language, Claude happily churning until it is fully translated, compiles, and runs.

Whatever your thoughts on AI are, if you aren't at least considering it for security auditing (or to enhance security auditing), you are asleep at the wheel, just waiting to be hacked by some teenage skiddie with AI.

[0]: https://news.ycombinator.com/item?id=47280739

captainkrtek•38m ago
Amen to this.

I've bounced back and forth on my feelings for AI and have landed in the realm of:

- there are certain things it is exceptional at that humans cannot replicate.
- there are certain things I do not want to use it for.

And review falls squarely in that first category. Similarly, it is exceptional at working through "low hanging fruit" type problems such as spotting inefficiencies, analyzing a profile to find flaws in software, etc.

dgb23•41m ago
It seems to me that this sort of work is actually a very fitting use case for LLM agents and the like, because they can be trained and tuned to find commonly known vulnerability patterns.

Here, something that merely looks like the problem is a strong signal, as long as the probability is high enough to be useful.

Remember Netflix's Chaos Monkey?

Exoristos•38m ago
Now do Epic.
jjwiseman•36m ago
The Aisle "the moat is the system, not the model" blog post comparing Mythos' results to their system's was misleading, and seemed to be an attempt to ride the coattails of attention on Mythos. It was of low enough quality that I'd want to see more details of exactly how these vulnerabilities were found.
mbesto•31m ago
What's probably WAY worse than this is that most healthcare providers running OpenEMR are likely on older versions where CVEs have already been published.
doctorpangloss•24m ago
Nobody uses OpenEMR. No chance. They are lying about their numbers.
ranger_danger•29m ago
> used by over 100,000 medical providers serving more than 200 million patients across 34 languages

Interesting... I have been working with many different EHR platforms across the country for the last 15 years, and I have never heard of OpenEMR before, or of any open-source platform for that matter.

rustyhancock•25m ago
I think we'll see a lot more of this (and it's a good thing).

Automation doesn't usually replace humans; it just raises the floor.

I.e. nearly all of these bugs (most bugs in general?) will be spotted quickly by a trained eye. But it's hard to get trained eyes on code all the time. AI will catch all the low-hanging fruit.

What's great about this is that it seems mostly low-hanging, i.e. even basic AI will help people patch holes.

hereme888•19m ago
OpenEMR? Used by some missionary doctor in remote Afghanistan?
motoxpro•19m ago
A better headline would be "AI finds mistakes made by humans". It's not that it's doing something novel; every single person in this thread has made mistakes, and big ones, not because we aren't trying, it just happens. AI helps find some mistakes: not all, not every time, not without effort, not without slop/false positives, just some mistakes. That's a very good thing.
demorro•12m ago
Completely normal and expected.

People thinking that this isn't the case everywhere need a reality check. Most software is riddled with obvious security issues. If we can remediate them with AI, great, but don't think this is something we could only have dealt with using AI. Enough attention and prioritization of these issues would also have sorted it.

Ask yourself if we weren't currently in an era of AI-focus and AI was just another boring tool, if we would be bothering to do this sort of thing. Loads of us still aren't bothering with basic static analysis.

muglug•9m ago
Most of these vulnerabilities could have been discovered much earlier had the same security researchers pointed a SAST tool at the codebase.

I wrote an OSS PHP SAST tool 6 years ago, but it's suffered from industry neglect — most people only care about security after an incident, and PHP has enough magical behaviour that any tool needs to be tuned to how specific repositories behave.

I agree there's a big opportunity for LLMs to take this work forward, filling in for a lack of human expertise.
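
For a sense of how shallow the most blatant cases are, here is a toy single-rule scanner in Python (a crude line-level grep, nothing like the cross-function taint tracking a real SAST tool does, and PHP's dynamic behaviour is exactly why line-level rules aren't enough):

    import re
    import sys

    # Toy one-rule scanner: flag PHP-style string concatenation of a variable
    # into an ORDER BY clause, e.g.  $sql = "... ORDER BY " . $_GET['sort'];
    # Real SAST tools track tainted data across functions; this catches only
    # the most blatant single-line cases.
    RULE = re.compile(r'ORDER\s+BY\s*["\']\s*\.\s*\$\w+', re.IGNORECASE)

    def scan(path):
        with open(path, encoding="utf-8", errors="replace") as f:
            for lineno, line in enumerate(f, start=1):
                if RULE.search(line):
                    print(f"{path}:{lineno}: possible SQL injection: {line.strip()}")

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            scan(path)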

zuzululu•7m ago
This is the new trend that keeps me awake at night: adversaries now have access to off-the-books inference, and they will be able to scan pretty much any widely used open source project and discover and exploit zero-days. I think making a project closed source offers a bit more security, but that only buys time, since it is possible to reverse engineer closed software with current closed-source models with extreme ease.

If you are sufficiently funded, you could benefit from the flip side of discovery. But it looks bleak if you are the sole maintainer of a large project that is a dependency in many deployed instances, with no revenue or donations, and with nobody digging deep enough to care or to spend the inference (would your company spend money on extra inference for this? more often than not, no). With this playing out on both sides of the fence, we are going to see massive disruptions across the board.

Cybersecurity is becoming a proof-of-work of sorts, and the race is on. There might be an unknown number of zero-days being silently discovered and deployed, which likely has an impact on the economics too, thus making access far more widespread.

I do wonder if this means our tech stacks will go back to being as boring and simple as possible... you wouldn't hack a static html website being served by nginx, would you?

Universal Transformers Need Memory: Depth-State Trade-Offs in Adaptive Recursive

https://arxiv.org/abs/2604.21999
1•che_shr_cat•1m ago•0 comments

Show HN: Art Coding Lab – Learn Creative Coding Through Micro Challenges

https://artcodinglab.com/
1•absurdwebsite•2m ago•0 comments

GraphCompose – declarative PDF layout engine for Java (MIT)

https://github.com/DemchaAV/GraphCompose
1•demchaav•3m ago•0 comments

Show HN: I built a dating SIM that prepares you for your date

https://claude.ai/public/artifacts/98750067-546b-4c9e-ab62-68cae2941329
1•danish00111•6m ago•0 comments

Study Finds a Third of New Websites Are AI-Generated

https://www.404media.co/study-finds-a-third-of-new-websites-are-ai-generated/
2•Brajeshwar•9m ago•0 comments

GB Electricity Bills

https://www.electricitybills.uk/
2•kieranmaine•9m ago•1 comments

OpenAI Models, Codex, and Managed Agents Come to AWS

https://openai.com/index/openai-on-aws/
3•meetpateltech•9m ago•0 comments

Show HN: PastePlop – yet another Mac clipboard manager

https://bendansby.com/apps/pasteplop.html
1•webwielder2•12m ago•0 comments

Warp is now Open-Source

https://github.com/warpdotdev/warp
1•doppp•13m ago•0 comments

Nvidia Nemotron 3 Nano Omni

https://developer.nvidia.com/blog/nvidia-nemotron-3-nano-omni-powers-multimodal-agent-reasoning-i...
1•qainsights•13m ago•0 comments

Tridimensional Visualization of a Blackbird Song [video]

https://www.youtube.com/watch?v=EgWMo4BrKBs
1•vinnyglennon•14m ago•0 comments

Ask HN: What do you check before launching a web app?

1•pagelensai•14m ago•0 comments

Show HN: How to become an Anti-founder, THE MANUAL

https://manual.cochranblock.org
1•cochranblock•14m ago•0 comments

Biggest US airlines spent $1.2B more on fuel in Q1

https://sherwood.news/business/the-6-biggest-us-airlines-spent-1-2-billion-more-on-fuel-in-q1-and...
2•speckx•15m ago•0 comments

Our Uncertain Uncertainties

https://kevinkelly.substack.com/p/our-uncertain-uncertainties
2•nowflux•15m ago•0 comments

You're the Bread in the AI Sandwich

https://every.to/context-window/you-re-the-bread-in-the-ai-sandwich
2•gmays•15m ago•0 comments

The Download: Musk and Altman's legal showdown, and AI's profit problem

https://www.technologyreview.com/2026/04/28/1136479/the-download-musk-altman-openai-trial-ai-prof...
1•joozio•15m ago•0 comments

From GitHub to Codeberg/Forgejo

https://www.jonashietala.se/blog/2026/04/28/from_github_to_codebergforgejo/
1•lawn•16m ago•0 comments

Doofioso (2006)

https://scottaaronson.blog/?p=75
1•Tomte•17m ago•0 comments

Remote Code Execution on Github with a single Git push

https://twitter.com/wiz_io/status/2049153209982140718
1•ramonga•17m ago•0 comments

The Royal Game (2020)

https://codemetas.de/2020/11/22/The-Royal-Game.html
1•tosh•19m ago•0 comments

Humpback whale 'Timmy' being transported towards ocean

https://www.dw.com/en/germany-stranded-whale-timmy-being-transported-towards-ocean-in-special-bar...
1•Tomte•20m ago•0 comments

How to Acquire a Country: A Thought Experiment

https://flyingsolo.bearblog.dev/how-to-acquire-a-country/
1•ankitdce•21m ago•0 comments

SXSW Used AI-Powered Trademark Tool to Censor Dissent on Instagram

https://www.404media.co/sxsw-used-ai-powered-trademark-tool-to-censor-dissent-on-instagram/
1•cdrnsf•23m ago•0 comments

Building a Fast Multilingual OCR Model with Synthetic Data

https://huggingface.co/blog/nvidia/nemotron-ocr-v2
2•ibobev•25m ago•0 comments

DeepSeek-V4: a million-token context that agents can use

https://huggingface.co/blog/deepseekv4
2•ibobev•26m ago•0 comments

An Aristotelian understanding of object-oriented programming

https://dl.acm.org/doi/10.1145/353171.353194
2•b-man•26m ago•0 comments

Adaptive Ultrasound Imaging with Physics

https://huggingface.co/blog/nvidia/raw2insights-adaptive-ultrasound-imaging
1•ibobev•26m ago•0 comments

General Motors says it expects $500M tariff refund after SCOTUS ruling

https://abcnews.com/Business/general-motors-expects-500-million-tariff-refund-after/story?id=1324...
2•testing22321•28m ago•0 comments

AI's Economics Don't Make Sense

https://www.wheresyoured.at/ais-economics-dont-make-sense-ad-free/
3•speckx•28m ago•0 comments