frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
529•klaussilveira•9h ago•146 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
859•xnx•15h ago•518 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
72•matheusalmeida•1d ago•13 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
180•isitcontent•9h ago•21 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
182•dmpetrov•10h ago•79 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
294•vecti•11h ago•130 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
69•quibono•4d ago•12 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
343•aktau•16h ago•168 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
338•ostacke•15h ago•90 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
434•todsacerdoti•17h ago•226 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
237•eljojo•12h ago•147 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
13•romes•4d ago•2 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
373•lstoll•16h ago•252 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
6•videotopia•3d ago•0 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
41•kmm•4d ago•3 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
14•denuoweb•1d ago•2 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
220•i5heu•12h ago•162 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
91•SerCe•5h ago•75 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
62•phreda4•9h ago•11 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
162•limoce•3d ago•82 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
38•gfortaine•7h ago•10 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
127•vmatsiiako•14h ago•53 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
18•gmays•4h ago•2 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
261•surprisetalk•3d ago•35 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1029•cdrnsf•19h ago•428 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
55•rescrv•17h ago•18 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
83•antves•1d ago•60 comments

WebView performance significantly slower than PWA

https://issues.chromium.org/issues/40817676
18•denysonique•6h ago•2 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faste and better

https://github.com/dmtrKovalenko/zlob
5•neogoose•2h ago•1 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
109•ray__•6h ago•54 comments
Open in hackernews

Launch HN: Parachute (YC S25) – Guardrails for Clinical AI

62•ariavikram•5mo ago
Hi HN, Aria and Tony here, co-founders of Parachute (https://www.parachute-ai.com/). We’re building governance infrastructure that lets hospitals safely evaluate and monitor clinical AI at scale.

Hospitals are racing to adopt AI. More than 2,000 clinical AI tools hit the U.S. market last year - from ambient scribes to imaging models. But new regulations (HTI-1, Colorado AI Act, California SB 3030, White House AI Action Plan) require auditable proof that these models are safe, fair, and continuously monitored.

The problem is, most hospital IT teams can’t keep up. They can’t vet every vendor, run stress tests, and monitor models 24/7. As a result, promising tools die in pilot hell while risk exposure grows.

We saw this firsthand while deploying AI at Columbia University Irving Medical Center, so we built Parachute. Columbia is now using it to track live AI models in production.

How it works: First, Parachute evaluates vendors against a hospital’s clinical needs and flags compliance and security risks before a pilot even begins. Next, we run automated benchmarking and red-teaming to stress test each model and uncover risks like hallucinations, bias, or safety gaps.

Once a model is deployed, Parachute continuously monitors its accuracy, drift, bias, and uptime, sending alerts the moment thresholds are breached. Finally, every approval, test, and runtime change is sealed into an immutable audit trail that hospitals can hand directly to regulators and auditors.

We’d love to hear from anyone with hospital experience who has an interest in deploying AI safely. We look forward to your comments!

Comments

iamgopal•5mo ago
Are you guys using AI to check on AI ?
j4coh•5mo ago
Yes but don’t worry there’s another YC startup who is building AI to check on AI that’s checking on AI.
nextworddev•5mo ago
Gold
ariavikram•5mo ago
Like I said above, we don’t use AI agents to grade other models. Instead, we run in-house evaluations tailored to each category of clinical AI, giving hospitals an apples-to-apples comparison between similar vendors.
iamgopal•5mo ago
How are you going to protect AI which optimise against your tests instead of actual data ?
jph•5mo ago
Congratulations Aria & Tony, this is much needed for healthcare. I work at UK NHS Wales in software engineering, and would be happy to talk with you personally, and also happy to introduce you to the NHS Wales AI team. Peronal email joel@joelparkerhenderson.com, work email joel.henderson@wales.nhs.uk. And we're hiring-- if anyone here is keen to code for social good healthcare, email me.
ariavikram•5mo ago
Thanks! Would love to see how we can help!
padolsey•5mo ago
Hi jph! Not intending to subvert the thread, but I'd love to chat to someone like you. The non-profit I work at has been working on democratizing evals. This wouldn't be to ensure your in-house AI is up to scratch (parachute looks ideal!), but on ensuring the general landscape of models is up-to-date on best practice, e.g. NICE and other guidance, so that everyday model users aren't misled. Such a demo eval is here: https://weval.org/analysis/uk-clinical-scenarios/08278696ca2...

We're looking for domain experts especially in high risk domains like healthcare, education, therapy. Then we'd work together co-authoring an eval in your specialism to expose and motivate AI labs to do better.

robertlagrant•5mo ago
Just to say - I was impressed chatting to someone in and around NHS Wales software a few years ago when I was leading the development side of a health app. I seem to remember there were some good plans in place to join up patient health records across the Welsh trusts, which sounded very sensible.
potatoman22•5mo ago
This is cool, but I’m a little skeptical. If Parachute uses AI agents to evaluate other models, who’s evaluating the AI agents? It’s hard to imagine it’s safe to entrust model validation and bias assessments to an automated system, especially in healthcare. Validating clinical AI is pretty complex between finding the right data, ensure event timings are accurate, simulating the model, etc. That’s why I’m guessing Parachute is a little less automated than the landing page makes it out to be, which is maybe a good thing. Regardless, this is cool. Hope you make AI in healthcare more safe.
znxnnxnx•5mo ago
My friend, you are overthinking! The funding round just came in and its smooth sailing ahead. Well for the next six months.
fehudakjf•5mo ago
"mummmble murmble ummbble... but that can|will be easily fixed|addressed|solved by future models"
siva7•5mo ago
I wouldn't touch a YC company for this use case. All the marketing from the landing pages is just that - blabla.
jstummbillig•5mo ago
This line of thinking always leaves me confused about other peoples experience in the Pre-AI world. People and systems around me fail all the time because evaluation fails. Yes, the failure modes are different but I don't consider them favorable without AI. In fact, I consider that they are better.

For example, consider what happens in this video: https://www.youtube.com/watch?v=AZhCYisIQB8&t=2s

Please don't make this mistake of thinking "aha, but you see, a human intervened!" This will never happen in the real world for the vast majority of humans in a similar scenario.

potatoman22•5mo ago
I'm afraid I don't quite understand your point. What line of thinking are you referencing? Also risk scores and algorithms have been used in medicine for over 50 years, so evaluating them isn't anything new.
ariavikram•5mo ago
That’s a great point. We don’t use AI agents to grade other models. Instead, we run in-house evaluations tailored to each category of clinical AI, giving hospitals an apples-to-apples comparison between similar vendors.
padolsey•5mo ago
> This is cool, but I’m a little skeptical. If Parachute uses AI agents to evaluate other models, who’s evaluating the AI agents?

Usually you can run human-in-the-loop spot checks to ensure that there's parity between your LLM evaluators and the equivalent specialist human evaluator.

pizzathyme•5mo ago
This is exactly the type of company that I like to see: - Sounds very complicated/thorny to navigate (regulatory, medical, compliance) - Not super "sexy", which keeps competition lower - Clear pain points (fines) for customers that can and are willing to pay (hospitals)

Next up is just great execution by you all!

That list of logos you all have - are those paying customers today?

Best of luck!

creata•5mo ago
> That list of logos you all have - are those paying customers today?

Doesn't look like it. The first list of logos is standards bodies. The second list of logos is integrations.

ariavikram•5mo ago
That's correct!
seriusam•5mo ago
How did you get the numbers on your landing page? It looks like an AI generated product with AI generated "safety". Just like the "2000 clinical AI tools" that hit the US market, this looks like one of the "2000 governance tools" that hit the market. How are you vetting every AI Scribe tool so your product itself isn't biased ? Have you done any work with the companies you have listed in your landing page? It looks like a governance tool that the "trying-to-be" scribe companies would use to not get legit audits.
ariavikram•5mo ago
Thanks for your question.

We use in-house evals (based on existing state-of-the-art benchmarks) to compare ambient scribes.

If you take a deeper look into the companies on our landing page, you will see that the first list refers to the compliance standards our workflows follow and the second refers to the existing tools we integrate with.

seriusam•5mo ago
Hi,

> We use in-house evals (based on existing state-of-the-art benchmarks) to compare ambient scribes.

Have you validated that your in-house evals accurately reflect real-world performance?

> If you take a deeper look into the companies on our landing page, you will see that the first list refers to the compliance standards our workflows follow and the second refers to the existing tools we integrate with.

I am talking about your use of Abridge Ambient Scribe, Nuance and Deepscribe brands in your landing page. You have numbers on the number of beds, hourly efficiency and their costs next to the actual brands. I don't see any proper attributions or disclaimers.

Also, if you were to compare actual numbers you get from the websites, these companies can use different models for different users, have enterprise discounts for different organizations and etc. How are you planning on having access to these to make a proper comparisons to see what they would actually offer to their potential customer?

Fwiw, I am a fan of the "AI marketplace" runs. This one just raises a lot of questions for me.

But, good luck!

richwater•5mo ago
> auditable proof that these models are safe, fair

Impossible to deliver

sgt•5mo ago
Acceptance criteria has entered the chat.
potatoman22•5mo ago
Evidence for safe and fair AI systems is possible as long as you define what "safe" and "fair" mean for your usecase. Fairness might look like "no cohort has >5% higher false positive rate than another" and safety might mean "the model must have a false negative rate of less than 15%". Safety more so encompasses the workflows around the model, including human intervention, auditing, monitoring, etc.

Here's a good overview of fairness: https://learn.microsoft.com/en-us/azure/machine-learning/con... and there's plenty of papers discussing how to safely use predictive analytics and AI in healthcare.

I don't know if this product can give proof for safe and fair ML systems, but it's not impossible to use these things safely and fairly.

ariavikram•5mo ago
Thanks for your question. Parachute's workflows are built around widely accepted conventions for safety and fairness for AI models in healthcare such as NIST AI RMF, CHAI and HAIP's HEAAL framework.
padolsey•5mo ago
You're right; we should not even try. Better to have 0% compliance coverage and your honor than 90% coverage as best-effort.
cactca•5mo ago
These are extraordinary claims for a rapidly evolving field with a huge breadth of intended uses and technologies.

Here are a few questions that should be part of an evaluation of the Parachute platform to pressure test the claims made on the website and this post: 1) How many Parachute customers have passed regulatory audits by CMS, OCR, CLIA/CLAP, and the FDA? 2) What high quality peer-reviewed scientific evidence supports the claims of increased safety and detection of hallucinations and bias? 3) What liability does Parachute assume during production deployment? What are the SLAs? 4) How many years of regulatory experience does the team have with HIPPA, ISO, CFR, FDA, CMS, and state medical board compliance?

tony-yamin•5mo ago
This is a good question. Parachute is not a certification body so we do not help you pass audits. Instead we help with internal decision-making surrounding the implementation of AI tools. We also help hospitals keep track of why decisions we made and who made them to produce to regulators and litigators in the future. Parachute does not assume any liability during production deployment.
fehudakjf•5mo ago
Where are your promises or goals of addressing the fear that these large language model medical paperwork assistants will be implanting subtle time bombs into their reports.

We've all seen how powerful language can be in legal defenses surrounding the for profit healthcare industry of the united states.

What new "pre-existing conditions" alike thought, and legal argument, terminating phrases will these large language models come up with for future generations?

nradov•5mo ago
I don't understand your comment. What sort of time bombs do you mean?
shandrodo•5mo ago
I suppose “It is difficult to get a man to understand something” and all, but I’ll try to help you understand.

The OP provided you one such “time bomb”: pre-existing condition. This was, 40 years ago, a totally innocuous phrase and then it became a rally cry of health insurers’ “delay,deny,defend” modus operandi.

If a large language model is taking notes for a doctor how will you defend against it slipping in phrases such as this to allow insurers to avoid their responsibilities?

Tell me how your product is designed to defend people from health insurers, or admit how your product is designed to help health insurers.

nradov•5mo ago
Your comment is a non sequitur. This is a policy issue, not something that clinical AI or monitoring thereof can solve. The Affordable Care Act (Obamacare) prevents health plans from charging more or denying coverage based on pre-existing conditions.

https://www.hhs.gov/healthcare/about-the-aca/pre-existing-co...

HIPAA also allows individuals to request amendments to their medical records if there are errors such as an incorrect diagnosis.

https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-...

zmmmmm•5mo ago
> Parachute evaluates vendors against a hospital’s clinical needs and flags compliance and security risks before a pilot even begins

this is humans? I'm really not sure how this could be automated given the vast spectrum of applications and specific requirements complex organisations like hospitals have. It would have to boil down to "check box" compliance style analysis which in my experience usually leads to poor outcomes down the track (the worst product from every other point of view gets chosen because it checks the most arbitrary boxes on the security / compliance forms - then the integration bill dwarfs whatever it would have cost to address most of those things bespoke anyway).