
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
623•klaussilveira•12h ago•182 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
924•xnx•18h ago•548 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
32•helloplanets•4d ago•24 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
109•matheusalmeida•1d ago•27 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
9•kaonwarb•3d ago•7 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
40•videotopia•4d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
219•isitcontent•12h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
209•dmpetrov•13h ago•103 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
320•vecti•15h ago•142 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
369•ostacke•18h ago•94 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
357•aktau•19h ago•181 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
477•todsacerdoti•20h ago•232 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
272•eljojo•15h ago•160 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
402•lstoll•19h ago•271 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
85•quibono•4d ago•20 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
14•jesperordrup•2h ago•6 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
25•romes•4d ago•3 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
56•kmm•5d ago•3 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
12•bikenaga•3d ago•2 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
243•i5heu•15h ago•187 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
52•gfortaine•10h ago•21 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
139•vmatsiiako•17h ago•62 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
280•surprisetalk•3d ago•37 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1058•cdrnsf•22h ago•433 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
131•SerCe•8h ago•117 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
70•phreda4•12h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
28•gmays•7h ago•10 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
176•limoce•3d ago•96 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
63•rescrv•20h ago•22 comments

WebView performance significantly slower than PWA

https://issues.chromium.org/issues/40817676
31•denysonique•9h ago•6 comments

Biomni: A General-Purpose Biomedical AI Agent

https://github.com/snap-stanford/Biomni
222•GavCo•7mo ago

Comments

freedomben•7mo ago
Awesome! This is the type of stuff I'm most excited about with AI - improvements to medical research and capabilities. AI can be awesome at identifying patterns in data that humans can't, and there has to be troves of data out there full of patterns that we aren't catching.

Of course there's also the possibility of engineering new drugs/treatments and things, which is also super exciting.

panabee•7mo ago
Agreed. There is deep potential for ML in healthcare. We need more contributors advancing research in this space. One opportunity as people look around: many priors merit reconsideration.

For instance, genomic data that may seem identical may not actually be identical. In classic biological representations (FASTA), canonical cytosine and methylated cytosine are both collapsed into the letter "C" even though differences may spur differential gene expression.

What's the optimal tokenization algorithm and architecture for genomic models? How about protein binding prediction? Unclear!
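The FASTA point above can be made concrete with a toy encoder. This is purely an illustrative sketch: the extended alphabet and the lowercase "m" symbol for 5-methylcytosine are made-up conventions, not any standard.

```python
# Toy sketch: encode a DNA sequence while preserving methylation state,
# instead of collapsing 5mC into plain "C" as classic FASTA does.
# The "m" symbol for methylated cytosine is an illustrative convention.

VOCAB = {"A": 0, "C": 1, "G": 2, "T": 3, "m": 4}  # "m" = 5-methylcytosine

def encode(seq: str) -> list[int]:
    """Map each base to an integer token; methylated C stays distinct."""
    return [VOCAB[base] for base in seq]

def collapse(seq: str) -> str:
    """The classic FASTA view: methylation information is lost."""
    return seq.replace("m", "C")

seq = "ACmGT"
print(encode(seq))    # methylation-aware tokens
print(collapse(seq))  # both cytosines become indistinguishable "C"
```

Once the information is collapsed, no downstream model can recover it, which is why the representation choice itself is an open question.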

There are so many open questions in biomedical ML.

The openness-impact ratio is arguably as high in biomedicine as anywhere else: if you help answer some of these questions, you could save lives.

Hopefully, awesome frameworks like this lower barriers and attract more people.

govideo•7mo ago
I'd love to hear more of your thoughts re open questions in biomedical ML. You sound like you have a crisp, nuanced grasp of the landscape, which is rare. That would be very helpful to me, as a CS undergrad (with bio) trying to crystallize research to pursue in bio/ML/GenAI.

Thank you.

panabee•7mo ago
Thanks, but no one truly understands biomedicine, let alone biomedical ML.

Feynman's quote -- "A scientist is never certain" -- is apt for biomedical ML.

Context: imagine the human body as the most devilish operating system ever: 10b+ lines of code (more than merely genomics), tight coupling everywhere, zero comments. Oh, and one faulty line may cause death.

Are you more interested in data, ML, or biology (e.g., predicting cancerous mutations or drug toxicology)?

Biomedical data underlies everything and may be the easiest starting point because it's so bad/limited.

We had to pay Stanford doctors to annotate QA questions because existing datasets were so unreliable. (MCQ dataset partially released, full release coming).

For ML, MedGemma from Google DeepMind is open and at the frontier.

Biology mostly requires publishing, but still there are ways to help.

After sharing preferences, I can offer a more targeted path.

govideo•7mo ago
ML first, then Bio and Data. Of course, interconnectedness runs high (e.g., I just read about ML for non-random missingness in med records), and data is the foundational bottleneck/need across the board.

Interesting anecdote about the Stanford doctors annotating QA questions!

Each of your comments gets my mind going... I'm going to think about them more and may ping you on other channels, per your profile. Thanks!

panabee•7mo ago
More like alarming anecdote. :) Google did a wonderful job relabeling MedQA, a core benchmark, but even they missed some (e.g., question 448 in the test set remains wrong according to Stanford doctors).

For ML, start with MedGemma. It's a great family. 4B is tiny and easy to experiment with. Pick an area and try finetuning.

Note the new image encoder, MedSigLIP, which leverages another cool Google model, SigLIP. It's unclear if MedSigLIP is the right approach (open question!), but it's innovative and worth studying for newcomers. Follow Lucas Beyer, SigLIP's senior author and now at Meta. He'll drop tons of computer vision knowledge (and entertaining takes).

For bio, read 10 papers in a domain of passion (e.g., lung cancer). If you (or AI) can't find one biased/outdated assumption or method, I'll gift a $20 Starbucks gift card. (Ping on Twitter.) This matters because data is downstream of study design, and of course models are downstream of data.

Starbucks offer open to up to three people.

AIorNot•7mo ago
very cool - passed it on to my friend who is working in a CRISPR lab
Edmond•7mo ago
This is nice, a lot of possibilities regarding AI use for scientific research.

There is also the possibility of building intelligent workspaces that could prove useful in aiding scientific research:

https://news.ycombinator.com/item?id=44509078

SalmoShalazar•7mo ago
Not to take away from this or its usefulness (not my intent), but it is wild to me how many pieces of software of this type are being developed. We’re seeing endless waves of specialized wrappers around LLM API calls. There’s very little innovation happening beyond specializing around particular niches and invoking LLMs in slightly different ways with carefully directed context and prompts.
gronky_•7mo ago
I see it a bit differently - LLMs are an incredible innovation but it’s hard to do anything useful with them without the right wrapper.

A good wrapper has deep domain knowledge baked into it, combined with automation and expert use of the LLM.

It maybe isn’t super innovative, but it’s a bit of an art form and unlocks the utility of the underlying LLM.

mrlongroots•7mo ago
Exactly.

To present a potential usecase: there's a ridiculous and massive backlog in the Indian judicial system. LLMs can be let loose on the entire workflow: triage cases (simple, complicated, intractable, grouped by legal principles or parties), pull up related caselaw, provide recommendations, throw more LLMs and more reasoning at unclear problems. You can't do this with just a desktop and ChatGPT; you need a systemic pipeline of LLM-driven workflows, but doing that unlocks potentially billions of dollars of value that is otherwise elusive.

lawlessone•7mo ago
>pull up related caselaw

Or just make some up...

mrlongroots•7mo ago
At the token layer an LLM can make things up, but not as part of a structured pipeline that validates an invariant that all suggestions are valid entities in the database.

Can google search hallucinate webpages?
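A minimal sketch of that invariant, with hypothetical names (`KNOWN_CASES`, `validate_citations`): every case an LLM suggests is checked against the database, and anything unknown is rejected rather than passed along.

```python
# Sketch of the "all suggestions are valid entities" invariant.
# KNOWN_CASES stands in for a real caselaw database; all names here
# are hypothetical.

KNOWN_CASES = {
    "Kesavananda Bharati v. State of Kerala",
    "Maneka Gandhi v. Union of India",
}

def validate_citations(suggested: list[str]) -> tuple[list[str], list[str]]:
    """Split LLM-suggested citations into verified and rejected lists."""
    verified = [c for c in suggested if c in KNOWN_CASES]
    rejected = [c for c in suggested if c not in KNOWN_CASES]
    return verified, rejected

good, bad = validate_citations([
    "Maneka Gandhi v. Union of India",
    "Totally Invented v. Hallucinated",  # caught here, never reaches a filing
])
```

The LLM can still invent text at the token layer; the pipeline simply refuses to let an unverified entity through.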

slacktivism123•7mo ago
>doing that unlocks potentially billions of dollars of value that is otherwise elusive

What's more, it unlocks potentially new additions to the 206 legal cases where generative AI produced hallucinated (fake) content.

https://www.damiencharlotin.com/hallucinations/

tedy1996•7mo ago
How is something that can't admit it doesn't know, and that hallucinates, a good innovation?
knowaveragejoe•7mo ago
Modern LLMs frequently do state that they "don't know", for what it's worth. Like everything, it highly depends on the question.
okdood64•7mo ago
> We’re seeing endless waves of specialized wrappers around LLM API calls.

AFAIK, doing proper RAG is much, much more than this.

What's your technical background if you don't mind me asking?

SalmoShalazar•7mo ago
I’m a software engineer in the biotech space. I haven’t worked with RAG though, maybe I’m underestimating the complexity.
agpagpws•7mo ago
I work at a top three lab. RAG is just Mumbai magic. Throwaway. Hi dang.
jjtheblunt•7mo ago
What is a top three lab?
zachthewf•7mo ago
We know they don't work at OpenAI or Anthropic, but beyond that have no information
epistasis•7mo ago
The application of a new technology to new fields always looks like this. SQL databases become widespread, there's a wave of specialized software development for business practices. The internet becomes widespread, and there's a wave of SaaS solving specialized use cases.

We are going to see the same for anything that Claude or similar can't handle out of the box.

mlboss•7mo ago
By that argument every SaaS is a db wrapper
goda90•7mo ago
Think of it this way: before the internal combustion engine, people used animal power, steam power, human power, wind power, etc. to move cargo, passengers, and even specialized loads like water pumps for the fire brigade. Then with internal combustion they did those things faster and at greater scale. That wasn't innovating on the ICE itself, or solving new problems. But it was still useful. Of course they also eventually did innovate on the ICE, and they solved new problems with it (heavier-than-air flight, for example), but it took a while.
ImaCake•7mo ago
I suspect it's jumping on the hype train, especially since it's from a big uni. Funding in research is all about marketing and latching onto the right keywords (just like VC, really), so the most successful researchers are those who can market themselves effectively. Whether this tool is actually any good is secondary to whether it achieves the real goal of securing future funding for its author.
andy99•7mo ago
I'm sure they've thought of this but curious how it fared on evaluations for supporting biological threats, ie elevating threat actor capabilities with respect to making biological weapons.

I'm personally sceptical that LLMs can currently do this (and it's based on Claude that does test this) but still interesting to see.

greazy•7mo ago
Creating a biological weapon requires a whole bunch of unique and specialised skills, equipment, safety measures (so you don't infect/kill yourself/your people), and even multidisciplinary skill sets. Take, for example, the Kameido (Japan) incident by the Aum Shinrikyo cult/religious group [1], the same group that committed the Sarin attack [2].

> The use of an attenuated B. anthracis strain, low spore concentrations, ineffective dispersal, a clogged spray device, and inactivation of the spores by sunlight are all likely contributing factors to the lack of human cases.

Now you may say: that's bacteria, what about viruses? A similar set of problems would arise: how do you successfully grow virus to high titers? Even vaccine companies struggle to do this with certain viruses. Then the issues of dispersal, infectivity, and mortality arise (too quick and it kills the host without spreading, and authorities will notice; too slow, same problem: authorities will notice). I haven't even mentioned biological engineering, which requires years of technical knowledge and laboratory experience combined with an intimate knowledge of the organism you're working with.

What worries me the most is nature springing a new influenza subtype. Our farming practices, especially in developing countries, are bound to breed a new subtype. It happened in 2009 (H1N1pdm) and it is bound to happen again. We got lucky with H1N1pdm.

1. https://pmc.ncbi.nlm.nih.gov/articles/PMC3322761/ 2. https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack

spwa4•7mo ago
> Creating a biological weapon requires a whole bunch of unique and specialised skills, equipment, safety measures

I just tell some investors our god tells me to do that.

> too quick, it kills the host without spreading and authorities will notice, too slow, same problem: authorities will notice

The current authorities are Trump's authorities and don't believe in vaccines, have said in an interview Covid is either a Jewish or Chinese conspiracy (and "they made themselves immune"), and that the "disease epidemic" needs to end (this last one is easy to misunderstand. Kennedy Jr. doesn't believe any particular disease is an epidemic. Epidemics of that sort don't exist according to him. That people believe they get sick, THAT is the epidemic that must be stopped)

deepdarkforest•7mo ago
Interesting. It's just an agent loop with access to python exec and web search as standard, BUT with premade, curated, 150 tools like analyze_circular_dichroism_spectra, with very specific params that just execute a hardcoded python function. Also with easy to load databases that conform to the tools' standards.

The argument is that if you just ask claude code to do niche biomed tasks, it will not have the knowledge to do it like that by just searching pubmed and doing RAG on the fly, which is fair, given the current gen of LLMs. It's an interesting approach; they show some generalization in the paper (with well-known, tidy datasets), but real-life data is messier, and the approach here (correct me if I'm wrong) is to identify the correct tool for a task, then use the generic python exec tool to shape the data into the acceptable format if needed, try the tool, and go again.

It would be useful to use the tools just as guidance to inform a generic code agent imo, but executing the "verified" hardcoded tools narrows the error scope: as long as you can check your data is shaped correctly, the analysis will be correct. Not sure how much of an advantage this is in the long term for working with proprietary datasets, but it's an interesting direction.

epistasis•7mo ago
This is great, I've been on the waitlist for their website for a while and am now excited to be able to try it out!
teenvan_1995•7mo ago
I wonder if giving it 150+ tools is really a good idea considering context limitations. Need to check whether this works IRL.
Herring•7mo ago
There's an inner ToolRetriever, which is an LLM call that selects the most relevant tools/data/libraries.
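The selection step can be sketched roughly as follows. Biomni's actual ToolRetriever uses an LLM call; this stand-in uses plain word overlap so it stays self-contained, and the tool descriptions (other than `analyze_circular_dichroism_spectra`, mentioned upthread) are invented.

```python
# Rough stand-in for a ToolRetriever: pick the k registered tools whose
# descriptions best match the task. The real thing uses an LLM call;
# word overlap is used here only to keep the sketch self-contained.

TOOLS = {
    "analyze_circular_dichroism_spectra": "estimate protein secondary structure from cd spectra",
    "align_sequences": "pairwise alignment of dna or protein sequences",
    "plot_dose_response": "fit and plot a dose-response curve",
}

def retrieve_tools(task: str, k: int = 2) -> list[str]:
    """Return the k tool names whose descriptions share the most words with the task."""
    task_words = set(task.lower().split())
    scored = sorted(
        TOOLS,
        key=lambda name: len(task_words & set(TOOLS[name].split())),
        reverse=True,
    )
    return scored[:k]

print(retrieve_tools("fit a dose-response curve for this compound", k=1))
```

Only the retrieved subset is put in the agent's context, which is how a 150+ tool catalog can coexist with a bounded context window.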
dmezzetti•7mo ago
Very interesting work!

If biomedical research and paper analysis are of interest to you, I've been working on a set of open source projects that enable RAG over medical literature for a while.

PaperAI: https://github.com/neuml/paperai

PaperETL: https://github.com/neuml/paperetl

There is also this tool that annotates papers inline.

AnnotateAI: https://github.com/neuml/annotateai

joelthelion•7mo ago
This is really cool, but I think the big question is whether it works and whether it's useful to a professional.

Is there anyone in the field who could comment on this?

monadoid•7mo ago
that's definitely a big question, but I don't think it's the big question. This is 100% progress, and it's standalone cool.
i000•7mo ago
If utility/value is not "the" big question, what is?
dbcooper•7mo ago
Anyone have a spare invite?