
Nitro: A tiny but flexible init system and process supervisor

https://git.vuxu.org/nitro/about/
53•todsacerdoti•2h ago•12 comments

The First Media over QUIC CDN: Cloudflare

https://moq.dev/blog/first-cdn/
64•kixelated•2h ago•44 comments

FFmpeg 8.0

https://ffmpeg.org/index.html#pr8.0
562•gyan•5h ago•148 comments

Should the web platform adopt XSLT 3.0?

https://github.com/whatwg/html/issues/11578
34•protomolecool•3h ago•16 comments

Scientists just found a protein that reverses brain aging

https://www.sciencedaily.com/releases/2025/08/250820000808.htm
96•stevenjgarner•2h ago•47 comments

Sprinkling self-doubt on ChatGPT

https://justin.searls.co/posts/sprinkling-self-doubt-on-chatgpt/
103•ingve•3h ago•59 comments

Launch HN: BlankBio (YC S25) - Making RNA Programmable

26•antichronology•4h ago•13 comments

Show HN: Clyp – Clipboard Manager for Linux

https://github.com/murat-cileli/clyp
50•timeoperator•5h ago•31 comments

Io_uring, kTLS and Rust for zero syscall HTTPS server

https://blog.habets.se/2025/04/io-uring-ktls-and-rust-for-zero-syscall-https-server.html
433•guntars•17h ago•130 comments

LabPlot: Free, open source and cross-platform Data Visualization and Analysis

https://labplot.org/
171•turrini•11h ago•31 comments

Waymo granted permit to begin testing in New York City

https://www.cnbc.com/2025/08/22/waymo-permit-new-york-city-nyc-rides.html
394•achristmascarl•4h ago•363 comments

The issue of anti-cheat on Linux

https://tulach.cc/the-issue-of-anti-cheat-on-linux/
40•todsacerdoti•20h ago•60 comments

Show HN: Pinch – macOS voice translation for real-time conversations

https://www.startpinch.com/
42•christiansafka•2d ago•14 comments

Leaving Gmail for Mailbox.org

https://giuliomagnifico.blog/post/2025-08-18-leaving-gmail/
59•giuliomagnifico•3h ago•85 comments

DeepSeek-v3.1

https://api-docs.deepseek.com/news/news250821
730•wertyk•1d ago•252 comments

DeepSeek v3.1 is not having a moment

https://thezvi.wordpress.com/2025/08/22/deepseek-v3-1-is-not-having-a-moment/
13•speckx•5h ago•0 comments

Does MHz Still Matter?

https://www.ubicloud.com/blog/does-mhz-still-matter
52•furkansahin•6h ago•35 comments

Closing the Nix Gap: From Environments to Packaged Applications for Rust

https://devenv.sh/blog/2025/08/22/closing-the-nix-gap-from-environments-to-packaged-applications-for-rust/
28•domenkozar•5h ago•4 comments

Harper Evolves

https://elijahpotter.dev/articles/harper_evolves
21•chilipepperhott•2h ago•4 comments

Ejabberd 25.08 / ProcessOne – Erlang Jabber/XMPP/Matrix Server – Communication

https://www.process-one.net/blog/ejabberd-25-08/
9•neustradamus•46m ago•0 comments

What about using rel="share-url" to expose sharing intents?

https://shkspr.mobi/blog/2025/08/what-about-using-relshare-url-to-expose-sharing-intents/
69•edent•9h ago•30 comments

Build Log: Macintosh Classic

https://www.jeffgeerling.com/blog/2025/build-log-macintosh-classic
29•speckx•6h ago•8 comments

Launch HN: Inconvo (YC S23) – AI agents for customer-facing analytics

30•ogham•8h ago•19 comments

Making LLMs Cheaper and Better via Performance-Efficiency Optimized Routing

https://arxiv.org/abs/2508.12631
87•omarsar•6h ago•19 comments

Control shopping cart wheels with your phone (2021)

https://www.begaydocrime.com/
255•mystraline•20h ago•119 comments

Everything is correlated (2014–23)

https://gwern.net/everything
225•gmays•19h ago•103 comments

It’s not wrong that "\u{1F926}\u{1F3FC}\u200D\u2642\uFE0F".length == 7 (2019)

https://hsivonen.fi/string-length/
133•program•14h ago•183 comments

VHS-C: When a lazy idea stumbles towards perfection [video]

https://www.youtube.com/watch?v=HFYWHeBhYbM
172•surprisetalk•4d ago•96 comments

A guide to Gen AI / LLM vibecoding for expert programmers

https://www.stochasticlifestyle.com/a-guide-to-gen-ai-llm-vibecoding-for-expert-programmers/
105•ChrisRackauckas•6h ago•95 comments

The Minecraft Code (2024) [video]

https://www.youtube.com/watch?v=nz2LeXwJOyI
46•zichy•13h ago•61 comments

Launch HN: BlankBio (YC S25) - Making RNA Programmable

25•antichronology•4h ago
Hey HN, we're Phil, Ian and Jonny, and we're building BlankBio (https://blank.bio). We're training RNA foundation models to power a computational toolkit for therapeutics. The first application is in mRNA design where our vision is for any biologist to design an effective therapeutic sequence (https://www.youtube.com/watch?v=ZgI7WJ1SygI).

BlankBio started from our PhD work in this area, which is open-sourced. There’s a model [2] and a benchmark with API access [0].

mRNA has the potential to encode vaccines, gene therapies, and cancer treatments. Yet designing effective mRNA remains a bottleneck. Today, scientists design mRNA by manually editing sequences (AUGCGUAC...) and testing the results through trial and error. It's like writing assembly code and managing individual memory addresses. The field is flooded with capital aimed at therapeutics companies: Strand ($153M), Orna ($221M), Sail Biomedicines ($440M), but the tooling to approach these problems remains low-level. That’s what we’re aiming to solve.

The big problem is that mRNA sequences are incomprehensible. They encode properties like half-life (how long RNA survives in cells) and translation efficiency (protein output), but we don't know how to optimize them. To get effective treatments, we need more precision. Scientists need sequences that target specific cell types to reduce dosage and side effects.

We envision a future where RNA designers operate at a higher level of abstraction. Imagine code like this:

  seq = "AUGCAUGCAUGC..."
  seq = BB.half_life(seq, target="6 hours")
  seq = BB.cell_type(seq, target="hepatocytes")
  seq = BB.expression(seq, level="high")
To get there we need generalizable RNA embeddings from pre-trained models. During our PhDs, Ian and I worked on self-supervised learning (SSL) objectives for RNA. This approach allows us to train on unlabeled data and has two advantages: (1) we don't require noisy experimental data, and (2) the amount of unlabeled data is significantly greater than the amount of labeled data. However, the challenge is that standard NLP approaches don't work well on genomic sequences.

Using joint-embedding architectures (contrastive learning), we trained a model to recognize functionally similar sequences rather than predict every nucleotide. This worked remarkably well. Our 10M-parameter model, Orthrus, trained on 4 GPUs for 14 hours, beats Evo2, a 40B-parameter model trained on 1000 GPUs for a month [0]. On mRNA half-life prediction, just by fitting a linear regression on our embeddings, we outperform supervised models. This work, done during our academic days, is the foundation for what we're building. We're improving training algorithms, growing the pre-training dataset, and making use of parameter scaling, with the goal of designing effective mRNA therapeutics.
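
The contrastive setup can be sketched with a toy example. Below, `embed` is a stand-in encoder that hashes k-mer counts through a random projection (the real model is a trained network; nothing here reflects the actual Orthrus architecture), and `info_nce` is a standard InfoNCE-style objective that pulls each sequence toward its functionally similar partner and away from the rest of the batch. All names and numbers are illustrative.

```python
import numpy as np

def embed(seq, dim=16, seed=0):
    # Toy stand-in encoder: random projection of 3-mer counts, unit-normalized.
    # A real model would be a trained neural network.
    rng = np.random.default_rng(seed)
    proj = rng.normal(size=(4 ** 3, dim))
    idx = {c: i for i, c in enumerate("ACGU")}
    counts = np.zeros(4 ** 3)
    for i in range(len(seq) - 2):
        a, b, c = (idx[x] for x in seq[i:i + 3])
        counts[a * 16 + b * 4 + c] += 1
    v = counts @ proj
    return v / (np.linalg.norm(v) + 1e-8)

def info_nce(anchors, positives, temperature=0.1):
    # InfoNCE: each anchor should be most similar to its own positive
    # (a functionally related sequence) among all positives in the batch.
    sims = anchors @ positives.T / temperature         # (B, B) similarity matrix
    logits = sims - sims.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                # loss over correct pairs
```

Minimizing this loss over many such pairs is what makes the embeddings reflect function rather than raw nucleotide identity.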

We have a lot to say about why other SSL approaches work better than next-token prediction and masked language modeling, some of which you can check out in Ian's blog post [1] and our paper [2]. The big takeaway is that the current approach of applying NLP scaling recipes to biological sequences won't get us all the way there. 90% of the genome can mutate without affecting fitness, so training models to predict this noisy sequence results in suboptimal embeddings [3].

We think there are strong parallels between the digital and RNA revolutions. In the early days of computing, programmers wrote assembly code, managing registers and memory addresses directly. Today's RNA designers are manually tweaking sequences, improving stability or reducing immunogenicity through trial and error. Just as compilers freed programmers from low-level details, we're building the abstraction layer for RNA.

We currently have pilots with a few early-stage biotechs proving out the utility of our embeddings, and our open-source model is used by folks at Sanofi & GSK. We're looking for: (1) partners working on RNA-adjacent modalities, (2) feedback from anyone who's tried to design RNA sequences (what were your pain points?), and (3) ideas for other applications! We chatted with some biomarker-providing companies, and some preliminary analyses demonstrate improved stratification.

Thanks for reading. Happy to answer questions about the technical approach, why genomics is different from language, or anything else.

- Phil, Ian, and Jonny

founders@blankbio.com

[0] mRNABench: https://www.biorxiv.org/content/10.1101/2025.07.05.662870v1

[1] Ian’s Blog on Scaling: https://quietflamingo.substack.com/p/scaling-is-dead-long-li...

[2] Orthrus: https://www.biorxiv.org/content/10.1101/2024.10.10.617658v3

[3] Zoonomia: https://www.science.org/doi/10.1126/science.abn3943

Comments

anyg•3h ago
How are the RNA sequences used? Are there any clinical trials running?
antichronology•2h ago
There are a number of different technologies. Some of the big ones are:

- mRNA therapies: These therapies deliver a synthetically created messenger RNA (mRNA) molecule, typically protected within a lipid nanoparticle (LNP), to a patient's cells. The cell's own machinery then uses this mRNA as a temporary blueprint to produce a specific protein.

The big example here is CAR-T therapy from Capstan, which just got acquired for $2.1B. Their asset, CPTX2309, is currently in Phase 1. Previously, to do CAR-T therapy you had to extract a patient's T-cells and genetically engineer them in a special facility. Now the mRNA gets delivered directly to the patient's T-cells, which significantly lowers the cost and technical hurdles.

- RNA interference (RNAi): Used for gene-expression knockdown through natural cellular mechanisms that evolved for viral defense. The big example here is Alnylam, with 5 approved therapies and a number more in clinical trials.

- Antisense Oligonucleotides (ASOs): Short, single-stranded RNA molecules that get delivered directly to the cell and target an existing mRNA. The big win here is Spinraza, the first approved treatment for Spinal Muscular Atrophy (SMA), which previously had no treatment. The Spinraza clinical trial (ENDEAR) was so effective that continuing it was deemed unethical, because the control arm wasn't receiving the treatment. Prior to Spinraza, most patients would pass away before two years of age.

tennysont•2h ago
Fun to see talk of "a compiler for DNA"---I've been hoping for that for a long time.

I have to admit, at a _glance_ this feels like a promising idea with few results and lots of marketing. I'll try to be clear about my confusion, feel free to explain if I'm off base.

- There's not a lot of talk of your "ground truth" for evaluations. Are you using mRNABench?

- Has your mRNABench paper been peer reviewed? You linked a preprint. (I know paper submission can be tough or stressful, and it's a superficial metric to be judged on!)

- Do any of your results suggest that this foundation model might be any good on out-of-distribution mRNA sequences? If not, then is the (current) model supposed to predict properties of natural mRNA sequences rather than of synthetic mRNA sequences?

- Did a lot of the mRNA sequences have experimental verification of their predicted properties? At a quick glance, I see this 66 number in the paper---but I truly have no idea.

I'm super happy to praise both incremental progress and putting forth a vision, I just also want to have a clear understanding of the current state-of-the-art as well!

antichronology•2h ago
> ground truth

Hey yes, the ground truth for our evaluations is measured experimental data. Our models are benchmarked using mRNABench, which aggregates results from high-throughput wet lab experiments.

Our goal, however, is to move beyond predicting existing experimental outcomes. We intend to design novel sequences and validate their function in our own lab. At that stage, the functional success of the RNA we design will become the ground truth.

> peer reviewed?

Both mRNABench and Orthrus are in submission (at a big ML conference and a big-name journal). Unfortunately the academic system moves slowly, but we're working on getting them out there.

> synthetic mRNA sequences

I think you're asking about generalizing out of distribution to unnatural sequences. There are two ways that we do this: (1) there are screens called Massively Parallel Reporter Assays (MPRAs), and we evaluate, for example, on https://pubmed.ncbi.nlm.nih.gov/31267113/

Here all the sequences are synthetic and randomly designed and we do observe generalization. Ultimately it depends on the problem that we're tackling: some tasks like gene therapy design require endogenous sequences.

(2) The other angle is variant effect prediction (VEP). It can be thought of as a counterfactual prediction problem where you ask the model whether a small change in the input predicts a large change in the output. This is a good example of the study (https://www.biorxiv.org/content/10.1101/2025.02.11.637758v2)
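
The counterfactual framing can be illustrated concretely: score a variant by how far its embedding moves away from the wild-type embedding. The `kmer_embed` encoder below is a toy random-projection stand-in for a trained model, and the cosine-distance score is an illustrative choice, not the method used in the linked study.

```python
import numpy as np

def kmer_embed(seq, k=3, dim=8, seed=0):
    # Toy stand-in for a learned RNA encoder: random projection of
    # k-mer counts, unit-normalized so dot products behave like cosines.
    rng = np.random.default_rng(seed)
    proj = rng.normal(size=(4 ** k, dim))
    idx = {c: i for i, c in enumerate("ACGU")}
    counts = np.zeros(4 ** k)
    for i in range(len(seq) - k + 1):
        code = 0
        for c in seq[i:i + k]:
            code = code * 4 + idx[c]
        counts[code] += 1
    v = counts @ proj
    return v / (np.linalg.norm(v) + 1e-8)

def variant_effect_score(wt, var, embed=kmer_embed):
    # Cosine distance between wild-type and variant embeddings:
    # a larger shift means the model treats the change as more
    # functionally important.
    return 1.0 - float(embed(wt) @ embed(var))
```

A score near zero means the model considers the variant functionally neutral; in a real VEP setup these scores are compared against experimentally measured variant effects.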

> experimental verification of their predicted properties

All our model evaluations are predictions of experimental results! The datasets we use are collections of wet lab measurements, so the model is constantly benchmarked against ground-truth biology.

The evaluation method involves fitting a linear probe on the model's learned embeddings to predict the experimental signal. This directly tests whether the model's learned representation of an RNA sequence contains a linear combination of features that can predict its measured biological properties.
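
A linear probe of this kind amounts to ridge regression on frozen embeddings. Here's a minimal numpy sketch; in practice the embeddings would come from the model and the labels from a wet-lab dataset, so everything below (shapes, regularization strength) is illustrative.

```python
import numpy as np

def fit_linear_probe(embeddings, labels, l2=1e-3):
    # Ridge regression on frozen embeddings: tests whether the learned
    # representation linearly encodes the measured property.
    X = np.hstack([embeddings, np.ones((len(embeddings), 1))])  # bias column
    w = np.linalg.solve(X.T @ X + l2 * np.eye(X.shape[1]), X.T @ labels)
    return w

def probe_predict(w, embeddings):
    X = np.hstack([embeddings, np.ones((len(embeddings), 1))])
    return X @ w
```

Because the probe has no hidden layers, good probe performance is evidence about the embedding itself rather than about a downstream model's capacity.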

Thanks for the feedback; I understand the caution around pre-prints. We believe a self-supervised learning approach is well-suited for this problem because it allows the model to first learn patterns from millions of unlabeled sequences before being fine-tuned on specific, and often smaller, experimental datasets.

andy99•6m ago
> mRNABench

Just curious: in other areas of ML, I think it's widely acknowledged that benchmarks have pretty limited real-world value, just end up getting saturated, and (my view) are all pretty correlated regardless of their ostensible speciality, and don't really tell you that much.

Do you think mRNABench is different, or where do you see the limitations? Do you imagine this or any benchmark will be useful for anything beyond comparing how different models do on the benchmark?

mfld•2h ago
Maybe another application could be the ranking of candidate variants for cancer immunotherapy? As far as I know, lncRNAs are sometimes assessed.
antichronology•29m ago
We haven't looked into this deeply yet, but it sounds interesting. Do you have any resources on where to start looking at this? Feel free to reach out to us:

founders@blankbio.com

carlsborg•2h ago
Cool. Could we train a "potential oncoprotein" classifier on Orthrus embeddings? IMO self-serve diagnosis and detection is a far larger market than synthesis.
antichronology•36m ago
This is a really interesting direction. There is a big field of cell-free RNA (cfRNA) cancer detection. We talked to a few people in the field and think that embedding sequences for this direction could be really valuable. One challenge here is that it's hard to set up evaluation tasks since the public data is scarce.
westurner•1h ago
The other day I paired an article on pyroptosis caused by marine spongiibacter exopolysaccharide and an mRNA Cancer vaccine article. I started to just forward the article on bacterially-induced pyroptosis to the cancer vaccine researchers but stopped to ask an LLM whether the approaches shared common pathways or mechanisms of action and - fish my wish - they are somehow similar and I had asked a very important question that broaches a very active area of research.

How would your AI solution help with finding natural analogs of or alternatives to or foils of mRNA procedures?

westurner•1h ago
Can EPS3.9 cause pyroptosis cause IFN-I cause epitope spreading for cancer treatment?

Re: "Sensitization of tumours to immunotherapy by boosting early type-I interferon responses enables epitope spreading" (2025) https://www.nature.com/articles/s41551-025-01380-1

How is this relevant to mRNA vaccines?:

"Ocean Sugar Makes Cancer Cells Explode" (2025) https://scitechdaily.com/ocean-sugar-makes-cancer-cells-expl... ... “A Novel Exopolysaccharide, Highly Prevalent in Marine Spongiibacter, Triggers Pyroptosis to Exhibit Potent Anticancer Effects” (2025) DOI: 10.1096/fj.202500412R https://faseb.onlinelibrary.wiley.com/doi/10.1096/fj.2025004...

forgotpwagain•1h ago
I am totally onboard with the premise (as a TechBio-adjacent person), and some of the approaches you're taking (focused domain-specific models like Orthrus, rather than massive foundation models like Evo2).

I'm curious about what your strategy is for data collection to fuel improved algorithmic design. Are you building out experimental capacity to generate datasets in house, or is that largely farmed out to partners?

antichronology•30m ago
We think that Orthrus can be applied in a bunch of ways to non-coding and coding RNA sequences, but it's definitely fair that we're currently a bit more focused on RNA sequences than on non-coding parts of the genome like promoters and intergenic sequences.

For the data: Orthrus is trained on non-experimentally-collected data, so our pre-training dataset is large by biological standards. It adds up to about 45 million unique sequences; assuming ~1k tokens per sequence, that's about 50B tokens.

We're thinking about this as a large pre-training run on a bunch of annotation data from RefSeq and GENCODE, in conjunction with more specialized orthology datasets that pool data across hundreds of species.

Then for specific applications we are fine-tuning or doing linear probing for experimental prediction. For example, we can predict half-life using publicly available data collected by this awesome paper: https://genomebiology.biomedcentral.com/articles/10.1186/s13...

Or translation efficiency: https://pubmed.ncbi.nlm.nih.gov/39149337/

Eventually, as we ramp up our wet lab data generation, we're thinking about what post-training looks like. There is an RL analog here that we can use on these generalizable embeddings to learn from "high quality samples".

There are some early attempts at post-training in bio and I think it's a really exciting direction