frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


Tiny C Compiler

https://bellard.org/tcc/
27•guerrilla•1h ago•10 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
139•valyala•5h ago•23 comments

You Are Here

https://brooker.co.za/blog/2026/02/07/you-are-here.html
15•mltvc•1h ago•9 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
68•zdw•3d ago•28 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
32•gnufx•3h ago•35 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
72•surprisetalk•4h ago•85 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
111•mellosouls•7h ago•214 comments

Italy Railways Sabotaged

https://www.bbc.co.uk/news/articles/czr4rx04xjpo
49•vedantnair•1h ago•29 comments

FDA intends to take action against non-FDA-approved GLP-1 drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
21•randycupertino•31m ago•13 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
150•AlexeyBrin•10h ago•28 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
860•klaussilveira•1d ago•263 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
109•vinhnx•8h ago•14 comments

LLMs as the new high level language

https://federicopereiro.com/llm-high/
7•swah•4d ago•1 comment

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1106•xnx•1d ago•621 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
71•thelok•7h ago•13 comments

First Proof

https://arxiv.org/abs/2602.05192
72•samasblack•7h ago•57 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
17•mbitsnbites•3d ago•1 comment

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
249•jesperordrup•15h ago•82 comments

I write games in C (yes, C) (2016)

https://jonathanwhiting.com/writing/blog/games_in_c/
152•valyala•5h ago•132 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
527•theblazehen•3d ago•196 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
36•momciloo•5h ago•5 comments

Selection rather than prediction

https://voratiq.com/blog/selection-rather-than-prediction/
17•languid-photic•3d ago•5 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
96•onurkanbkrc•10h ago•5 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
203•1vuio0pswjnm7•11h ago•305 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
41•marklit•5d ago•6 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
51•rbanffy•4d ago•13 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
640•nar001•9h ago•280 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
128•videotopia•4d ago•40 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
266•alainrk•9h ago•443 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
38•sandGorgon•2d ago•17 comments

Inferring the Phylogeny of Large Language Models

https://arxiv.org/abs/2404.04671
69•weinzierl•9mo ago

Comments

PunchTornado•9mo ago
An intuitive and expected result (except perhaps the performance prediction). I'm glad somebody did the hard work of proving it.

Though, if this is so clearly visible, how come AI detectors perform so badly?

haltingproblem•9mo ago
It might be because detecting whether output is AI-generated, and mapping output known to come from an LLM to a specific LLM or class of LLMs, are different problems.
Calavar•9mo ago
This experiment involves each LLM responding to 128 or 256 prompts. AI detection is generally focused on determining the writer of a single document, not comparing two analogous sets of 128 documents and determining whether the same person/tool wrote both. Totally different problem.
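That distinction can be sketched with a toy example. This is not the paper's method; the "model outputs" below are invented strings, and the similarity measure is a simple token-set Jaccard overlap, chosen only to show why paired answers to a shared prompt set carry more signal than a single document:

```python
# Toy sketch: comparing two models via their answers to the same
# prompt set. All "model outputs" here are invented stand-ins.

def jaccard(a: str, b: str) -> float:
    """Token-set overlap between two strings, in [0, 1]."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def model_similarity(answers_a, answers_b):
    """Average per-prompt overlap across a shared prompt set."""
    assert len(answers_a) == len(answers_b)
    return sum(jaccard(x, y) for x, y in zip(answers_a, answers_b)) / len(answers_a)

# Invented answers from three hypothetical models to the same 3 prompts:
base     = ["the cat sat on the mat", "water boils at 100 c", "paris is in france"]
finetune = ["the cat sat on a mat", "water boils at 100 c", "paris is located in france"]
other    = ["felines enjoy resting", "boiling point varies with pressure", "france contains paris"]

print(model_similarity(base, finetune))  # high: likely related models
print(model_similarity(base, other))     # low: likely unrelated models
```

A lone document gives you one sample from one side of that comparison; the paired prompt set gives you a per-prompt aligned signal to aggregate over.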
light_hue_1•9mo ago
They're discovering the wrong thing. And the analogy with biology doesn't hold.

They're sensitive not to architecture but to training data. That's like grouping animals by the environment they live in, so that lions and alligators end up closer to one another than lions and cats.

The real trick is to infer the underlying architecture and show the relationships between architectures.

That's not something you can tell easily by just looking at the name of the model. And that would actually be useful. This is pretty useless.

refulgentis•9mo ago
This is provocative, but it's off-base in order to be so: why would we need to work backwards to determine the architecture?

Similarly, "you can tell easily by just looking at the name of the model" is an unfounded assertion. No, you can't. It's perfectly cromulent, accepted, and quite regular to have a fine-tuned model with nothing in its name indicating what it was fine-tuned on. (We can observe the effects of this even without knowing the domain well enough to spot it directly, e.g. Meta making it a requirement with Llama 4 to have it in the name.)

littlestymaar•9mo ago
You are the one making a wrong biological analogy. Architecture isn't comparable to genes any more than training data is, and training data isn't comparable to environment; this kind of analogy brings you nothing but false confidence and misunderstanding.

What they do in the paper, on the other hand, is apply the methods of biology and get a result that is akin to phylogeny: not from a biological analogy, but from a biologically inspired method.