frontpage.

We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
155•ColinWright•1h ago•118 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
27•surprisetalk•1h ago•28 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
122•AlexeyBrin•7h ago•24 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
136•alephnerd•2h ago•91 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
64•vinhnx•5h ago•9 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
830•klaussilveira•22h ago•249 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
57•thelok•4h ago•8 comments

I Write Games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
16•valyala•2h ago•3 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
14•valyala•2h ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
114•1vuio0pswjnm7•8h ago•143 comments

Brookhaven Lab's RHIC Concludes 25-Year Run with Final Collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
4•gnufx•47m ago•1 comment

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1060•xnx•1d ago•611 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
76•onurkanbkrc•7h ago•5 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
485•theblazehen•2d ago•177 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
210•jesperordrup•12h ago•72 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
563•nar001•6h ago•257 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
224•alainrk•6h ago•347 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
7•momciloo•2h ago•0 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
37•rbanffy•4d ago•7 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
8•languid-photic•3d ago•1 comment

History and Timeline of the Proco Rat Pedal (2021)

https://web.archive.org/web/20211030011207/https://thejhsshow.com/articles/history-and-timeline-o...
19•brudgers•5d ago•4 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
29•marklit•5d ago•3 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
114•videotopia•4d ago•31 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
76•speckx•4d ago•80 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
273•isitcontent•22h ago•38 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
201•limoce•4d ago•111 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
287•dmpetrov•22h ago•154 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
22•sandGorgon•2d ago•11 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
155•matheusalmeida•2d ago•48 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
556•todsacerdoti•1d ago•268 comments

Inferring the Phylogeny of Large Language Models

https://arxiv.org/abs/2404.04671
69•weinzierl•9mo ago

Comments

PunchTornado•9mo ago
Intuitive and expected result (maybe except for the performance prediction). I'm glad somebody did the hard work of proving it.

Though, if this is so clearly seen, how come AI detectors perform so badly?

haltingproblem•9mo ago
It might be because detecting whether output is AI-generated, and mapping output that is known to be from an LLM to a specific LLM or class of LLMs, are two different problems.
Calavar•9mo ago
This experiment involves each LLM responding to 128 or 256 prompts. AI detection generally focuses on determining the writer of a single document, not comparing two analogous sets of 128 documents and determining whether the same person/tool wrote both. Totally different problem.
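For illustration only (this is not the paper's code; the toy data and function names are invented), the set-level comparison described above can be sketched as aggregating token statistics over each model's whole response set and then comparing the resulting profiles, rather than classifying any single document:

```python
# Hypothetical sketch: distinguish models by comparing token statistics
# across a *set* of outputs. All "model" data below is made up.
from collections import Counter

def token_profile(outputs):
    """Aggregate normalized token frequencies over a set of responses."""
    counts = Counter()
    for text in outputs:
        counts.update(text.split())
    total = sum(counts.values())
    return {tok: n / total for tok, n in counts.items()}

def profile_distance(p, q):
    """L1 distance between two token-frequency profiles."""
    keys = set(p) | set(q)
    return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

# Toy data: three "models" answering the same prompts.
model_a  = ["the cat sat", "the dog sat"]
model_a2 = ["the cat sat", "the dog ran"]   # close relative of model_a
model_b  = ["a feline rested", "a canine rested"]

pa, pa2, pb = map(token_profile, (model_a, model_a2, model_b))

# Related "models" end up closer in profile space than unrelated ones.
assert profile_distance(pa, pa2) < profile_distance(pa, pb)
```

With many models, the pairwise distance matrix this produces could feed a standard tree-building step (e.g. hierarchical clustering), which is the general shape of a phylogeny-style analysis; single-document AI detection has no such set of samples to aggregate over.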
light_hue_1•9mo ago
They're discovering the wrong thing. And the analogy with biology doesn't hold.

They're sensitive not to architecture but to training data. That's like grouping animals by what environment they lived in, so lions and alligators are closer to one another than lions and cats.

The real trick is to infer the underlying architecture and show the relationships between architectures.

That's not something you can tell easily by just looking at the name of the model. And that would actually be useful. This is pretty useless.

refulgentis•9mo ago
This is provocative, but off-base in order to be so: why would we need to work backwards to determine the architecture?

Similarly, "you can tell easily by just looking at the name of the model" -- that's an unfounded assertion. No, you can't. It's perfectly cromulent, accepted, and quite regular for a fine-tuned model to have nothing in its name indicating what it was fine-tuned from. (We can observe the effects of this even without enough domain familiarity to know it directly, e.g. Meta making it a requirement that Llama 4 derivatives carry "Llama" in the name.)

littlestymaar•9mo ago
You are the one making a wrong biological analogy. Architecture isn't comparable to genes any more than training data is comparable to genes, and training data isn't comparable to environment; drawing these kinds of analogies brings you nothing but false confidence and misunderstanding.

What they do in the paper, on the other hand, is apply the methods of biology and get a result that is akin to phylogeny: not from a biological analogy, but from a biologically inspired method.