
SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
85•valyala•4h ago•16 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
23•gnufx•2h ago•14 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
35•zdw•3d ago•4 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
89•mellosouls•6h ago•166 comments

I write games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
131•valyala•4h ago•99 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
47•surprisetalk•3h ago•52 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
143•AlexeyBrin•9h ago•26 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
96•vinhnx•7h ago•13 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
850•klaussilveira•23h ago•256 comments

First Proof

https://arxiv.org/abs/2602.05192
66•samasblack•6h ago•51 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1092•xnx•1d ago•618 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
64•thelok•5h ago•9 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
4•mbitsnbites•3d ago•0 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
232•jesperordrup•14h ago•80 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
516•theblazehen•3d ago•191 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
93•onurkanbkrc•8h ago•5 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
13•languid-photic•3d ago•4 comments

We mourn our craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
333•ColinWright•3h ago•400 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
254•alainrk•8h ago•412 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
182•1vuio0pswjnm7•10h ago•251 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
611•nar001•8h ago•269 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
35•marklit•5d ago•6 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
27•momciloo•4h ago•5 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
47•rbanffy•4d ago•9 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
124•videotopia•4d ago•39 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
96•speckx•4d ago•108 comments

History and Timeline of the Proco Rat Pedal (2021)

https://web.archive.org/web/20211030011207/https://thejhsshow.com/articles/history-and-timeline-o...
20•brudgers•5d ago•5 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
211•limoce•4d ago•117 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
32•sandGorgon•2d ago•15 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
287•isitcontent•1d ago•38 comments

The End of the Train-Test Split

https://folio.benguzovsky.com/train-test
35•gmays•2mo ago

Comments

elpakal•2mo ago
> Since the data will always be flawed and the test set won't be blind, the machine learning engineer's priority should be spent working with policy teams to improve the data.

It's interesting to watch this dynamic change from data-set-size measuring contests to quality and representativeness. In "A small number of samples can poison LLMs of any size" from Anthropic, they hit on the same shift, but their position is more about security considerations than quality.

https://www.anthropic.com/research/small-samples-poison

henning•2mo ago
> Two months later, you've cracked it

Hehe.

roadside_picnic•2mo ago
> You make an LLM decision tree, one LLM call per policy section, and aggregate the results.

I can never understand why people jump to these weird direct calls to the LLM rather than working with embeddings for classification tasks.
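
For concreteness, the quoted approach looks roughly like this (a minimal sketch of the pattern being described; llm_call and the policy labels are hypothetical stand-ins, not anything from the article):

    # Sketch of the quoted "LLM decision tree": one LLM call per
    # policy section, results aggregated at the end.
    POLICY_SECTIONS = ["violence", "self-harm", "spam"]  # illustrative

    def llm_call(prompt: str) -> str:
        # Placeholder: wire up a real LLM client here.
        return "NO"

    def moderate(content: str) -> dict:
        verdicts = {}
        for section in POLICY_SECTIONS:
            prompt = (
                f"Does the following content violate the '{section}' "
                f"policy? Answer YES or NO.\n\n{content}"
            )
            verdicts[section] = llm_call(prompt).strip().upper() == "YES"
        # One possible aggregation: flag if any section is violated.
        return {"flagged": any(verdicts.values()), "by_section": verdicts}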

I have a hard time believing that

- the context text embedding

- the image vector representation

- the policy text embedding(s)

cannot be combined to create a classification model. Such a model would likely be several orders of magnitude faster than chaining calls to an LLM, and I wouldn't be remotely surprised to see it perform notably better on the task described.
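
A minimal sketch of that embedding-based alternative, assuming the three embeddings are already computed as fixed-length vectors; the dimensions are illustrative and random arrays stand in for real data:

    # Combine precomputed embeddings into one feature vector and train
    # an ordinary classifier instead of chaining LLM calls.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000  # labeled moderation examples

    context_emb = rng.normal(size=(n, 384))  # context text embedding
    image_emb = rng.normal(size=(n, 512))    # image vector representation
    policy_emb = rng.normal(size=(n, 384))   # policy text embedding(s)
    labels = rng.integers(0, 2, size=n)      # violates policy or not

    # Concatenate the three representations into one feature vector.
    X = np.concatenate([context_emb, image_emb, policy_emb], axis=1)

    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    print(clf.predict_proba(X[:5, :]))  # one cheap forward pass per item

Inference here is a single matrix multiply per item, which is where the orders-of-magnitude speed difference over chained LLM calls would come from.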

I have used an LLM as a classifier, and it does make sense in cases of extremely limited data (though it rarely works well enough). But if you're going to be calling the LLM in such complex ways, it's better to stop thinking of this as a classic ML problem and instead think of it as an agentic content moderator.

In this case you can ignore the train/test split in favor of evals which you would create as you would for any other LLM agent workflow.
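
A minimal sketch of such an eval, assuming a small labeled set and a moderate() entry point like the one sketched above (both hypothetical):

    # A labeled eval set and an agreement metric, in place of a
    # formal train/test split.
    EVAL_SET = [
        {"content": "buy cheap pills now!!!", "expected": True},
        {"content": "photos from my hiking trip", "expected": False},
    ]

    def run_evals(moderate) -> float:
        hits = sum(
            moderate(case["content"])["flagged"] == case["expected"]
            for case in EVAL_SET
        )
        return hits / len(EVAL_SET)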

stephantul•2mo ago
I don’t really believe this is a paradigm shift with regard to train/test splits.

Before LLMs you would do a lot of these things; it’s just become a lot easier to get started without training anything. What the author describes is very similar to the standard ML product loop in companies, including it being very difficult to “beat” the incumbent model, because the incumbent has been overfit on the very test set used to compare it to your own model.