frontpage.

France's homegrown open source online office suite

https://github.com/suitenumerique
427•nar001•4h ago•201 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
132•bookofjoe•1h ago•108 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
437•theblazehen•2d ago•157 comments

Leisure Suit Larry's Al Lowe on model trains, funny deaths and Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
26•thelok•1h ago•2 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
86•AlexeyBrin•5h ago•16 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
778•klaussilveira•19h ago•241 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
35•vinhnx•3h ago•4 comments

First Proof

https://arxiv.org/abs/2602.05192
38•samasblack•2h ago•23 comments

Software Factories and the Agentic Moment

https://factory.strongdm.ai/
19•mellosouls•2h ago•18 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
55•onurkanbkrc•4h ago•3 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1027•xnx•1d ago•584 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
172•alainrk•4h ago•225 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
167•jesperordrup•10h ago•61 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
24•rbanffy•4d ago•5 comments

StrongDM's AI team build serious software without even looking at the code

https://simonwillison.net/2026/Feb/7/software-factory/
17•simonw•2h ago•15 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
103•videotopia•4d ago•27 comments

Vinklu Turns Forgotten Plot in Bucharest into Tiny Coffee Shop

https://design-milk.com/vinklu-turns-forgotten-plot-in-bucharest-into-tiny-coffee-shop/
5•surprisetalk•5d ago•0 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
12•marklit•5d ago•0 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
265•isitcontent•20h ago•33 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
152•matheusalmeida•2d ago•42 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
277•dmpetrov•20h ago•147 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
35•matt_d•4d ago•10 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
546•todsacerdoti•1d ago•263 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
418•ostacke•1d ago•110 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
65•helloplanets•4d ago•68 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
364•vecti•22h ago•164 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
16•sandGorgon•2d ago•4 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
338•eljojo•22h ago•206 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
457•lstoll•1d ago•301 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
372•aktau•1d ago•195 comments

Universal pre-training by iterated random computation

https://arxiv.org/abs/2506.20057
37•liamdgray•7mo ago

Comments

liamdgray•7mo ago
Abstract: "We investigate the use of randomly generated data for the sake of pre-training a model. We justify this approach theoretically from the perspective of algorithmic complexity, building on recent research that shows that sequence models can be trained to approximate Solomonoff induction. We derive similar, but complementary theoretical results. We show empirically that synthetically generated data can be used to pre-train a model before the data is seen. We replicate earlier results that models trained this way show zero-shot in-context learning across a variety of datasets, and that this performance improves with scale. We extend earlier results to real-world data, and show that finetuning a model after pre-training offers faster convergence and better generalization."
bionhoward•7mo ago
This is a cool concept, but I can’t help wishing there were more of a comparison between the treatment group and a control group that doesn’t see any universal pre-training data.

It’s good to compare various model sizes, evaluation tasks, and random data generators. I just think the paper would prove its point more effectively if it showed that models of the same size which see this random data learn better from evaluation data later on.

They could even pit the initial checkpoint of the model, before universal pre-training, against the pre-trained checkpoint. If the method works, the one that did UP will win.

Maybe I’m way off; I’ll admit I’ve only skimmed it so far. Seems promising, just wishing for some controls.

yorwba•7mo ago
In figures 2, 4, and 6, the top left end of the training curves represents models that have not seen any pretraining data. In figure 5, they're represented by dashed curves.
visarga•7mo ago
Results are modest, maybe 20-30% fewer training steps to reach target performance. This won't solve the problem of organic data exhaustion. We need 100x more data.

They didn't test against actual language model pretraining, only tested against a random init.

- A: Pre-trained on their synthetic LSTM data -> fine-tuned on Wikipedia

- B: Pre-trained on different natural language corpus -> fine-tuned on Wikipedia

- C: Random initialization -> fine-tuned on Wikipedia

They only test A vs C, not A vs B.
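
(A toy sketch of that three-arm comparison, with a throwaway model and dummy data streams standing in for the synthetic LSTM corpus, a natural-language corpus, and Wikipedia. Everything here is illustrative of the experimental design, not the paper's actual setup.)

```python
# Hypothetical A/B/C harness: same architecture, three different
# pre-training regimes, identical fine-tuning on the downstream corpus.
import copy
import torch

VOCAB, DIM, STEPS = 64, 32, 200

class TinyLM(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = torch.nn.Embedding(VOCAB, DIM)
        self.rnn = torch.nn.LSTM(DIM, DIM, batch_first=True)
        self.head = torch.nn.Linear(DIM, VOCAB)
    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.head(h)

def train(model, next_batch, steps):
    """Next-token training loop; returns the final loss."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        x = next_batch()                         # (batch, seq) token ids
        logits = model(x[:, :-1])
        loss = torch.nn.functional.cross_entropy(
            logits.reshape(-1, VOCAB), x[:, 1:].reshape(-1))
        opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Dummy stand-ins for the three data sources; in the real experiment these
# would be the synthetic LSTM stream, a natural-language corpus, and Wikipedia.
def synthetic_batch():  return torch.randint(VOCAB, (8, 65))
def natural_batch():    return torch.randint(VOCAB, (8, 65))
def downstream_batch(): return torch.randint(VOCAB, (8, 65))

torch.manual_seed(0)
base = TinyLM()

arm_a = copy.deepcopy(base); train(arm_a, synthetic_batch, STEPS)  # A: universal pre-training
arm_b = copy.deepcopy(base); train(arm_b, natural_batch, STEPS)    # B: natural-language pre-training
arm_c = copy.deepcopy(base)                                        # C: random init

for name, m in [("A", arm_a), ("B", arm_b), ("C", arm_c)]:
    print(name, train(m, downstream_batch, STEPS))                 # fine-tune and report final loss
```
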

WithinReason•7mo ago
This paper addresses the problem of running out of data. You can't do B when you've run out of data, so it's irrelevant.
impossiblefork•7mo ago
20-30% isn't modest. I do think there's a big problem, though: it's character-level prediction.

It's not obvious how to generate this kind of good synthetic data when it has to be fed to a tokenized model.
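
(One possible way to bridge that gap, purely as an illustration and not something the paper does: fit a subword tokenizer, e.g. byte-level BPE via the Hugging Face tokenizers library, on the synthetic character stream itself, then feed the resulting ids to the tokenized model. The random-printable-character generator below is just a stand-in for the random-computation source.)

```python
# Sketch: train a BPE tokenizer on the synthetic character stream, then
# encode that stream into token ids a subword-level model could consume.
import random
import string

from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import ByteLevel

def synthetic_text(n_chars=2048):
    # Stand-in for the random-computation generator: random printable characters.
    return "".join(random.choice(string.printable) for _ in range(n_chars))

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = ByteLevel()
trainer = BpeTrainer(vocab_size=1024, special_tokens=["[UNK]"])
tokenizer.train_from_iterator((synthetic_text() for _ in range(100)), trainer=trainer)

ids = tokenizer.encode(synthetic_text()).ids   # token ids for a tokenized model
print(len(ids), ids[:10])
```
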