
We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
80•ColinWright•1h ago•43 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
21•surprisetalk•1h ago•19 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
121•AlexeyBrin•7h ago•24 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
105•alephnerd•2h ago•56 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
58•vinhnx•4h ago•7 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
824•klaussilveira•21h ago•248 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
54•thelok•3h ago•6 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
105•1vuio0pswjnm7•8h ago•123 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1058•xnx•1d ago•608 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
76•onurkanbkrc•6h ago•5 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
479•theblazehen•2d ago•175 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
205•jesperordrup•11h ago•69 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
549•nar001•6h ago•253 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
217•alainrk•6h ago•335 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
8•languid-photic•3d ago•1 comment

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
35•rbanffy•4d ago•7 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
28•marklit•5d ago•2 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
4•momciloo•1h ago•0 comments

I Write Games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
4•valyala•1h ago•1 comment

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
113•videotopia•4d ago•30 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
4•valyala•1h ago•0 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
73•speckx•4d ago•74 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
68•mellosouls•4h ago•73 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
273•isitcontent•22h ago•38 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
199•limoce•4d ago•111 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
285•dmpetrov•22h ago•153 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
155•matheusalmeida•2d ago•48 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
21•sandGorgon•2d ago•11 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
555•todsacerdoti•1d ago•268 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
43•matt_d•4d ago•18 comments

Is there a balance to be struck between simple hierarchical models and more complex hierarchical models that augment the simple frameworks with more modeled interactions when analyzing real data?

https://statmodeling.stat.columbia.edu/2024/05/26/is-there-a-balance-to-be-struck-between-simple-hierarchical-models-and-more-complex-hierarchical-models-that-augment-the-simple-frameworks-with-more-modeled-interactions-when-analyzing-real-data/
40•luu•9mo ago

Comments

Onawa•9mo ago
Full Title: Is there a balance to be struck between simple hierarchical models and more complex hierarchical models that augment the simple frameworks with more modeled interactions when analyzing real data?
a-dub•9mo ago
"When working on your particular problem, start with simple comparisons and then fit more and more complicated models until you have what you want."

sounds algorithmic...
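It is algorithmic, more or less. A runnable sketch of the quoted advice (my construction, not from the post): fit progressively more complex models and judge each on held-out data, with polynomial degree standing in for model complexity on synthetic quadratic data.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0.0, 0.3, x.size)

# Alternate points between training and honest held-out scoring.
train, test = np.arange(0, 200, 2), np.arange(1, 200, 2)

def heldout_mse(degree):
    """Fit a polynomial of the given degree on train, score on test."""
    coeffs = np.polyfit(x[train], y[train], degree)
    return float(np.mean((y[test] - np.polyval(coeffs, x[test])) ** 2))

# Walk up the complexity ladder: held-out error drops sharply until the
# model matches the true (quadratic) structure, then flattens out.
mses = [heldout_mse(d) for d in range(8)]
best_degree = int(np.argmin(mses))
print(best_degree, [round(m, 3) for m in mses])
```

The "until you have what you want" part is the judgment call the algorithm can't make: here it shows up as deciding when further gains in held-out error are no longer worth the added complexity.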

mnky9800n•9mo ago
Yes, and you can even build symbolic engines that do this for you. I think the real question we must ask ourselves as data scientists or statisticians or whatever is whether we believe these data models represent the space of data fully or only by happenstance. And if by happenstance, is it because the data doesn’t capture the underlying processes that produced it, or are those processes uncapturable in this way, so that function approximators like neural networks or gradient boosting machines are better? And is that because those function approximators capture interactions between the driving processes that otherwise go unseen, or because those processes have fractional dimensions controlling their impact that are not captured by data models? This is all summed up well by Leo Breiman’s “two cultures” paper, in my opinion. I have gone back and forth on which “culture” is the correct representation of how processes produce data. If you buy that only function approximators truly capture the complexity of whatever processes you are observing, then you have to wonder why physics works so well. That’s because, at least in my opinion, from the statistical point of view physics has spent centuries developing equations that are linear combinations of variables, which are essentially data models in Breiman’s sense. I hope this opinion generates discussion, because I don’t know what the answer is, or whether it matters that there is one.
a-dub•9mo ago
seems to me that one approach is fueled by data and the other is fueled by understanding. in the former, the observations form a view of behavior which is then modeled with high fidelity. in the latter, active inquiry, adversarial data collection and careful reasoning produce simpler models of hypothesized underlying processes that often prove to have nearly perfect generalization.

the interesting future is probably the one where the former produces new building blocks for the latter. (ie, the computer generates new simple and easy to understand constructs from which it explains previously not understood or well modeled phenomena.)

joe_the_user•9mo ago
Well, my impression is that the statistical paradigm itself limits the complexity of a model through its basic aims and measures. In particular, a statistical model aims to be an unbiased predictor of a variable, whereas machine learning/"AI" just aims for prediction and doesn't care about bias in the statistical sense.
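The unbiasedness-versus-prediction trade-off is easy to see in a toy simulation (my illustration, not the commenter's): a deliberately biased estimator can beat the unbiased one on pure mean squared prediction error.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, reps = 0.2, 1.0, 10, 20000

# Many repeated samples of size n from N(mu, sigma^2).
samples = rng.normal(mu, sigma, size=(reps, n))
xbar = samples.mean(axis=1)   # sample mean: unbiased for mu
shrunk = 0.5 * xbar           # biased: shrinks the estimate toward zero

mse_unbiased = float(np.mean((xbar - mu) ** 2))   # ~ sigma^2/n = 0.10
mse_shrunk = float(np.mean((shrunk - mu) ** 2))   # ~ (0.5*mu)^2 + 0.25*sigma^2/n = 0.035
print(mse_unbiased, mse_shrunk)
```

When the true mean is small relative to the noise, the bias introduced by shrinkage costs less than the variance it removes, which is exactly the bargain many ML methods make by default.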
klysm•9mo ago
I think they have totally different goals typically. For example, let’s say we are doing a sampling procedure. How do you estimate the sampling error? I’m not aware of a machine learning technique that will help, but you can use Bayesian and MCMC techniques
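A minimal illustration of the MCMC route (my sketch, not klysm's setup): a random-walk Metropolis sampler for the mean of a Gaussian with known noise scale and a flat prior. The spread of the posterior chain recovers the sampling error, matching the analytic standard error sigma/sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(5.0, 2.0, size=100)   # synthetic sample
sigma = 2.0                             # noise scale, assumed known

def log_post(mu):
    # Flat prior on mu, Gaussian likelihood with known sigma.
    return -0.5 * np.sum((data - mu) ** 2) / sigma**2

# Random-walk Metropolis over mu.
mu, log_p, chain = data.mean(), None, []
log_p = log_post(mu)
for _ in range(20000):
    prop = mu + rng.normal(0.0, 0.3)
    log_p_prop = log_post(prop)
    if np.log(rng.uniform()) < log_p_prop - log_p:   # accept/reject step
        mu, log_p = prop, log_p_prop
    chain.append(mu)

posterior_sd = float(np.std(chain[2000:]))    # MCMC estimate of sampling error
analytic_se = sigma / np.sqrt(len(data))      # = 0.2 for this setup
print(posterior_sd, analytic_se)
```

Nothing here needed a closed-form standard error; the same sampler works when the model grows interactions or hierarchy and no analytic answer exists, which is the point being made.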
usgroup•9mo ago
I think this is accurate but mostly because statistical modelling aims for interpretable parameters. That very strongly regularises complexity.