frontpage.

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
39•thelok•2h ago•3 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
101•AlexeyBrin•6h ago•18 comments

First Proof

https://arxiv.org/abs/2602.05192
52•samasblack•3h ago•39 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
789•klaussilveira•20h ago•243 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
39•vinhnx•3h ago•5 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
63•onurkanbkrc•5h ago•5 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1040•xnx•1d ago•587 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
464•theblazehen•2d ago•165 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
510•nar001•5h ago•235 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
184•jesperordrup•10h ago•65 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
51•mellosouls•3h ago•52 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
63•1vuio0pswjnm7•7h ago•60 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
189•alainrk•5h ago•282 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
27•rbanffy•4d ago•5 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
19•marklit•5d ago•0 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
108•videotopia•4d ago•27 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
59•speckx•4d ago•62 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
268•isitcontent•21h ago•34 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
198•limoce•4d ago•107 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
281•dmpetrov•21h ago•150 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
152•matheusalmeida•2d ago•47 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
169•bookofjoe•2h ago•153 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
549•todsacerdoti•1d ago•266 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
422•ostacke•1d ago•110 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
39•matt_d•4d ago•14 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
365•vecti•23h ago•167 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
465•lstoll•1d ago•305 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
342•eljojo•23h ago•210 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
66•helloplanets•4d ago•70 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
18•sandGorgon•2d ago•8 comments

We Must Seize the Means of Compute

https://thompson2026.com/blog/seize-the-means-of-compute/
24•NickForLiberty•1mo ago

Comments

EGreg•1mo ago
I disagree with this person on utilitarian grounds. Nay, even on grounds of existential risk to humanity.

And just like him, when it comes to AI, I am making a huge exception to my usual principles.

My usual principles are that open-source gift economies benefit the world and break people free from gatekeepers. The World Wide Web liberated people from having to pay payola to radio stations just to get their songs played, and from TV, magazines, newspapers, etc. It let anyone publish worldwide within a second, and make changes just as easily. It is what led to Facebook, Amazon, Google, LinkedIn, X, etc. even existing (walled gardens like AOL would never have allowed it).

Wikipedia has made everyone forget about Britannica and Encarta. Linux runs most computers in the world. Open protocols like VoIP and packet switching brought marginal costs of personal communication down to zero. And so on and so forth.

But when it comes to AI, we can't have everyone do whatever they want with AI models, for the same reason we can't give everyone nuclear weapons technology. The probability that no one will misuse it becomes infinitesimally small very fast. And it takes just a few people to create a designer virus with a long incubation period that infects and kills everyone, to give just one example. Even in the digital world we are headed toward a dark forest where everything is adversarial, nothing can be trusted, and anyone's reputation, wealth, and peace of mind can be destroyed at scale by swarms of agents. That's coming.

For now, we know where the compute is. We can see it from space, even. We can trace the logistics, and we can make sure that it runs only "safe" models that refuse to do these things. All the stories you read about some provider "stopping" large-scale hacking exist only because the provider ran the servers.

So yes, for this one thing, I make a strong exception. I don't want to see proliferation of AI models everywhere. Sadly, though, as long as the world runs on "competition" instead of "cooperation", destruction is inevitable. Because if we don't do it, then China will, etc. etc.

There have been a few times in recent history when humanity successfully came together to ban dangerous things: the chemical weapons ban, nuclear non-proliferation, the Montreal Protocol on CFCs (repairing the hole in the ozone layer). We can still do this for AI models running on dark compute pools. But time is running out. Chaos is coming.

wswope•1mo ago
Does his degrowth proposal not seem like the next best option if you believe Pandora’s box is open?

Your train of thought makes sense, but relies on the assumption that people and small groups wouldn’t keep tinkering at scale to do bad things even if we had a united world government trying to stop it.

EGreg•1mo ago
Better to have systems in place to stop people from stockpiling weapons than not to have them. Just because not all murders can be prevented doesn't mean we shouldn't have laws and systems in place to try to prevent as many as we can. The FBI and Interpol do all kinds of stuff, but when it comes to AI they are letting the horse leave the barn. In any case, I prefer to have systems that prevent all kinds of problems (e.g. blockchain-based smart contracts, yes I know, LOL) than to let them happen and try to clean up the mess after the fact.

In general, cleaning up a mess is easier when the mess isn't self-preserving and being grown at an exponential scale by swarms of agents running on dark compute.

godelski•1mo ago
Even if he doesn't win, it may be useful to have someone like this in the race. Don't forget that you don't have to win to make change. These small players are often good at signaling to big players that people really do care about certain issues. It helps them become less disconnected.

jmclnx•1mo ago
>The hardware is already here. The gaming PCs and laptops we use every day are powerful enough to run these systems if we optimize the software correctly.

I agree with this, but there is one issue: AFAIK, the languages used do not lend themselves to optimization. And I expect the databases in use have the same issue.

It is almost like you need to put optimizations in the hardware, kind of like what IBM does with its mainframes for transaction processing. Instead, AI companies are doing the usual "race to be there first", ignoring the consequences of the design.
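
For what it's worth, the article's claim is already partly borne out by C/C++ inference runtimes like llama.cpp, which run 4-bit quantized models on ordinary gaming hardware. A minimal sketch via its llama-cpp-python bindings (the model path and parameter choices here are hypothetical placeholders):

    # Minimal sketch: 4-bit quantized inference on consumer hardware via
    # llama-cpp-python (bindings over the C/C++ llama.cpp core).
    # The model file is a hypothetical placeholder; any GGUF checkpoint works.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/example-7b-q4_k_m.gguf",  # hypothetical path
        n_ctx=4096,       # context window
        n_gpu_layers=-1,  # offload all layers to a consumer GPU if present
    )

    out = llm("Summarize transaction processing in one sentence.", max_tokens=64)
    print(out["choices"][0]["text"])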

godelski•1mo ago
I don't think it is so much the languages as the algorithms themselves. I'll put it this way: my most cited paper continually got rejected from AI conferences because 1) it wasn't novel enough and 2) "why would you want to train a transformer from scratch?". The big reason for most of my citations is that researchers outside ML wanted to use transformers, and either tuning or distilling a large model was insufficient (either computationally intractable, or their problem didn't transfer-learn very well, so they had to do lots of training anyway).

The ML research community is very focused on scaling. As an example that doesn't (fully) deanonymize me, look at how people reacted to things like KAN or Mamba. Even in HN comments, questions about scale are always front and center. Don't get me wrong, scale is an important question, but the context matters. These questions are out of place because they don't give the new frameworks a chance to scale. As any sane researcher would do, you scale iteratively. To be resource efficient you test your ideas at a small scale and then move up, fixing other problems along the way. It's not even just a matter of resource efficiency, but of solving the problems at all. By jumping straight to scale you add a lot of complexity into the mix and make it difficult to decouple the problems. This hyper-fixation on making everything bigger is really hindering us. Yeah, sure, there are papers that do make it out (I gave examples above), but these are still harder for smaller labs to pursue and get through review (review isn't completely blind, and we'd be naive to ignore the politics that go on).
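
A minimal sketch of what "scale iteratively" can look like in PyTorch; the sizes are arbitrary examples, and in a real workflow the gate for moving up would be a validation metric, not a parameter count:

    # Minimal sketch of iterative scaling: instantiate the candidate
    # architecture at increasing sizes and sanity-check the cost before
    # committing compute to a full run. Sizes are arbitrary examples.
    import torch.nn as nn

    def make_encoder(d_model: int, n_layers: int) -> nn.Module:
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=d_model // 64, batch_first=True
        )
        return nn.TransformerEncoder(layer, num_layers=n_layers)

    for d_model, n_layers in [(128, 2), (512, 8), (1024, 16)]:
        model = make_encoder(d_model, n_layers)
        n_params = sum(p.numel() for p in model.parameters())
        # In practice you'd train and evaluate here, and only move to the
        # next size if the idea still beats the baseline at this one.
        print(f"d_model={d_model:5d} layers={n_layers:3d} params={n_params:,}")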

daft_pink•1mo ago
I just don't believe that this is a long-term problem. The costs of chips are going to come down, the machines are going to be more optimized for these types of functions, and local AI is going to become a bigger and bigger thing.

enzosaba•1mo ago
I agree. It'll be like the early days of personal computers, when only big mainframes existed. But with AI there's a problem: there will always be someone with a bigger machine than yours, and an AI smarter than the one you can run locally. This is something that scares me a little...

fittingopposite•1mo ago
Just wondered: are there any studies that estimate the approximate minimum size of human knowledge/reasoning? The article mentions a 4GB model. Is it theoretically possible to compress human knowledge to this size without losing intelligence? There must be an approximate minimum size somewhere that is independent of the RAM market. Curious to hear if anyone is aware of any theoretical estimates.
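
As a rough anchor, the arithmetic on the 4GB figure itself is straightforward (this says nothing about whether such a model retains "intelligence"):

    # Back-of-envelope: how many parameters fit in a 4 GB weight budget at
    # common quantization levels. Pure arithmetic; it makes no empirical
    # claim about how much knowledge or reasoning those parameters hold.
    GB = 1024**3
    budget_bytes = 4 * GB

    for bits in (16, 8, 4):
        params = budget_bytes * 8 / bits
        print(f"{bits:2d}-bit weights -> ~{params / 1e9:.0f}B parameters in 4 GB")

So roughly 2B parameters at 16-bit, 4B at 8-bit, and ~9B at 4-bit. On the theory side, Shannon's classic experiments put the entropy of English text at around 1 bit per character, which is where lower-bound discussions usually start, but I'm not aware of a rigorous estimate for "human knowledge" as a whole.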