frontpage.

We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
44•ColinWright•52m ago•11 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
53•thelok•3h ago•6 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
14•surprisetalk•1h ago•7 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
119•AlexeyBrin•7h ago•22 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
90•alephnerd•1h ago•33 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
54•vinhnx•4h ago•7 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
822•klaussilveira•21h ago•248 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
99•1vuio0pswjnm7•8h ago•114 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1057•xnx•1d ago•607 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
75•onurkanbkrc•6h ago•5 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
475•theblazehen•2d ago•175 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
198•jesperordrup•11h ago•68 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
543•nar001•5h ago•252 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
8•languid-photic•3d ago•1 comment

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
212•alainrk•6h ago•328 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
34•rbanffy•4d ago•6 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
27•marklit•5d ago•1 comment

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
113•videotopia•4d ago•30 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
72•speckx•4d ago•74 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
66•mellosouls•4h ago•72 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
273•isitcontent•21h ago•37 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
199•limoce•4d ago•111 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
285•dmpetrov•22h ago•153 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
21•sandGorgon•2d ago•11 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
155•matheusalmeida•2d ago•48 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
554•todsacerdoti•1d ago•268 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
424•ostacke•1d ago•110 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
42•matt_d•4d ago•17 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
472•lstoll•1d ago•310 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
348•eljojo•1d ago•214 comments

Hill Space: Neural nets that do perfect arithmetic (to 10⁻¹⁶ precision)

https://hillspace.justindujardin.com/
70•peili7•6mo ago

Comments

roomey•6mo ago
Would someone be able to say whether this is related to encoding data as polar coordinates? At my knowledge level it looks like it could be.

For some context: to learn more about quantum computing, I was trying to build an evolutionary-style ML algorithm to generate quantum circuits from the quantum machine primitives. The type where the fittest survive and mutate.

In terms of computing (this was a few years ago), I was limited in the number of qubits I could simulate (as there had to be many simulations).

The solution I found was to encode data into the spin of the qubit (which is an analog value), so I used polar coordinates to "encode" the data.

The matrix values looked a lot like this, so I was wondering whether Hill space is related. I was having to make things up as I went along, and a pointer to the right area to learn more about would be useful.
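
For illustration, a minimal sketch of that kind of angle ("polar") encoding on a single simulated qubit. Everything here (the function names, the RY-based scheme, the scaling into [0, pi]) is an assumption made up for the example, not the commenter's actual code:

    import numpy as np

    def ry(theta):
        # Single-qubit RY rotation matrix (a real-valued rotation).
        c, s = np.cos(theta / 2), np.sin(theta / 2)
        return np.array([[c, -s],
                         [s,  c]])

    def encode(x, x_max=1.0):
        # Map a scalar in [0, x_max] to a rotation angle, then to a state.
        theta = np.pi * x / x_max                 # scale data into [0, pi]
        return ry(theta) @ np.array([1.0, 0.0])   # RY(theta)|0>

    def decode(state, x_max=1.0):
        # Recover the scalar from the amplitudes (inverse of the encoding).
        theta = 2 * np.arctan2(state[1], state[0])
        return x_max * theta / np.pi

    print(decode(encode(0.37)))  # ~0.37, up to float rounding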

yorwba•6mo ago
The author seems a bit too excited about the discovery that the dot product of the vectors [a, b] and [1, 1] is a + b. I don't think the problem with getting neural nets to do arithmetic is that they literally can't add two coefficients of a vector, but that the input and output modalities are something different (e.g. digit sequences) and you want to use a generic architecture that can also do other tasks (e.g. text prediction in general). If you knew in advance that you just need to calculate a + b, you could skip the neural network altogether.
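
Concretely, the trivial case being described (a toy sketch of the point, not the post's code):

    import numpy as np

    # A "neural" adder is just a fixed linear layer:
    # dot([a, b], [1, 1]) = a + b.
    w = np.array([1.0, 1.0])

    def add(a, b):
        return np.dot(np.array([a, b]), w)

    print(add(3.0, 4.0))  # 7.0, exact to float64 precision

    # The hard part is learning w = [1, 1] from generic inputs
    # (e.g. digit sequences) with an architecture that must also
    # handle unrelated tasks like text prediction.
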
tatjam•6mo ago
I'm going to guess the main takeaway is that the weights can be trained reliably if your transfer functions are sufficiently "stiff"? It's not as if you need training for the operations presented (anyone could choose the weights manually), but it could maybe extend to more complex mathematical operations.

To be honest, it does feel a bit like Claude output (which the author states they used): it reads convincingly "academic" but seems like a drawn-out tautology. For example, it's no surprise its precision matches floating point, as it's essentially carrying out the exact same operations on the CPU.

Please do correct me if I'm wrong! I've not read the cited paper on "Neural Arithmetic Logic Units", which may clear some things up.
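
For reference, a sketch of the weight construction from the cited NALU paper, which appears to be what the post's "Hill space" refers to; treat this as a paraphrase under that assumption, not the post's exact code. Each effective weight is tanh(w_hat) * sigmoid(m_hat), which saturates toward the discrete set {-1, 0, +1} as the raw parameters grow; once saturated, the layer is doing plain float arithmetic, hence the float-level precision:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # NALU-style effective weight: saturates toward {-1, 0, +1}
    # as the raw parameters grow -- the "stiffness" above.
    def effective_weight(w_hat, m_hat):
        return np.tanh(w_hat) * sigmoid(m_hat)

    for s in (1.0, 5.0, 50.0):
        print(s, effective_weight(s, s))  # 0.557..., 0.993..., 1.0

    # Once saturated at exactly [1, 1], the layer is ordinary float
    # addition, so its precision is exactly float precision:
    w = effective_weight(np.array([50.0, 50.0]), np.array([50.0, 50.0]))
    print(np.dot(np.array([2.5, 4.25]), w))  # 6.75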

trueismywork•6mo ago
The stiff-function observation is not new; it has been part of general linear-solver theory for decades, if not centuries. But stiff functions do not scale the way training requires.

moralestapia•6mo ago
You didn't get the point of this.

The point of this is not to calculate a + b; that is trivial, as you smartly pointed out.

The point of this is to be able to solve arithmetic problems in an architecture that is compatible with neural networks.