
Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

https://github.com/localgpt-app/localgpt
122•yi_wang•4h ago•34 comments

Haskell for all: Beyond agentic coding

https://haskellforall.com/2026/02/beyond-agentic-coding
53•RebelPotato•3h ago•10 comments

SectorC: A C Compiler in 512 bytes (2023)

https://xorvoid.com/sectorc.html
247•valyala•12h ago•48 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
165•surprisetalk•11h ago•155 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
195•mellosouls•14h ago•349 comments

Total surface area required to fuel the world with solar (2009)

https://landartgenerator.org/blagi/archives/127
16•robtherobber•4d ago•5 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
73•gnufx•10h ago•59 comments

LLMs as the new high level language

https://federicopereiro.com/llm-high/
62•swah•4d ago•113 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
180•AlexeyBrin•17h ago•35 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
171•vinhnx•15h ago•17 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
319•jesperordrup•22h ago•97 comments

First Proof

https://arxiv.org/abs/2602.05192
134•samasblack•14h ago•77 comments

Vouch

https://twitter.com/mitchellh/status/2020252149117313349
62•chwtutha•2h ago•10 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
81•momciloo•12h ago•16 comments

Wood Gas Vehicles: Firewood in the Fuel Tank (2010)

https://solar.lowtechmagazine.com/2010/01/wood-gas-vehicles-firewood-in-the-fuel-tank/
31•Rygian•2d ago•7 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
104•thelok•13h ago•22 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
40•mbitsnbites•3d ago•4 comments

FDA intends to take action against non-FDA-approved GLP-1 drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
112•randycupertino•7h ago•233 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
577•theblazehen•3d ago•208 comments

Why there is no official statement from Substack about the data leak

https://techcrunch.com/2026/02/05/substack-confirms-data-breach-affecting-email-addresses-and-pho...
13•witnessme•1h ago•4 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
304•1vuio0pswjnm7•18h ago•482 comments

Microsoft account bugs locked me out of Notepad – Are thin clients ruining PCs?

https://www.windowscentral.com/microsoft/windows-11/windows-locked-me-out-of-notepad-is-the-thin-...
144•josephcsible•9h ago•177 comments

I write games in C (yes, C) (2016)

https://jonathanwhiting.com/writing/blog/games_in_c/
188•valyala•12h ago•173 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
233•limoce•4d ago•125 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
904•klaussilveira•1d ago•276 comments

Selection rather than prediction

https://voratiq.com/blog/selection-rather-than-prediction/
33•languid-photic•4d ago•15 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
150•speckx•4d ago•235 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
146•videotopia•4d ago•48 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
303•isitcontent•1d ago•39 comments

The silent death of good code

https://amit.prasad.me/blog/rip-good-code
91•amitprasad•6h ago•87 comments

Hill Space: Neural nets that do perfect arithmetic (to 10⁻¹⁶ precision)

https://hillspace.justindujardin.com/
70•peili7•6mo ago

Comments

roomey•6mo ago
Would someone be able to say whether this is somehow related to encoding data as polar coordinates? At my level of knowledge it looks like it could be.

For some context: to learn more about quantum computing, I was trying to build an evolutionary-style ML algorithm to generate quantum circuits using the quantum machine primitives, the kind where the fittest survive and mutate.

In terms of compute (this was a few years ago), I was limited in the number of qubits I could simulate (as there had to be many simulations).

The solution I found was to encode data into the spin of the qubit (which is an analog value), so I used polar coordinates to "encode data".

The matrix values looked a lot like this, so I was wondering whether Hill space is related. I was making some of this up as I went along, and it would be useful to find out the right area to learn more about.
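
A rough sketch of the kind of angle encoding described above (illustrative only; the function name and the [0, 1] input range are my own assumptions, not from the comment):

    import numpy as np

    # Map a scalar in [0, 1] to a single-qubit state via a polar angle.
    # Generic "angle encoding"; not necessarily the exact scheme above.
    def encode(value):
        theta = value * np.pi                # polar angle on the Bloch sphere
        return np.array([np.cos(theta / 2),  # amplitude of |0>
                         np.sin(theta / 2)]) # amplitude of |1>

    state = encode(0.25)
    print(state, np.sum(state ** 2))  # amplitudes are normalized: sum of squares == 1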

yorwba•6mo ago
The author seems a bit too excited about the discovery that the dot product of the vectors [a, b] and [1, 1] is a + b. I don't think the problem with getting neural nets to do arithmetic is that they literally can't add two coefficients of a vector, but that the input and output modalities are something different (e.g. digit sequences) and you want to use a generic architecture that can also do other tasks (e.g. text prediction in general). If you knew in advance that you just need to calculate a + b, you could skip the neural network altogether.
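
For concreteness, a minimal NumPy sketch of that observation (values illustrative):

    import numpy as np

    # A "neural" adder is just a dot product with fixed weights [1, 1].
    w = np.array([1.0, 1.0])
    x = np.array([3.7, -1.2])
    print(w @ x)  # 2.5000000000000004: a + b, up to float64 rounding

    # The hard part is upstream: a real model sees digit tokens like
    # "3", ".", "7" rather than a clean vector [a, b], and has to parse
    # them before any fixed weight can add them.
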
tatjam•6mo ago
I'm going to guess the main takeaway is that the weights can be trained reliably if your transfer functions are sufficiently "stiff"? Not that you need training for the operations presented (anyone could choose the weights manually), but it could maybe extend to more complex mathematical operations?

To be honest, it does feel a bit like Claude output (which the author states they used): it reads convincingly "academic", but it seems like a drawn-out tautology. For example, it's no surprise that its precision matches floating point, since it's essentially carrying out the exact same operations on the CPU.

Please do correct me if I'm wrong! I've not read the cited paper on "Neural Arithmetic Logic Units", which may clear some stuff up.
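
If I've read the construction right (this borrows the W = tanh(Ŵ) · σ(M̂) weight parametrization from the cited NALU paper; the numbers are illustrative):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # NALU-style weights: W = tanh(W_hat) * sigmoid(M_hat).
    # Both factors saturate, so training pushes each weight toward the
    # "corners" -1, 0, or +1; that saturation is the stiffness above.
    W_hat = np.array([[10.0, 10.0]])  # large magnitudes => saturated
    M_hat = np.array([[10.0, 10.0]])
    W = np.tanh(W_hat) * sigmoid(M_hat)
    print(W)      # ~[[1.0, 1.0]]

    x = np.array([3.7, -1.2])
    print(W @ x)  # ~2.5, i.e. a + b at float precision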

trueismywork•6mo ago
The stiff-function observation is not new; it has existed in general linear-solver theory for decades, if not centuries. But stiff functions do not scale the way training requires.
moralestapia•6mo ago
You didn't get the point of this.

The point of this is not to calculate a + b; that is trivial, as you smartly pointed out.

The point of this is to be able to solve arithmetic problems in an architecture that is compatible with neural networks.