
How does a screen even work?

https://www.makingsoftware.com/chapters/how-a-screen-works
43•chkhd•2h ago•3 comments

Reading Neuromancer for the first time in 2025

https://mbh4h.substack.com/p/neuromancer-2025-review-william-gibson
236•keiferski•8h ago•218 comments

Bypassing Google's big anti-adblock update

https://0x44.xyz/blog/web-request-blocking/
869•deryilz•21h ago•732 comments

Show HN: Learn LLMs LeetCode Style

https://github.com/Exorust/TorchLeet
30•Exorust•3h ago•2 comments

Forget borrow checkers: C3 solved memory lifetimes with scopes

https://c3-lang.org/blog/forget-borrow-checkers-c3-solved-memory-lifetimes-with-scopes/
49•lerno•2d ago•35 comments

Axon's Draft One AI Police Report Generator Is Designed to Defy Transparency

https://www.eff.org/deeplinks/2025/07/axons-draft-one-designed-defy-transparency
111•zdw•2d ago•51 comments

Notes on Graham's ANSI Common Lisp (2024)

https://courses.cs.northwestern.edu/325/readings/graham/graham-notes.html
58•oumua_don17•3d ago•18 comments

The upcoming GPT-3 moment for RL

https://www.mechanize.work/blog/the-upcoming-gpt-3-moment-for-rl/
108•jxmorris12•3d ago•37 comments

Local Chatbot RAG with FreeBSD Knowledge

https://hackacad.net/post/2025-07-12-local-chatbot-rag-with-freebsd-knowledge/
10•todsacerdoti•2h ago•0 comments

The Decipherment of the Dhofari Script

https://www.science.org/content/article/mysterious-pre-islamic-script-oman-finally-deciphered
33•pseudolus•6h ago•12 comments

Zig's New Async I/O

https://kristoff.it/blog/zig-new-async-io/
282•afirium•17h ago•214 comments

Bay Area restaurants are vetting your social media before you even walk in

https://www.sfgate.com/food/article/data-deep-dives-bay-area-fine-dining-restaurants-20404434.php
14•borski•1h ago•21 comments

Understanding Tool Calling in LLMs – Step-by-Step with REST and Spring AI

https://muthuishere.medium.com/understanding-tool-function-calling-in-llms-step-by-step-examples-in-rest-and-spring-ai-2149ecd6b18b
45•muthuishere•6h ago•6 comments

Chrome's hidden X-Browser-Validation header reverse engineered

https://github.com/dsekz/chrome-x-browser-validation-header
299•dsekz•2d ago•98 comments

The Robot Sculptors of Italy

https://www.bloomberg.com/features/2025-robot-sculptors-marble/
6•helsinkiandrew•3d ago•2 comments

Gaming cancer: How citizen science games could help cure disease

https://thereader.mitpress.mit.edu/how-citizen-science-games-could-help-cure-disease/
76•pseudolus•6h ago•31 comments

Monitoring My Homelab, Simply

https://b.tuxes.uk/simple-homelab-monitoring.html
32•Bogdanp•3d ago•10 comments

Let me pay for Firefox

https://discourse.mozilla.org/t/let-me-pay-for-firefox/141297
552•csmantle•8h ago•422 comments

Show HN: ArchGW – an intelligent edge and service proxy for agents

23•honorable_coder•16h ago•5 comments

Hacking Coroutines into C

https://wiomoc.de/misc/posts/hacking_coroutines_into_c.html
132•jmillikin•15h ago•32 comments

Lua beats MicroPython for serious embedded devs

https://www.embedded.com/why-lua-beats-micropython-for-serious-embedded-devs
45•willhschmid•8h ago•31 comments

Switching to Claude Code and VSCode Inside Docker

https://timsh.org/claude-inside-docker/
209•timsh•2d ago•117 comments

Aeron: Efficient reliable UDP unicast, UDP multicast, and IPC message transport

https://github.com/aeron-io/aeron
61•todsacerdoti•20h ago•30 comments

A Mental Model for C++ Coroutine

https://uvdn7.github.io/cpp-coro/
8•uvdn7•1d ago•1 comment

Parse, Don’t Validate – Some C Safety Tips

https://www.lelanthran.com/chap13/content.html
98•lelanthran•4d ago•55 comments

C++: Maps on Chains

http://bannalia.blogspot.com/2025/07/maps-on-chains.html
36•signa11•2d ago•14 comments

Experimental imperative-style music sequence generator engine

https://github.com/renoise/pattrns
47•bwidlar•4d ago•6 comments

Lost Chapter of Automate the Boring Stuff: Audio, Video, and Webcams in Python

https://inventwithpython.com/blog/lost-av-chapter.html
191•AlSweigart•23h ago•12 comments

Edward Burtynsky's monumental chronicle of the human impact on the planet

https://www.newyorker.com/culture/photo-booth/earths-poet-of-scale
75•pseudolus•13h ago•11 comments

Capturing the International Space Station (2022)

https://cosmicbackground.io/blogs/learn-about-how-these-are-captured/capturing-the-international-space-station
28•LorenDB•3d ago•2 comments

Hill Space: Neural nets that do perfect arithmetic (to 10⁻¹⁶ precision)

https://hillspace.justindujardin.com/
59•peili7•10h ago

Comments

roomey•7h ago
Would someone be able to say whether this is somehow related to encoding data as polar coordinates? At my knowledge level it looks like it could be.

For some context: to learn more about quantum computing, I was trying to build an evolutionary-style ML algo to generate quantum circuits using the quantum machine primitives. The type where the fittest survive and mutate.

In terms of compute (this was a few years ago), I was limited in the number of qubits I could simulate (as there had to be many simulations).

The solution I found was to encode data into the spin of the qubit (which is an analog value), so I used polar coordinates to "encode data".

The matrix values looked a lot like this, so I was wondering if Hill Space is related. I was making up some of it as I went along, and a pointer to the right area to learn more would be useful.
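Roughly, a toy version of what I mean (a from-memory sketch, not my actual code): encode a scalar as a polar angle, rotate the qubit state by that angle, and read the value back out of the measurement probability.

    import numpy as np

    # Toy sketch (hypothetical): encode x in [0, 1] as a polar angle
    # theta, apply an RY(theta) rotation to |0>, and recover x from
    # the probability of measuring |1>.
    def encode(x):
        return x * np.pi                      # theta in [0, pi]

    def ry(theta):                            # single-qubit RY gate
        c, s = np.cos(theta / 2), np.sin(theta / 2)
        return np.array([[c, -s], [s, c]])

    def decode(state):
        p1 = abs(state[1]) ** 2               # P(|1>) = sin^2(theta/2)
        return 2 * np.arcsin(np.sqrt(p1)) / np.pi

    state = ry(encode(0.37)) @ np.array([1.0, 0.0])  # start from |0>
    print(decode(state))                      # ~0.37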

yorwba•7h ago
The author seems a bit too excited about the discovery that the dot product of the vectors [a, b] and [1, 1] is a + b. I don't think the problem with getting neural nets to do arithmetic is that they literally can't add two coefficients of a vector, but that the input and output modalities are something different (e.g. digit sequences) and you want to use a generic architecture that can also do other tasks (e.g. text prediction in general). If you knew in advance that you just need to calculate a + b, you could skip the neural network altogether.
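Concretely, the whole "addition network" boils down to a fixed linear layer:

    import numpy as np

    # The dot product of [a, b] with weights [1, 1] is a + b.
    # Nothing here needs to be learned.
    w = np.array([1.0, 1.0])
    print(w @ np.array([3.25, 4.75]))  # 8.0, i.e. a + b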
tatjam•7h ago
I'm going to guess the main take-away is that the weights can be trained reliably if your transfer functions are sufficiently "stiff"? Not that you need training for the operations presented; anyone could choose the weights manually. But maybe it extends to more complex mathematical operations?

To be honest, it does feel a bit like Claude output (which the author states they used): it reads convincingly "academic" but seems like a drawn-out tautology. For example, it's no surprise its precision is the same as floating point, as it's essentially carrying out the exact same operations on the CPU.
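To make the precision point concrete (a toy sketch, not the article's actual construction): once a saturating parameterization like tanh pins a weight to exactly 1.0 in float64, the "network" is doing an ordinary CPU add, so its error floor is just machine epsilon.

    import numpy as np

    # A saturated tanh weight is exactly 1.0 in float64, so the layer
    # reduces to a plain float addition; the residual error is ordinary
    # float rounding, bounded by machine epsilon (~2.2e-16).
    w = np.tanh(np.array([50.0, 50.0]))         # saturates to [1., 1.]
    print(abs(w @ np.array([0.1, 0.2]) - 0.3))  # ~5.6e-17
    print(np.finfo(np.float64).eps)             # 2.220446049250313e-16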

Please do correct me if I'm wrong! I've not read the cited paper on "Neural Arithmetic Logic Units", which may clear some stuff up.

trueismywork•5h ago
The stiff-function observation is not new; it has existed in general linear solver theory for decades, if not centuries. But stiff functions do not scale as needed for training.