
RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
2•oxxoxoxooo•2m ago•0 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•3m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
2•goranmoomin•7m ago•0 comments

Ask HN: Has the Downfall of SaaS Started?

3•throwaw12•8m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•9m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•12m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
2•myk-e•14m ago•3 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•15m ago•1 comment

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
1•1vuio0pswjnm7•17m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
1•1vuio0pswjnm7•19m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•21m ago•1 comment

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•24m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•29m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•30m ago•1 comment

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•34m ago•1 comment

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•46m ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•48m ago•1 comment

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•48m ago•1 comment

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•1h ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•1h ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•1h ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•1h ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•1h ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•1h ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•1h ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
2•basilikum•1h ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•1h ago•1 comment

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•1h ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
4•throwaw12•1h ago•3 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•1h ago•2 comments

Reverse engineering a neural network's clever solution to binary addition (2023)

https://cprimozic.net/blog/reverse-engineering-a-small-neural-network/
87•Ameo•3mo ago

Comments

IlikeKitties•3mo ago
>As I mentioned before, I had imagined the network learning some fancy combination of logic gates to perform the whole addition process digitally, similarly to how a binary adder operates. This trick is yet another example of neural networks finding unexpected ways to solve problems.

My intuition is that this solution is one that gradient descent can approach incrementally, which is why it feels unintuitive: we think of solutions as all-or-nothing and look for complete ones.

arjvik•3mo ago
The more interesting question is: is it even possible to learn the logic-gate solution through gradient descent?
scarmig•3mo ago
You could riff off an approach similar to https://google-research.github.io/self-organising-systems/di...
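
(The approach linked above relaxes discrete gates into differentiable ones. As a toy illustration of that idea, with an invented four-gate basis rather than the paper's formulation, gradient descent on a softmax mixture of truth tables picks out the right gate:)

    import numpy as np

    # Truth-table outputs for the input pairs (0,0), (0,1), (1,0), (1,1)
    gates = np.array([
        [0, 0, 0, 1],   # AND
        [0, 1, 1, 1],   # OR
        [0, 1, 1, 0],   # XOR
        [1, 1, 1, 0],   # NAND
    ], dtype=float)
    target = np.array([0, 1, 1, 0], dtype=float)   # the gate to learn: XOR

    w = np.zeros(4)                                # logits over gate choices
    for _ in range(500):
        p = np.exp(w) / np.exp(w).sum()            # softmax: a "soft" gate choice
        pred = p @ gates                           # blended truth table
        dL_dp = 2 * (pred - target) @ gates.T / 4  # MSE gradient w.r.t. p
        w -= 5.0 * p * (dL_dp - p @ dL_dp)         # chain rule through softmax
    print(["AND", "OR", "XOR", "NAND"][np.argmax(w)])  # -> XOR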
elteto•3mo ago
Right, binary gates are discrete elements, but neural networks operate on a continuous domain.

I'm reminded of the Feynman anecdote from when he went to work for Thinking Machines: they gave him a task related to figuring out routing in the machine's CPU network, which is a discrete problem. He came back with a solution that used partial differential equations, which surprised everyone.

rnhmjoj•3mo ago
Original submission: https://news.ycombinator.com/item?id=34399142
drougge•3mo ago
This seems interesting, but I got stuck fairly early on when I read "all 32,385 possible input combinations". There are two 8-bit numbers, 16 totally independent bits. That's 65_536 combinations. 32_385 is close to half that, but not quite. Looking at it in binary it's 01111110_10000001, i.e. two 8-bit words that are the inverse of each other. How was this number arrived at, and why?

Looking further on, there's also a strange DAC that gives the lowest resistance to the least significant bit, thus making it the biggest contributor to the output. Very confusing.

dahart•3mo ago
Is that the number of adds that don’t overflow an 8-bit result?

On that hunch, I just checked and I get 32896.

Edit: if I exclude either input being zero, I get 32385.

You also get the same number when including input zeros but excluding results above 253. But I'd bet on the author's reason being filtering of input zeros. Maybe the NN does something bad with zeros, or maybe it can't learn them for some reason.

jtsiskin•3mo ago
Interesting puzzle. 32385 is 255 pick 2. My guess would be that, to hopefully make interpretation easier, they always had the larger number on one side. So (1,2) but not (2,1). And also 0 wasn't included. So perhaps their generation loop looks like [(i, j) for i in range(255, 0, -1) for j in range(i - 1, 0, -1)]
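
(A quick brute-force check in Python, assuming two 8-bit unsigned inputs, reproduces all three counts floated in this sub-thread:)

    pairs = [(i, j) for i in range(256) for j in range(256)]

    # dahart's first hunch: ordered pairs whose sum fits in 8 bits
    print(sum(1 for i, j in pairs if i + j <= 255))              # 32896

    # dahart's edit: same, but excluding zero inputs
    print(sum(1 for i, j in pairs if i and j and i + j <= 255))  # 32385

    # jtsiskin's guess: distinct nonzero values, larger one first
    print(len([(i, j) for i in range(255, 0, -1)
               for j in range(i - 1, 0, -1)]))                   # 32385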
joshribakoff•3mo ago
You are potentially conflating combinations with permutations.
bob1029•3mo ago
> While playing around with this setup, I tried re-training the network with the activation function for the first layer replaced with sin(x) and it ends up working pretty much the same way.

There is some evidence that the activation functions and weights can be arbitrarily selected, provided you have a way to evolve the topology of the network; see Weight Agnostic Neural Networks:

https://arxiv.org/abs/1906.04358
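
(The sin(x) retraining the quote refers to is easy to try in spirit. A minimal sketch, assuming the article's setup of two random 8-bit inputs and a wrapped 8-bit sum; the architecture here is invented, not the author's:)

    import torch

    def batch(n=512):
        a = torch.randint(0, 256, (n,))
        b = torch.randint(0, 256, (n,))
        s = (a + b) % 256                          # addition with wrapping
        to_bits = lambda x: ((x.unsqueeze(1) >> torch.arange(8)) & 1).float()
        return torch.cat([to_bits(a), to_bits(b)], dim=1), to_bits(s)

    class SinAdder(torch.nn.Module):
        def __init__(self, hidden=32):
            super().__init__()
            self.l1 = torch.nn.Linear(16, hidden)
            self.l2 = torch.nn.Linear(hidden, 8)
        def forward(self, x):
            return self.l2(torch.sin(self.l1(x)))  # sin(x) as the activation

    model = SinAdder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(5000):
        x, y = batch()
        loss = torch.nn.functional.binary_cross_entropy_with_logits(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()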

anon291•3mo ago
Very nice. I think people don't sufficiently appreciate the correspondence between linear algebra, differential equations, and wave behavior.

Roughly speaking, it seems the network is essentially converting binary digits to orthogonal basis functions and then manipulating those basis functions, followed finally by a linear transformation back into the binary-digit space.

YeGoblynQueenne•3mo ago
>> I created training data by generating random 8-bit unsigned integers and adding them together with wrapping.

So, binary addition in [0, 255] (base 10), with wrapping. Did the author try the trained network on numbers outside the training range?

It's one thing to find that your neural net discovered this one neat trick for binary addition with 8-bit numbers, and something completely different to find that it figured out binary addition in the general case.

How hard the latter would be... depends. What were the activation functions? E.g. it is quite possible to learn how to add two (arbitrary, base-10) integers with a simple regression for no other reason than regression being itself based on addition (ok, summation).
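
(The regression remark, made concrete: a plain least-squares fit on 8-bit sums extrapolates to arbitrarily large numbers, simply because the fitted model is itself a weighted sum. A sketch with invented data:)

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.integers(0, 256, size=(1000, 2)).astype(float)  # 8-bit training range
    y = X.sum(axis=1)

    w, *_ = np.linalg.lstsq(X, y, rcond=None)  # fit y = w1*a + w2*b
    print(w)                                   # ~ [1. 1.]
    print(np.array([1e9, 2e9]) @ w)            # ~ 3e9, far outside training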

xg15•3mo ago
This is really cool and I hope there will be more experiments like this.

My takeaway is also that we don't really have a good intuition yet for how the internal representations of neural networks "work", or what kinds of internal representations can even be learned through SGD + backpropagation. (And also how those representations depend on the architecture.)

Like in this case, where the author first imagined the network would learn a network of logic gates, but the end result was more like an analog circuit.

It's possible to construct the "binary adder" network the author imagined "from scratch" by handpicking the weights, as sketched below. But it would be an interesting question whether that solution could also be learned, or whether SGD would always produce an "analog" solution like this one.
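
(One way to do that handpicking, as a sketch; the thresholds are illustrative, not taken from the article. A full adder built from hard-threshold neurons, rippled across the 8 bits, with wrapping falling out of dropping the final carry:)

    step = lambda x: 1.0 if x > 0 else 0.0   # hard-threshold "activation"

    def full_adder(a, b, c):
        ge1 = step(a + b + c - 0.5)          # at least one input high
        ge2 = step(a + b + c - 1.5)          # at least two high -> carry out
        ge3 = step(a + b + c - 2.5)          # all three high
        return step(ge1 - ge2 + ge3 - 0.5), ge2   # odd parity = sum bit

    def add8(a_bits, b_bits):                # LSB first; dropping the final
        c, out = 0.0, []                     # carry wraps the sum mod 256
        for a, b in zip(a_bits, b_bits):
            s, c = full_adder(a, b, c)
            out.append(s)
        return out

    bits = lambda n: [(n >> i) & 1 for i in range(8)]
    assert add8(bits(200), bits(100)) == [float(x) for x in bits(44)]  # 300 % 256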

bgnn•3mo ago
The second step, passing the analog output through shifted tanh functions, is implementing an analog-to-digital converter (ADC). This type of ADC was common back in the BJT days.

So: DAC + summation in the analog domain + ADC is what the NN is doing.
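
(That decomposition can be hand-wired end to end. A sketch with illustrative weights, not the network's learned ones: a DAC via powers-of-two input weights, an analog sum, then an ADC in which each output bit is an alternating sum of shifted, steep tanh steps:)

    import numpy as np

    def step(x, k=80.0):
        return 0.5 * (np.tanh(k * x) + 1.0)  # steep tanh ~ unit step

    def add8(a, b):
        bits = lambda n: [(n >> i) & 1 for i in range(8)]
        dac = 2.0 ** np.arange(8)            # DAC weights, LSB first
        s = np.dot(bits(a), dac) + np.dot(bits(b), dac)  # analog sum in [0, 510]
        out = []
        for i in range(8):
            # Bit i of the sum toggles at every multiple of 2^i, so it is an
            # alternating sum of shifted steps. Bits 0..7 ignore the carry
            # into bit 8, which is exactly the wrapping behavior.
            m = np.arange(1, 510 // 2**i + 1)
            out.append(round(np.sum((-1.0) ** (m + 1) * step(s - m * 2.0**i + 0.5))))
        return out

    assert add8(200, 100) == [(44 >> i) & 1 for i in range(8)]  # (200+100) % 256 = 44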
