frontpage.

We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
177•ColinWright•1h ago•161 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
124•AlexeyBrin•7h ago•24 comments

I Write Games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
20•valyala•2h ago•7 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
16•valyala•2h ago•1 comment

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
65•vinhnx•5h ago•9 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
153•alephnerd•2h ago•105 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
831•klaussilveira•22h ago•250 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
57•thelok•4h ago•8 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
117•1vuio0pswjnm7•8h ago•148 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1060•xnx•1d ago•612 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
79•onurkanbkrc•7h ago•5 comments

Brookhaven Lab's RHIC Concludes 25-Year Run with Final Collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
4•gnufx•55m ago•1 comment

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
486•theblazehen•3d ago•177 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
212•jesperordrup•12h ago•72 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
567•nar001•6h ago•258 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
225•alainrk•6h ago•354 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
39•rbanffy•4d ago•7 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
9•momciloo•2h ago•0 comments

History and Timeline of the Proco Rat Pedal (2021)

https://web.archive.org/web/20211030011207/https://thejhsshow.com/articles/history-and-timeline-o...
19•brudgers•5d ago•4 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
8•languid-photic•3d ago•1 comment

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
29•marklit•5d ago•3 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
114•videotopia•4d ago•32 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
77•speckx•4d ago•82 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
274•isitcontent•22h ago•38 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
201•limoce•4d ago•112 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
287•dmpetrov•22h ago•155 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
22•sandGorgon•2d ago•12 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
557•todsacerdoti•1d ago•269 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
155•matheusalmeida•2d ago•48 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
427•ostacke•1d ago•111 comments

A robust, open-source framework for Spiking Neural Networks on low-end FPGAs

https://arxiv.org/abs/2507.07284
69•PaulHoule•6mo ago

Comments

kingstnap•6mo ago
I don't understand the point of spiking in the context of computer hardware.

Your energy costs are a function of the activity factor, which is how many 0-to-1 transitions you have.
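
For reference, the standard CMOS dynamic-power relation (not from the comment) ties power directly to that activity factor:

  P_{dyn} \approx \alpha \cdot C \cdot V_{dd}^2 \cdot f

where \alpha is the fraction of nodes toggling per cycle, C the switched capacitance, V_{dd} the supply voltage, and f the clock frequency.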

If you want to be efficient, the correct thing to do is to keep most voltages unchanged.

What makes more sense to me is something like mixture-of-experts routing where you only update the activated experts. Stockfish does something similar with its partially updated network (NNUE) for board positions.
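
As an illustration of that partial-update trick, here's a hypothetical numpy sketch (not Stockfish's actual NNUE code): when one binary input feature flips, the first-layer accumulator is patched with a single weight column instead of redoing the full matrix-vector product.

  import numpy as np

  rng = np.random.default_rng(0)
  n_in, n_out = 768, 256                  # made-up layer sizes
  W = rng.standard_normal((n_out, n_in)).astype(np.float32)
  x = (rng.random(n_in) < 0.1).astype(np.float32)  # sparse binary features

  i = 42
  x[i] = 0.0
  acc = W @ x        # full GEMV, computed once up front

  x[i] = 1.0         # one feature flips on...
  acc += W[:, i]     # ...so patch one column: O(n_out) vs O(n_out * n_in)

  assert np.allclose(acc, W @ x, atol=1e-3)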

imtringued•6mo ago
A spiking neural network encodes analog values through time-based encoding: the duration between two transitions carries an analog value over a single connection, much like PWM. You need fewer connections, and the gaps between transitions are larger.
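
A toy version of that interval coding (illustrative only; the constants are made up): the value rides on the gap between two spikes, so a single wire carrying two transitions replaces a multi-bit bus.

  T_MIN, T_MAX = 1e-3, 10e-3      # allowed inter-spike intervals, seconds

  def encode(value):              # value in [0, 1] -> spike-pair interval
      return T_MIN + value * (T_MAX - T_MIN)

  def decode(interval):           # measured interval -> value in [0, 1]
      return (interval - T_MIN) / (T_MAX - T_MIN)

  assert abs(decode(encode(0.73)) - 0.73) < 1e-12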

For those who don't know why this matters: transistors and all electrical devices, including wires, are tiny capacitors. For a transistor to switch from one state to another, it needs to charge or discharge as quickly as possible. This charging/discharging process costs energy, and the more you do it, the more energy is used.

A fully trained SNN does not change its synapses, which means that the voltages inside the routing hardware, which most likely dominates the energy costs by far, do not change. Meanwhile, classic ANNs have to perform the routing via GEMV over and over again.

npatrick04•6mo ago
This is a good paper exploring how computation with spiking neural networks is likely to work.

https://www.izhikevich.org/publications/spnet.htm

bob1029•6mo ago
It is fairly obvious to me that FPGAs and ASICs would do a really good job at optimizing the operation of a spiking neural network. I think the biggest challenge is not the operation of SNNs, though. It's searching for them.

Iterating topology is way more powerful than iterating weights. As far as I am aware, FPGAs can only be reprogrammed a fixed # of times, so you won't get very far into the open sea before you run out of provisions. It doesn't matter how fast you can run the network if you can't find any useful instances of it.

The fastest thing we have right now for searching the space of SNNs is the x86/ARM CPU. You could try to build something bespoke, but it would probably start to look like the same thing after a while. Decades of OoO, prefetching, and branch-prediction optimizations go a very long way in making this stuff run fast. Proper, deterministic SNNs require global serialization of spiking events, which typically suggests use of a priority queue. These kinds of data structures and operational principles are not very compatible with the mass-scale GPU compute we have on hand. A tight L1 latency domain is critical for rapidly evaluating many candidates per unit time.
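
A minimal sketch of that event-driven style (hypothetical three-neuron net, not from the paper): a heap keyed by arrival time globally serializes spike deliveries, which is exactly the priority-queue pattern described above. Leak and refractory behavior are omitted.

  import heapq

  # neuron -> [(downstream, axonal_delay, weight)]; made-up topology
  synapses = {0: [(1, 1.0, 0.6), (1, 2.0, 0.6)], 1: [(2, 1.5, 1.2)], 2: []}
  potential = {0: 0.0, 1: 0.0, 2: 0.0}
  THRESHOLD = 1.0

  def fire(t, neuron, queue):
      print(f"t={t:.1f}  neuron {neuron} spikes")
      for downstream, delay, weight in synapses[neuron]:
          heapq.heappush(queue, (t + delay, downstream, weight))

  queue = []               # min-heap of (arrival_time, neuron, weight)
  fire(0.0, 0, queue)      # seed: neuron 0 spikes at t=0
  while queue:
      t, neuron, weight = heapq.heappop(queue)
      potential[neuron] += weight
      if potential[neuron] >= THRESHOLD:
          potential[neuron] = 0.0   # reset on firing
          fire(t, neuron, queue)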

Of all the computational substrates available to us, spiking neural networks are probably the least friendly when it comes to practical implementation, but they also seem to offer the most interesting dynamics due to the sparsity and high dimensionality. I've seen tiny RNNs provide seemingly impossible performance in small-scale neuroevolution experiments, even with wild constraints like all connection weights being fixed to a global constant.

imtringued•6mo ago
>As far as I am aware, FPGAs can only be reprogrammed a fixed # of times, so you won't get very far into the open sea before you run out of provisions

That's not true unless you're talking about mask-programmed FPGAs, where the configuration is burned into the metal layers to avoid the silicon-area overhead of configuration memory; and even in that case the finite number is exactly one, because the FPGA comes preprogrammed out of the fab.

Almost every conventional FPGA stores its configuration in SRAM. This means you have the opposite problem: you need an external SPI flash chip to hold the configuration and program the FPGA every time it starts up.

The big problem with SNNs is that there is no easy way to train them directly. In practice you train them like ANNs with backpropagation, which means SNNs end up as just an exotic inference target and not a full platform for both training and inference.
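
The usual workaround behind "train them like ANNs" is a surrogate gradient: a hard threshold on the forward pass, a smooth stand-in derivative on the backward pass. A generic single-step numpy sketch (not from this thread; the fast-sigmoid surrogate and all constants are illustrative):

  import numpy as np

  def spike(v, thresh=1.0):
      return (v >= thresh).astype(np.float32)   # Heaviside: 0/1 spike

  def surrogate_grad(v, thresh=1.0, beta=5.0):
      # smooth stand-in for the Heaviside's zero-almost-everywhere derivative
      return beta / (1.0 + beta * np.abs(v - thresh)) ** 2

  rng = np.random.default_rng(0)
  w = rng.standard_normal(4).astype(np.float32)
  x = rng.random(4).astype(np.float32)
  target = 1.0

  v = w @ x                    # membrane potential, one timestep
  s = spike(v)                 # non-differentiable forward pass
  grad_w = (s - target) * surrogate_grad(v) * x   # chain rule via surrogate
  w -= 0.1 * grad_w            # one SGD step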

checker659•6mo ago
An FPGA is programmed every time it's turned on.

b112•6mo ago
That weird Russian hacker guy I arrested 4 years ago wasn't making this up?

He had hacked 60%+ of the world's IoT at one point. Largest botnet we ever saw. Everyone's devices had developed weird delayed connectivity issues, variable ping times, random packet loss with retransmits.

He kept saying he was trying to bring about emergent AGI, blathering on about spiking and variable network delay between IoT nodes, clusters, and blah blah blah.

"There's 10B IoT devices already!". He was frantic. Wild eyed. "I have to finish this".

Well, I arrested his ass, and this won't work as a defense, Segrey!

  -- Random NSA agent