frontpage.

Show HN: Knowledge-Bank

https://github.com/gabrywu-public/knowledge-bank
1•gabrywu•4m ago•0 comments

Show HN: The Codeverse Hub Linux

https://github.com/TheCodeVerseHub/CodeVerseLinuxDistro
3•sinisterMage•5m ago•0 comments

Take a trip to Japan's Dododo Land, the most irritating place on Earth

https://soranews24.com/2026/02/07/take-a-trip-to-japans-dododo-land-the-most-irritating-place-on-...
2•zdw•5m ago•0 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
4•bookofjoe•6m ago•1 comments

BookTalk: A Reading Companion That Captures Your Voice

https://github.com/bramses/BookTalk
1•_bramses•7m ago•0 comments

Is AI "good" yet? – tracking HN's sentiment on AI coding

https://www.is-ai-good-yet.com/#home
1•ilyaizen•7m ago•1 comments

Show HN: Amdb – Tree-sitter based memory for AI agents (Rust)

https://github.com/BETAER-08/amdb
1•try_betaer•8m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
2•anhxuan•8m ago•0 comments

Show HN: Seedance 2.0 Release

https://seedancy2.com/
2•funnycoding•9m ago•0 comments

Leisure Suit Larry's Al Lowe on model trains, funny deaths and Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
1•thelok•9m ago•0 comments

Towards Self-Driving Codebases

https://cursor.com/blog/self-driving-codebases
1•edwinarbus•9m ago•0 comments

VCF West: Whirlwind Software Restoration – Guy Fedorkow [video]

https://www.youtube.com/watch?v=YLoXodz1N9A
1•stmw•10m ago•1 comments

Show HN: COGext – A minimalist, open-source system monitor for Chrome (<550KB)

https://github.com/tchoa91/cog-ext
1•tchoa91•11m ago•1 comments

FOSDEM 26 – My Hallway Track Takeaways

https://sluongng.substack.com/p/fosdem-26-my-hallway-track-takeaways
1•birdculture•11m ago•0 comments

Show HN: Env-shelf – Open-source desktop app to manage .env files

https://env-shelf.vercel.app/
1•ivanglpz•15m ago•0 comments

Show HN: Almostnode – Run Node.js, Next.js, and Express in the Browser

https://almostnode.dev/
1•PetrBrzyBrzek•15m ago•0 comments

Dell support (and hardware) is so bad, I almost sued them

https://blog.joshattic.us/posts/2026-02-07-dell-support-lawsuit
1•radeeyate•16m ago•0 comments

Project Pterodactyl: Incremental Architecture

https://www.jonmsterling.com/01K7/
1•matt_d•16m ago•0 comments

Styling: Search-Text and Other Highlight-Y Pseudo-Elements

https://css-tricks.com/how-to-style-the-new-search-text-and-other-highlight-pseudo-elements/
1•blenderob•18m ago•0 comments

Crypto firm accidentally sends $40B in Bitcoin to users

https://finance.yahoo.com/news/crypto-firm-accidentally-sends-40-055054321.html
1•CommonGuy•19m ago•0 comments

Magnetic fields can change carbon diffusion in steel

https://www.sciencedaily.com/releases/2026/01/260125083427.htm
1•fanf2•19m ago•0 comments

Fantasy football that celebrates great games

https://www.silvestar.codes/articles/ultigamemate/
1•blenderob•19m ago•0 comments

Show HN: Animalese

https://animalese.barcoloudly.com/
1•noreplica•20m ago•0 comments

StrongDM's AI team build serious software without even looking at the code

https://simonwillison.net/2026/Feb/7/software-factory/
3•simonw•20m ago•0 comments

John Haugeland on the failure of micro-worlds

https://blog.plover.com/tech/gpt/micro-worlds.html
1•blenderob•21m ago•0 comments

Show HN: Velocity - Free/Cheaper Linear Clone but with MCP for agents

https://velocity.quest
2•kevinelliott•22m ago•2 comments

Corning Invented a New Fiber-Optic Cable for AI and Landed a $6B Meta Deal [video]

https://www.youtube.com/watch?v=Y3KLbc5DlRs
1•ksec•23m ago•0 comments

Show HN: XAPIs.dev – Twitter API Alternative at 90% Lower Cost

https://xapis.dev
2•nmfccodes•23m ago•1 comments

Near-Instantly Aborting the Worst Pain Imaginable with Psychedelics

https://psychotechnology.substack.com/p/near-instantly-aborting-the-worst
2•eatitraw•30m ago•0 comments

Show HN: Nginx-defender – realtime abuse blocking for Nginx

https://github.com/Anipaleja/nginx-defender
2•anipaleja•30m ago•0 comments

A robust, open-source framework for Spiking Neural Networks on low-end FPGAs

https://arxiv.org/abs/2507.07284
69•PaulHoule•6mo ago

Comments

kingstnap•6mo ago
I don't understand the point of spiking in the context of computer hardware.

Your energy costs are a function of the activity factor, which is how many 0-to-1 transitions you have.

If you wanted to be efficient, the correct thing to do is have most voltages remain unchanged.
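That energy argument can be put in numbers with the standard dynamic-power formula, P = α·C·V²·f. A minimal sketch (the capacitance, voltage, and clock values below are illustrative, not measurements of any real chip):

```python
# Dynamic switching power: P = alpha * C * V^2 * f
# alpha = activity factor (fraction of nodes toggling per cycle)
# C     = total switched capacitance, V = supply voltage, f = clock rate
def dynamic_power(alpha, c_farads, v_volts, f_hz):
    return alpha * c_farads * v_volts**2 * f_hz

# Illustrative numbers: halving the activity factor halves dynamic power,
# with voltage and clock held fixed.
busy = dynamic_power(alpha=0.2, c_farads=1e-9, v_volts=0.8, f_hz=200e6)
idle = dynamic_power(alpha=0.1, c_farads=1e-9, v_volts=0.8, f_hz=200e6)
```

Since α enters linearly, keeping most voltages unchanged is exactly the lever this comment describes.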

What makes more sense to me is something like mixture-of-experts routing, but where you only update the activated experts. Stockfish does something similar with partial updating of its NN for board positions.
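The partial-update idea (as in Stockfish's incrementally updated evaluation network) can be sketched: when one binary input feature flips, you add or subtract a single weight column from a cached accumulator instead of recomputing the full matrix-vector product. All shapes and names here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))      # first-layer weights: 8 features -> 4 units
x = np.zeros(8)
x[2] = 1.0                       # sparse binary feature vector
acc = W.T @ x                    # cached accumulator (pre-activation)

# Feature 2 turns off, feature 5 turns on: update incrementally,
# touching only two weight columns instead of redoing W.T @ x.
acc -= W[2]
acc += W[5]
x[2], x[5] = 0.0, 1.0

full = W.T @ x                   # reference: full recomputation
```

The incremental path does O(changed features × units) work per move instead of O(all features × units).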

imtringued•6mo ago
A spiking neural network encodes analog values through time-based encoding. The duration between two transitions encodes an analog value over a single connection, in a similar manner to PWM. You need fewer connections, and the gaps between transitions are larger.
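A toy version of that time-based encoding, mapping an analog value in [0, 1] to the gap between two spike times (the interval scale is made up, and real schemes are far more elaborate):

```python
# Encode an analog value as the interval between two spikes,
# then decode it back from the interval -- a toy PWM-like scheme.
T_MAX = 10.0  # hypothetical maximum interspike interval (ms)

def encode(value):
    """Map a value in [0, 1] to a pair of spike times."""
    t0 = 0.0
    return t0, t0 + value * T_MAX

def decode(t0, t1):
    """Recover the value from the interspike interval."""
    return (t1 - t0) / T_MAX

t0, t1 = encode(0.25)
recovered = decode(t0, t1)
```

One wire, two transitions, one analog value: that is the connection-count and activity-factor saving the comment is pointing at.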

For those who don't know why this matters: transistors and all electrical devices, including wires, are tiny capacitors. For a transistor to switch from one state to another, it needs to charge or discharge as quickly as possible. This charging/discharging process costs energy, and the more you do it, the more energy is used.

A fully trained SNN does not change its synapses, which means that the voltages inside the routing hardware, which most likely dominate the energy costs by far, do not change. Meanwhile, classic ANNs have to perform the routing via GEMV over and over again.

npatrick04•6mo ago
This is a good paper exploring how computation with spiking neural networks is likely to work.

https://www.izhikevich.org/publications/spnet.htm

bob1029•6mo ago
It is fairly obvious to me that FPGAs and ASICs would do a really good job at optimizing the operation of a spiking neural network. I think the biggest challenge is not the operation of a SNN though. It's searching for them.

Iterating topology is way more powerful than iterating weights. As far as I am aware, FPGAs can only be reprogrammed a fixed # of times, so you won't get very far into the open sea before you run out of provisions. It doesn't matter how fast you can run the network if you can't find any useful instances of it.

The fastest thing we have right now for searching the space of SNNs is the x86/ARM CPU. You could try to build something bespoke, but it would probably start to look like the same thing after a while. Decades of OoO, prefetching and branch prediction optimizations go a very long way in making this stuff run fast. Proper, deterministic SNNs have a requirement for global serialization of spiking events, which typically suggests use of a priority queue. These kinds of data structures and operational principles are not very compatible with mass scale GPU compute we have on hand. A tight L1 latency domain is critical for rapidly evaluating many candidates per unit time.
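The global-serialization point can be illustrated with a minimal event-driven simulator that drains spike events from a priority queue in timestamp order. This is a sketch under simplifying assumptions (uniform delay, reset-to-zero neurons, no leak), not the paper's design:

```python
import heapq

# Minimal event-driven spiking network: spikes are (time, neuron) events
# popped from a priority queue in global timestamp order.
weights = {0: [(1, 1.2), (2, 0.6)],   # adjacency: src -> [(dst, weight)]
           1: [(2, 0.6)],
           2: []}
threshold = 1.0
delay = 1.0                            # uniform synaptic delay
potential = {n: 0.0 for n in weights}  # membrane potentials

events = [(0.0, 0)]                    # seed spike on neuron 0 at t=0
fired = []
while events:
    t, src = heapq.heappop(events)     # globally serialized by timestamp
    fired.append((t, src))
    for dst, w in weights[src]:
        potential[dst] += w
        if potential[dst] >= threshold:
            potential[dst] = 0.0       # reset after firing
            heapq.heappush(events, (t + delay, dst))
```

The heap pop/push pattern is exactly the pointer-chasing, branchy, latency-bound workload that favors a fat OoO core's L1 over a GPU's wide SIMD lanes.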

Of all the computational substrates available to us, spiking neural networks are probably the least friendly when it comes to practical implementation, but they also seem to offer the most interesting dynamics due to the sparsity and high dimensionality. I've seen tiny RNNs provide seemingly impossible performance in small-scale neuroevolution experiments, even with wild constraints like all connection weights being fixed to a global constant.

imtringued•6mo ago
>As far as I am aware, FPGAs can only be reprogrammed a fixed # of times, so you won't get very far into the open sea before you run out of provisions

That's not true unless you're talking about mask-programmed FPGAs, where the configuration is burned into the metal layers to avoid the silicon-area overhead of configuration memory. And even in that case the finite number is exactly one, because the FPGA comes preprogrammed out of the fab.

Almost every conventional FPGA stores its configuration in SRAM. This means you have the opposite problem: you need an extra SPI flash chip to store your FPGA configuration and program the FPGA every time it starts up.

The big problem with SNNs is that there is no easy way to train them. You end up training them like ANNs, with backpropagation, which means SNNs are just an exotic inference target and not a full platform for both training and inference.

checker659•6mo ago
An FPGA is programmed every time it’s turned on.
b112•6mo ago
That weird Russian hacker guy I arrested 4 years ago wasn't making this up?

He had hacked 60%+ of the world's IoT devices at one point. Largest botnet we ever saw. Everyone's devices had developed weird delayed connectivity issues, variable ping times, random packet losses with retransmits.

He kept saying he was trying to bring about emergent AGI, blathering on about spiking and variable network delay between IoT nodes, clusters, and blah blah blah.

"There's 10B IoT devices already!". He was frantic. Wild eyed. "I have to finish this".

Well, I arrested his ass, and this won't work as a defense, Sergey!

  -- Random NSA agent