A robust, open-source framework for Spiking Neural Networks on low-end FPGAs

https://arxiv.org/abs/2507.07284
69•PaulHoule•6mo ago

Comments

kingstnap•6mo ago
I don't understand the point of spiking in the context of computer hardware.

Your energy costs are a function of the activity factor, i.e. how many 0-to-1 transitions you have.

If you wanted to be efficient, the correct thing to do would be to keep most voltages unchanged.
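
The activity-factor claim corresponds to the standard first-order CMOS dynamic-power model, P = α·C·V²·f. A quick sketch with purely illustrative numbers (not from the paper or this thread):

```python
def dynamic_power(alpha, cap_farads, vdd_volts, freq_hz):
    """First-order CMOS dynamic switching power: P = alpha * C * V^2 * f."""
    return alpha * cap_farads * vdd_volts ** 2 * freq_hz

# Cutting the activity factor by 10x cuts dynamic power by 10x,
# everything else held equal. Capacitance, voltage, and frequency
# below are made-up illustrative values.
p_busy = dynamic_power(0.20, 1e-9, 0.9, 100e6)  # many transitions
p_idle = dynamic_power(0.02, 1e-9, 0.9, 100e6)  # mostly quiet nets
assert abs(p_busy / p_idle - 10.0) < 1e-9
```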

What makes more sense to me is something like mixture-of-experts routing where you only update the activated experts. Stockfish does something similar with NNUE, which partially updates its network evaluation as board positions change.

imtringued•6mo ago
A spiking neural network encodes analog values through time-based encoding. The duration between two transitions encodes an analog value over a single connection, in a similar manner to PWM. You need fewer connections, and the gaps between transitions are larger.
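
A minimal sketch of that inter-spike-interval encoding (the interval range is illustrative, not taken from the paper):

```python
T_MIN, T_MAX = 1.0, 10.0  # illustrative interval bounds, in arbitrary time units

def encode(value):
    """Map a value in [0, 1] to the delay between two spikes."""
    assert 0.0 <= value <= 1.0
    return T_MIN + value * (T_MAX - T_MIN)

def decode(interval):
    """Recover the analog value from the inter-spike interval."""
    return (interval - T_MIN) / (T_MAX - T_MIN)

# One wire, two transitions, one analog value:
assert abs(decode(encode(0.25)) - 0.25) < 1e-12
```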

For those who don't know why this matters: transistors and all electrical structures, including wires, are tiny capacitors. For a transistor to switch from one state to another, it needs to charge or discharge as quickly as possible. This charging/discharging process costs energy, and the more you do it, the more energy is used.

A fully trained SNN does not change its synapses, which means that the voltages inside the routing hardware, which most likely dominate the energy costs by far, do not change. Meanwhile, classic ANNs have to perform the routing via GEMV over and over again.

npatrick04•6mo ago
This is a good paper exploring how computation with spiking neural networks is likely to work.

https://www.izhikevich.org/publications/spnet.htm

bob1029•6mo ago
It is fairly obvious to me that FPGAs and ASICs would do a really good job at optimizing the operation of a spiking neural network. I think the biggest challenge is not the operation of a SNN though. It's searching for them.

Iterating topology is way more powerful than iterating weights. As far as I am aware, FPGAs can only be reprogrammed a fixed # of times, so you won't get very far into the open sea before you run out of provisions. It doesn't matter how fast you can run the network if you can't find any useful instances of it.

The fastest thing we have right now for searching the space of SNNs is the x86/ARM CPU. You could try to build something bespoke, but it would probably start to look like the same thing after a while. Decades of out-of-order execution, prefetching, and branch-prediction optimizations go a very long way in making this stuff run fast. Proper, deterministic SNNs require global serialization of spiking events, which typically suggests a priority queue. These kinds of data structures and operational principles are not very compatible with the mass-scale GPU compute we have on hand. A tight L1 latency domain is critical for rapidly evaluating many candidates per unit time.
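
The priority-queue-based global serialization of spiking events can be sketched in a few lines. Topology, delays, and thresholds below are made-up illustrative values:

```python
import heapq

# neuron -> list of (target neuron, axonal delay)
delays = {0: [(1, 2.0)], 1: [(2, 1.0)]}
threshold = 1.0
potential = {0: 0.0, 1: 0.0, 2: 0.0}

events = [(0.0, 0)]  # (time, neuron): neuron 0 spikes at t=0
spike_log = []

while events:
    t, n = heapq.heappop(events)      # always handle the earliest spike,
    spike_log.append((t, n))          # giving a deterministic global order
    for target, delay in delays.get(n, []):
        potential[target] += 1.0      # unit synaptic weight
        if potential[target] >= threshold:
            potential[target] = 0.0   # reset membrane potential after firing
            heapq.heappush(events, (t + delay, target))

# spike_log is now [(0.0, 0), (2.0, 1), (3.0, 2)]
```

The heap is exactly the serialization bottleneck described above: every spike in the whole network funnels through one ordered queue.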

Of all the computational substrates available to us, spiking neural networks are probably the least friendly when it comes to practical implementation, but they also seem to offer the most interesting dynamics due to the sparsity and high dimensionality. I've seen tiny RNNs provide seemingly impossible performance in small-scale neuroevolution experiments, even with wild constraints like all connection weights being fixed to a global constant.

imtringued•6mo ago
>As far as I am aware, FPGAs can only be reprogrammed a fixed # of times, so you won't get very far into the open sea before you run out of provisions

That's not true unless you're talking about mask-programmed FPGAs, where the configuration is burned into the metal layers to avoid the silicon-area overhead of configuration memory. Even in that case the finite number is exactly one, because the FPGA comes preprogrammed out of the fab.

Almost every conventional FPGA stores its configuration in SRAM. This means you have the opposite problem: you need an extra SPI flash chip to store the FPGA configuration and program the FPGA every time it powers up.

The big problem with SNNs is that there is no easy way to train them natively. You end up training them like ANNs, with backpropagation, which means SNNs are just an exotic inference target and not a full platform for both training and inference.
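
One common flavor of that "exotic inference target" workflow is rate coding: a trained ANN activation is replayed as a spike train whose firing rate encodes the value. A toy sketch (illustrative only, not the method used in the paper):

```python
def to_spike_train(activation, steps=100):
    """Rate-code a value in [0, 1] as a binary spike train."""
    acc, spikes = 0.0, []
    for _ in range(steps):
        acc += activation
        if acc >= 1.0:       # integrate-and-fire style accumulator
            acc -= 1.0
            spikes.append(1)
        else:
            spikes.append(0)
    return spikes

def from_spike_train(spikes):
    """Decode the firing rate back into an approximate activation."""
    return sum(spikes) / len(spikes)

# The decoded rate approximates the original activation:
assert abs(from_spike_train(to_spike_train(0.37)) - 0.37) < 0.02
```

The training still happens in the ANN domain with backpropagation; the spiking network only reproduces the already-learned function.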

checker659•6mo ago
An FPGA is programmed every time it’s turned on.

b112•6mo ago
That weird Russian hacker guy I arrested 4 years ago wasn't making this up?

He had hacked 60%+ of the world's IoT at one point. Largest botnet we ever saw. Everyone's devices had developed weird delayed connectivity issues, variable pingtimes, random packet losses with re-transmits.

He kept saying he was trying to bring about emergent AGI, blathering on about spiking and variable network delay between IoT nodes, clusters, and blah blah blah.

"There's 10B IoT devices already!". He was frantic. Wild eyed. "I have to finish this".

Well I arrested his ass, and this won't work as a defense, Sergey!

  -- Random NSA agent