frontpage.

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
193•theblazehen•2d ago•56 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
679•klaussilveira•14h ago•203 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
954•xnx•20h ago•552 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
125•matheusalmeida•2d ago•33 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
25•kaonwarb•3d ago•21 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
62•videotopia•4d ago•2 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
235•isitcontent•15h ago•25 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
39•jesperordrup•5h ago•17 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
227•dmpetrov•15h ago•121 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
332•vecti•17h ago•145 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
499•todsacerdoti•22h ago•243 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
384•ostacke•21h ago•96 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
360•aktau•21h ago•183 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
292•eljojo•17h ago•182 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
21•speckx•3d ago•10 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
413•lstoll•21h ago•279 comments

ga68, the GNU Algol 68 Compiler – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
6•matt_d•3d ago•1 comment

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
20•bikenaga•3d ago•10 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
66•kmm•5d ago•9 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
93•quibono•4d ago•22 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
260•i5heu•17h ago•202 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
33•romes•4d ago•3 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
38•gmays•10h ago•13 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1073•cdrnsf•1d ago•459 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
60•gfortaine•12h ago•26 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
291•surprisetalk•3d ago•43 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
150•vmatsiiako•19h ago•71 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
8•1vuio0pswjnm7•1h ago•0 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
154•SerCe•10h ago•144 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
187•limoce•3d ago•102 comments

GHz spiking neuromorphic photonic chip with in-situ training

https://arxiv.org/abs/2506.14272
115•juanviera23•6mo ago

Comments

rf15•6mo ago
Appreciating that not everyone tries to optimise for LLMs and we are still doing things like this. If you're looking at HN alone, it sometimes feels like the hype could drown out everything else.
danielbln•6mo ago
There is massive hype, no doubt about it, but let's also not forget that LLMs have basically solved NLP, are a step change in many dimensions, and are disrupting and changing fields like software engineering like nothing else before them.

So I hear you, but on the flip side we _should_ be reading a lot about LLMs here, as they have a direct impact on the work that most of us do.

That said, seeing other papers pop up that are not related to transformer based networks is appreciated.

larodi•6mo ago
Thank you, brother. Besides, not all that goes on HN is strictly LLM; I really don't know why the scare.
karanveer•6mo ago
I couldn't agree more.
msgodel•6mo ago
It's just a single linear layer, and it's not clear to me that the technology is capable of anything more. If I'm reading it correctly, even running the model forward couldn't use the technology: they had to record the weights and do it the old-fashioned way.
roflmaostc•6mo ago
Would you have discredited early AI work because they could only train and compute a couple of weights?

This is about first prototypes and scaling is often easier than the basic principle.

msgodel•6mo ago
Is this actually capable of propagating the gradient and training more complex layers though?

A lot of these novel AI accelerators run into problems like that because they're not capable of general purpose computing. A good example of that are the boltzman machines on Dwave's stuff. Yeah it can do that but it can only do that because the machine is only capable of doing QUBO.
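For context on the gradient question: software SNN training typically sidesteps the non-differentiable spike with a surrogate gradient, i.e. a hard threshold on the forward pass and a smooth stand-in derivative on the backward pass. A minimal sketch (the threshold, `beta`, and the fast-sigmoid surrogate are illustrative choices, not from the paper):

```python
import numpy as np

def spike_forward(v, thresh=1.0):
    # Forward pass: a hard threshold, non-differentiable at v == thresh.
    return (v >= thresh).astype(float)

def spike_surrogate_grad(v, thresh=1.0, beta=5.0):
    # Backward pass: derivative of a "fast sigmoid", used in place of the
    # true (zero almost everywhere) derivative of the threshold.
    return beta / (1.0 + beta * np.abs(v - thresh)) ** 2

v = np.linspace(0.0, 2.0, 5)
print(spike_forward(v))         # hard spikes: 0.0 or 1.0
print(spike_surrogate_grad(v))  # smooth surrogate, peaked at the threshold
```

Whether a photonic substrate can implement the backward half of this trick is exactly the open question being raised here.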

roflmaostc•6mo ago
For inference we do not care about training, right?

But if we could make cheaper inference machines available, everyone would profit. Don't LLMs use more energy on inference than on training these days?

fjfaase•6mo ago
Nice that they can do the processing in the GHz range, but from some pictures in the paper, it seems the system has only 60 'cells', which is rather low compared to the number of cells found in brains of animals that display complex behavior. To me it seems this is an optimization in the wrong dimension.
_jab•6mo ago
I suspect practicality is not the goal here, but rather a proof of concept. Perhaps they saw speed as an important technical barrier to cross.
khalic•6mo ago
A lot of unrigorous claims for an abstract…
kadushka•6mo ago
Maybe try simulating the algorithms in software before building hardware? People have been trying to get spiking networks to work for several decades now, with zero success. If it does not work in software, it's not going to work in hardware.
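Prototyping spiking dynamics in software is indeed cheap. A minimal leaky integrate-and-fire (LIF) neuron in plain NumPy (all constants here are illustrative, not taken from the paper):

```python
import numpy as np

def simulate_lif(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate a single LIF neuron; returns the membrane trace and spike times."""
    v = 0.0
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Leaky integration: dv/dt = (-v + i_in) / tau (forward Euler step)
        v += dt * (-v + i_in) / tau
        if v >= v_thresh:      # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset        # hard reset after the spike
        trace.append(v)
    return np.array(trace), spikes

# A constant drive above threshold yields a regular spike train.
trace, spikes = simulate_lif(np.full(200, 1.5))
print(f"{len(spikes)} spikes in 200 steps")
```

Simulations like this are exactly what the software-first argument asks for before committing a design to photonic hardware.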
vessenes•6mo ago
This seems to work in hardware, per the paper. At least to 80% accuracy.
good_stuffs•6mo ago
>If it does not work in software, it's not going to work in hardware.

Aren't there limits to what can be simulated in software? Analog systems deal with effectively infinite precision, and a large number of connections between neurons is bound to hit the von Neumann bottleneck on classical computers, where memory and compute are separate?

naasking•6mo ago
It's not clear that "infinite precision" is a meaningful thing. All inputs and outputs even to analog systems will only ever have finite precision.
juliangamble•6mo ago
“Zero success” seems a bit strong. People have been able to get 96% accuracy on MNIST digits on their local machine: https://norse.github.io/notebooks/mnist_classifiers.html I think it may be more accurate to say “1970s-level neural net performance”. The evidence suggests it is a nascent field of research.
cwmoore•6mo ago
Retina-inspired video recognition using light. Cool. Maybe a visual cortex next year.
vessenes•6mo ago
GHz-speed video processing, even if we only get very basic segmentation or recognition out of it, is probably crazy useful. Need to face-recognize every seat at a stadium?

Well, if you have enough cameras, 60,000 seats could be scanned 250 thousand times a second. Or if you want to scan a second of video at 60fps, you'd be able to check all of them at a mere 4 thousand times a second.

Anyway, good to see interesting raw research. I imagine there are a number of military and security use cases here that could fund something to market (at least a small initial market).