
Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
233•theblazehen•2d ago•68 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
694•klaussilveira•15h ago•206 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
6•AlexeyBrin•1h ago•0 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
962•xnx•20h ago•555 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
130•matheusalmeida•2d ago•35 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
67•videotopia•4d ago•6 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
54•jesperordrup•5h ago•24 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
36•kaonwarb•3d ago•27 comments

ga68, the GNU Algol 68 Compiler – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
10•matt_d•3d ago•2 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
236•isitcontent•15h ago•26 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
233•dmpetrov•16h ago•124 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
32•speckx•3d ago•21 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
335•vecti•17h ago•147 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
502•todsacerdoti•23h ago•244 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
386•ostacke•21h ago•97 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
300•eljojo•18h ago•186 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
361•aktau•22h ago•185 comments

UK infants ill after drinking contaminated baby formula of Nestle and Danone

https://www.bbc.com/news/articles/c931rxnwn3lo
10•__natty__•3h ago•0 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
425•lstoll•21h ago•282 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
68•kmm•5d ago•10 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
96•quibono•4d ago•22 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
21•bikenaga•3d ago•11 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
19•1vuio0pswjnm7•1h ago•5 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
264•i5heu•18h ago•216 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
33•romes•4d ago•3 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
64•gfortaine•13h ago•28 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1076•cdrnsf•1d ago•460 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
39•gmays•10h ago•13 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
298•surprisetalk•3d ago•44 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
154•vmatsiiako•20h ago•72 comments

A New Kind of Computer (April 2025)

https://lightmatter.co/blog/a-new-kind-of-computer/
62•gkolli•7mo ago

Comments

croemer•7mo ago
I stopped reading after "Soon, you will not be able to afford your computer. Consumer GPUs are already prohibitively expensive."
kevin_thibedeau•7mo ago
This is always a hilarious take. If you inflation-adjust a 386 PC from the early 90s, when 486s were on the market, you'd find they range in excess of $3,000, and the 486s are in the $5,000 zone. Computers are incredibly cheap now. What isn't cheap is the bleeding edge, a place fewer and fewer people need to be, which leads to lower demand and higher prices to compensate.
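
A rough back-of-the-envelope on that inflation adjustment (a sketch with hypothetical early-90s sticker prices, assuming U.S. CPI-U annual averages of about 140.3 for 1992 and about 314 for 2024):

    # Hypothetical early-90s sticker prices, adjusted with assumed CPI-U averages
    # (~140.3 for 1992, ~314 for 2024).
    CPI_1992, CPI_2024 = 140.3, 314.0

    def in_2024_dollars(price_1992: float) -> float:
        return price_1992 * CPI_2024 / CPI_1992

    for label, price in [("386 PC", 1500), ("486 PC", 2500)]:
        print(f"{label}: ${price:,} in 1992 ~= ${in_2024_dollars(price):,.0f} today")
    # 386 PC: $1,500 in 1992 ~= $3,357 today
    # 486 PC: $2,500 in 1992 ~= $5,595 today
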
ge96•7mo ago
It's crazy that you can buy a used laptop for $15 and do something meaningful with it, like writing code (meaningful as in make money).

I used to have this weird obsession with doing this: buying old Chromebooks and putting Linux on them. With 4GB of RAM they were still useful, but nowadays it seems 16GB is the minimum for "ideal" computing.

ge96•7mo ago
It's like the black Mac from 2007: I know its tech is outdated, but I want it.
thom•7mo ago
Definitely one of my favourite machines of all time (until the plastic started falling off).
TedDallas•7mo ago
It was kind of that way in the early days of high-end personal computing. I remember seeing an ad in the early 90s for a 486 laptop that cost $6,000. Historically, prices have always come down; you just have to wait. SoTA is always going to go for a premium.
ghusto•7mo ago
That irked me too. "'Bleeding edge' consumer GPUs are ...", sure, but wait 6 months and you can have one at a fraction of the cost.

It's like saying "cars are already prohibitively expensive" whilst looking at Ferraris.

imiric•7mo ago
> That irked me too. "'Bleeding edge' consumer GPUs are ...", sure, but wait 6 months and you can have one at a fraction of the cost.

That's demonstrably false. The RTX 4090 released in 2022 with an MSRP of $1,600. Today you'd be hard pressed to find one below $3K that isn't a scam.

The reality is that NVIDIA is taking advantage of their market dominance to increase their markup with every generation of products[1], even when accounting for inflation and price-to-performance. The 50 series is even more egregious, since it delivers a marginal performance increase, yet the marketing relies heavily on frame generation. The trickling supply and scalpers are doing the rest.

AMD and Intel have a more reasonable pricing strategy, but they don't compete at the higher end.

[1]: https://www.digitaltrends.com/computing/nvidias-pricing-stra...

Animats•7mo ago
That's related more to Nvidia's discovery that they could get away with huge margins, and to China's GPU-for-graphics projects being years behind.[1]

[1] https://www.msn.com/en-in/money/news/china-s-first-gaming-gp...

Anduia•7mo ago
> Critically, this processor achieves accuracies approaching those of conventional 32-bit floating-point digital systems “out-of-the-box,” without relying on advanced methods such as fine-tuning or quantization-aware training.

Hmm... what? So it is not accurate?

btilly•7mo ago
It's an analog system, which means that accuracy is inherently limited.

However, a single analog math operation requires roughly the same energy as a single bit flip in a digital computer, and it takes a lot of bit flips to do a single floating-point operation. So a digital calculation can be approximated with far less energy and hardware, and neural nets don't need digital precision to produce useful results.
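
A minimal sketch of that last point (plain NumPy, illustrative sizes only, not the paper's method): quantize the weights and activations of a random "layer" to 8 bits, do the matrix-vector product, and compare against float32.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((256, 256)).astype(np.float32)  # "layer" weights
    x = rng.standard_normal(256).astype(np.float32)         # activations

    def quantize(a, bits=8):
        # Symmetric uniform quantization to `bits` bits, then dequantize.
        scale = np.max(np.abs(a)) / (2 ** (bits - 1) - 1)
        return np.round(a / scale) * scale

    y_fp32 = W @ x
    y_q = quantize(W) @ quantize(x)  # stand-in for a low-precision (or analog) MAC

    rel_err = np.linalg.norm(y_q - y_fp32) / np.linalg.norm(y_fp32)
    print(f"relative error: {rel_err:.2%}")  # on the order of a percent here

The outputs are close enough that a downstream softmax or argmax rarely changes, which is the sense in which "useful results" survive reduced precision.
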

B1FF_PSUVM•7mo ago
> neural nets don't need digital precision to produce useful results.

The point - as shown by the original implementation...

bee_rider•7mo ago
It seems weirdly backwards. They don't use techniques like quantization-aware training to increase the accuracy of the coprocessor, right? (I mean, that's nonsense.) They use those techniques to let them get away with less accurate coprocessors, I thought.

I think they are just saying the coprocessor is pretty accurate, so they don't need to use these advanced techniques.
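
A toy illustration of that direction (hypothetical, NumPy only, not anything from the paper): quantization-aware training applies the coarse quantization inside the training loop, so the learned weights adapt to the imprecise hardware, rather than the hardware being made more accurate.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((512, 16)).astype(np.float32)
    true_w = rng.standard_normal(16).astype(np.float32)
    y = X @ true_w

    def quantize(a, bits=4):
        # Coarse quantization, standing in for an imprecise (e.g. analog) coprocessor.
        scale = np.max(np.abs(a)) / (2 ** (bits - 1) - 1) + 1e-12
        return np.round(a / scale) * scale

    w = np.zeros(16, dtype=np.float32)
    for _ in range(500):
        err = X @ quantize(w) - y   # forward pass sees the quantized weights
        grad = X.T @ err / len(X)   # straight-through: gradient applied to the
        w -= 0.05 * grad            # full-precision copy of the weights

    print("MSE with quantized weights:", float(np.mean((X @ quantize(w) - y) ** 2)))
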

btilly•7mo ago
This paradigm for computing was already covered three years ago by Veritasium in https://www.youtube.com/watch?v=GVsUOuSjvcg.

Maybe not the specific photonic system that they are describing, which I'm sure has some significant improvements over what existed then, but the idea of using analog approximations of existing neural-net AI models to let us run them far more cheaply, with far less energy.

Whether or not this system is the one that wins out, I'm very sure that AI run on an analog system will have a very important role to play in the future. It will allow technologies like guiding autonomous robots with AI models running on hardware inside of the robot.

boznz•7mo ago
Weirdly complex to read, yet light on key technical details. My TLDR (as an old, clueless electronics engineer): the compute part is photonic/analog, lasers and waveguides, yet we still require 50 billion transistors performing the (I guess non-compute) parts such as ADC, I/O, memory, etc. The bottom line is 65 TOPS for <80W, with the (optical) processing part consuming 1.65W and the 'helper electronics' consuming the rest, so scaling the optical processing should not have the thermal bottlenecks of a purely transistor-based processor. Parallelism in the optical part, using different wavelengths of light as threads, may also be possible. Nothing about problems, costs, or whether the helper electronics can eventually use photonics too.

I remember a TV programme in the UK from the '70s (Tomorrow's World, I think) that talked about this, so I'm guessing silicon was just more cost-effective until now. Still, taking it at face value, I'd say it's quite an exciting technology.
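
Taking those headline figures at face value (65 TOPS, <80 W total, 1.65 W for the optical part), the back-of-the-envelope split works out as follows (a sketch using only the numbers quoted above):

    # Efficiency split implied by the quoted figures (face value only).
    tops, total_w, optical_w = 65, 80, 1.65

    print(f"whole card  : {tops / total_w:.2f} TOPS/W")   # ~0.81 TOPS/W
    print(f"optical part: {tops / optical_w:.1f} TOPS/W") # ~39 TOPS/W
    print(f"helper electronics: ~{total_w - optical_w:.1f} W, "
          f"~{100 * (total_w - optical_w) / total_w:.0f}% of the power budget")

So nearly all of the power (and presumably the thermal headroom) sits in the conventional electronics around the optical core, which is the scaling point made above.
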

sandeep1998•7mo ago
Insightful note, thank you.
quantadev•7mo ago
In 25 years we'll have #GlassModels: a "chip" that is a passive device (just a complex lens) made only of glass or graphene, and that can do an "AI inference" simply by shining the "input tokens" (i.e. arrays of photons) through it. In other words, the "numeric value" at one MLP "neuron input" will be the amplitude of the light (the number of simultaneous photons).

All addition, multiplication, and tanh functions will be done by photon superposition/interference effects, and it will consume zero power (since it's only a complex "lens").

It will probably do parallel computations where each photon frequency range will not interfere with other ranges, allowing multiple "inferences" to be "Shining Thru" simultaneously.

This design will completely solve the energy crisis, and each inference will take only as long as it takes light to travel a centimeter, i.e. essentially instantaneous.
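
A toy numerical sketch of that picture (hypothetical, NumPy only): treat each input as a light amplitude, each weight as a per-path attenuation (with phase carrying the sign), and let the coherent sum at each output stand in for interference; that sum is exactly a matrix-vector product. The nonlinearity is applied digitally here, since getting it passively is the hard part.

    import numpy as np

    rng = np.random.default_rng(2)
    W = rng.standard_normal((4, 8))  # the "lens": per-path attenuation + phase (sign)
    x = rng.standard_normal(8)       # one "input token" encoded as light amplitudes

    # Each output "neuron" is the coherent sum of its weighted input beams:
    #   amplitude_out[i] = sum_j W[i, j] * x[j]   (interference does the matvec)
    amplitude_out = np.array([sum(W[i, j] * x[j] for j in range(8)) for i in range(4)])

    assert np.allclose(amplitude_out, W @ x)  # identical to the digital matmul

    y = np.tanh(amplitude_out)  # activation done electronically in this sketch
    print(y)
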

gcanyon•7mo ago
For years I've been fascinated by those little solar-powered calculators. In a weird way, they're devices that enable us to cast hand shadows to do arithmetic.
quantadev•7mo ago
Lookup "Analog Optical Computing". There was recently a breakthrough just last week where optical computing researchers were able to use photon interference effects to do mathematical operations purely in analog! That means no 0s and 1s, just pure optics. Paste all that into Gemini to learn more.
gorkish•7mo ago
If you contextualize Star Trek's "isolinear chips" to be something like this, they start to seem considerably more sensible.

Has anyone built a physical ASIC that embeds a full model yet?

Animats•7mo ago
Interesting. Questions, the Nature paper being expensively paywalled:

- Is the analog computation actually done with light? What's the actual compute element like? Do they have an analog photonic multiplier? Those exist, and have been scaling up for a while.[1] The announcement isn't clear on how much compute is photonic. There are still a lot of digital components involved. Is it worth it to go D/A, generate light, do some photonic operations, go A/D, and put the bits back into memory? That's been the classic problem with photonic computing. Memory is really hard, and without memory, pretty soon you have to go back to a domain where you can store results. Pure photonic systems do exist, such as fiber optic cable amplifiers, but they are memoryless.

- If all this works, is loss of repeatability going to be a problem?

[1] https://ieeexplore.ieee.org/document/10484797

vivzkestrel•7mo ago
You are also forgetting the ridiculous amount of software bloat introduced at the OS level. https://www.youtube.com/watch?v=tCNmhpcjJCQ Look at how fast Windows 7 works compared to Windows 11. Think of how much telemetry and bloatware Windows 10 and 11 ship with. A new methodology that teases a different direction from silicon chips is definitely a necessary step, but software vendors need to come together as well and tackle the bloat problem. Perhaps for the next Windows release, in addition to Home and Professional, how about Windows Maximal: a highly streamlined version of Windows with only the absolute minimum installed, to get your system running at 2x the speed of Windows Home. Charge double for it, maybe?
userbinator•7mo ago
Just look at the demoscene for some great examples of what it means to fully exploit what the hardware is actually capable of.
ForgotIdAgain•7mo ago
It's always cited as an example, and yes, it's highly impressive, but it's also highly focused, with a very limited feature set. I'm not sure that type of coding is applicable to mass-scale software.
andai•7mo ago
A few years ago I ran Windows XP in a VM inside Windows 10.

When you press Win+E, Windows opens an Explorer window (in both versions).

In XP this happens in the span of a single video frame.

In Windows 10, first nothing happens, then a big white rectangle, then you get to watch all the UI elements get painted in one by one.

The really impressive part is that this was before they rewrote explorer as an electron app! I think it might actually be faster now that it's an electron app.