frontpage.

AI-powered text correction for macOS

https://taipo.app/
1•neuling•1m ago•1 comments

AppSecMaster – Learn Application Security with hands-on challenges

https://www.appsecmaster.net/en
1•aqeisi•2m ago•1 comments

Fibonacci Number Certificates

https://www.johndcook.com/blog/2026/02/05/fibonacci-certificate/
1•y1n0•3m ago•0 comments

AI Overviews are killing the web search, and there's nothing we can do about it

https://www.neowin.net/editorials/ai-overviews-are-killing-the-web-search-and-theres-nothing-we-c...
2•bundie•8m ago•0 comments

City skylines need an upgrade in the face of climate stress

https://theconversation.com/city-skylines-need-an-upgrade-in-the-face-of-climate-stress-267763
3•gnabgib•9m ago•0 comments

1979: The Model World of Robert Symes [video]

https://www.youtube.com/watch?v=HmDxmxhrGDc
1•xqcgrek2•13m ago•0 comments

Satellites Have a Lot of Room

https://www.johndcook.com/blog/2026/02/02/satellites-have-a-lot-of-room/
2•y1n0•14m ago•0 comments

1980s Farm Crisis

https://en.wikipedia.org/wiki/1980s_farm_crisis
3•calebhwin•15m ago•1 comments

Show HN: FSID - Identifier for files and directories (like ISBN for Books)

https://github.com/skorotkiewicz/fsid
1•modinfo•20m ago•0 comments

Show HN: Holy Grail: Open-Source Autonomous Development Agent

https://github.com/dakotalock/holygrailopensource
1•Moriarty2026•27m ago•1 comments

Show HN: Minecraft Creeper meets 90s Tamagotchi

https://github.com/danielbrendel/krepagotchi-game
1•foxiel•34m ago•1 comments

Show HN: Termiteam – Control center for multiple AI agent terminals

https://github.com/NetanelBaruch/termiteam
1•Netanelbaruch•34m ago•0 comments

The only U.S. particle collider shuts down

https://www.sciencenews.org/article/particle-collider-shuts-down-brookhaven
2•rolph•37m ago•1 comments

Ask HN: Why do purchased B2B email lists still have such poor deliverability?

1•solarisos•37m ago•2 comments

Show HN: Remotion directory (videos and prompts)

https://www.remotion.directory/
1•rokbenko•39m ago•0 comments

Portable C Compiler

https://en.wikipedia.org/wiki/Portable_C_Compiler
2•guerrilla•41m ago•0 comments

Show HN: Kokki – A "Dual-Core" System Prompt to Reduce LLM Hallucinations

1•Ginsabo•42m ago•0 comments

Software Engineering Transformation 2026

https://mfranc.com/blog/ai-2026/
1•michal-franc•43m ago•0 comments

Microsoft purges Win11 printer drivers, devices on borrowed time

https://www.tomshardware.com/peripherals/printers/microsoft-stops-distrubitng-legacy-v3-and-v4-pr...
3•rolph•44m ago•1 comments

Lunch with the FT: Tarek Mansour

https://www.ft.com/content/a4cebf4c-c26c-48bb-82c8-5701d8256282
2•hhs•47m ago•0 comments

Old Mexico and her lost provinces (1883)

https://www.gutenberg.org/cache/epub/77881/pg77881-images.html
1•petethomas•50m ago•0 comments

'AI' is a dick move, redux

https://www.baldurbjarnason.com/notes/2026/note-on-debating-llm-fans/
5•cratermoon•51m ago•0 comments

The source code was the moat. But not anymore

https://philipotoole.com/the-source-code-was-the-moat-no-longer/
1•otoolep•51m ago•0 comments

Does anyone else feel like their inbox has become their job?

1•cfata•52m ago•1 comments

An AI model that can read and diagnose a brain MRI in seconds

https://www.michiganmedicine.org/health-lab/ai-model-can-read-and-diagnose-brain-mri-seconds
2•hhs•55m ago•0 comments

Dev with 5 years of experience switched to Rails, what should I be careful about?

2•vampiregrey•57m ago•0 comments

AlphaFace: High Fidelity and Real-Time Face Swapper Robust to Facial Pose

https://arxiv.org/abs/2601.16429
1•PaulHoule•58m ago•0 comments

Scientists discover “levitating” time crystals that you can hold in your hand

https://www.nyu.edu/about/news-publications/news/2026/february/scientists-discover--levitating--t...
3•hhs•1h ago•0 comments

Rammstein – Deutschland (C64 Cover, Real SID, 8-bit – 2019) [video]

https://www.youtube.com/watch?v=3VReIuv1GFo
1•erickhill•1h ago•0 comments

Tell HN: Yet Another Round of Zendesk Spam

6•Philpax•1h ago•1 comments

A New Kind of Computer (April 2025)

https://lightmatter.co/blog/a-new-kind-of-computer/
62•gkolli•7mo ago

Comments

croemer•7mo ago
I stopped reading after "Soon, you will not be able to afford your computer. Consumer GPUs are already prohibitively expensive."
kevin_thibedeau•7mo ago
This is always a hilarious take. If you inflation-adjust a 386 PC from the early 90s, when 486s were on the market, you'd find prices in excess of $3,000, with the 486s in the $5,000 zone. Computers are incredibly cheap now. What isn't cheap is the bleeding edge, a place fewer and fewer people need to be, which leads to lower demand and higher prices to compensate.
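
(A quick back-of-the-envelope check of that adjustment; the sticker prices and the ~2.3x cumulative CPI factor from 1992 to today are assumptions chosen for illustration, not exact figures:)

    # Rough inflation adjustment for early-90s PC prices.
    # Assumed: sticker prices and a ~2.3x CPI factor 1992 -> today.
    CPI_FACTOR = 2.3

    for label, price_1992 in [("386 PC", 1500), ("486 PC", 2200)]:
        today = price_1992 * CPI_FACTOR
        print(f"{label}: ${price_1992:,} in 1992 ~= ${today:,.0f} today")

    # 386 PC: $1,500 in 1992 ~= $3,450 today
    # 486 PC: $2,200 in 1992 ~= $5,060 today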
ge96•7mo ago
It is crazy that you can buy a used laptop for $15 and do something meaningful with it, like writing code (meaningful as in making money).

I used to have this weird obsession with doing this: buying old Chromebooks and putting Linux on them. With 4GB of RAM they were still useful, but I realize nowadays that for "ideal" computing, 16GB seems to be the minimum RAM.

ge96•7mo ago
It's like the black MacBook from 2007: I know its tech is outdated, but I want it.
thom•7mo ago
Definitely one of my favourite machines of all time (until the plastic started falling off).
TedDallas•7mo ago
It was kind of that way in the early days of high-end personal computing. I remember seeing an ad in the early 90s for a 486 laptop that cost $6,000. Historically, prices have always gone down; you just have to wait. The state of the art is always going to go for a premium.
ghusto•7mo ago
That irked me too. "Bleeding edge" consumer GPUs are, sure, but wait 6 months and you can have one at a fraction of the cost.

It's like saying "cars are already prohibitively expensive" whilst looking at Ferraris.

imiric•7mo ago
> That irked me too. "Bleeding edge" consumer GPUs are, sure, but wait 6 months and you can have one at a fraction of the cost.

That's demonstrably false. The RTX 4090 was released in 2022 with an MSRP of $1,600. Today you'd be hard-pressed to find one below $3K that isn't a scam.

The reality is that NVIDIA is taking advantage of their market dominance to increase their markup with every generation of products[1], even when accounting for inflation and price-to-performance. The 50 series is even more egregious, since it delivers a marginal performance increase, yet the marketing relies heavily on frame generation. The trickling supply and scalpers are doing the rest.

AMD and Intel have a more reasonable pricing strategy, but they don't compete at the higher end.

[1]: https://www.digitaltrends.com/computing/nvidias-pricing-stra...

Animats•7mo ago
That's related more to NVidia's discovery that they could get away with huge margins, and to China's GPU projects for graphics being years behind.[1]

[1] https://www.msn.com/en-in/money/news/china-s-first-gaming-gp...

Anduia•7mo ago
> Critically, this processor achieves accuracies approaching those of conventional 32-bit floating-point digital systems “out-of-the-box,” without relying on advanced methods such as fine-tuning or quantization-aware training.

Hmm... what? So it is not accurate?

btilly•7mo ago
It's an analog system, which means that accuracy is naturally limited.

However, a single analog math operation requires about the same energy as a single bit flip in a digital computer, and it takes a lot of bit flips to do a single floating-point operation. So a digital calculation can be approximated with far less energy and hardware. And neural nets don't need digital precision to produce useful results.
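
(To put rough numbers on that ratio; every figure below is an assumption chosen for scale, not a measurement:)

    # Sketch of the energy argument. Assumed: ~1 fJ per digital bit
    # flip, and a few thousand bit flips per 32-bit multiply-add.
    E_BIT_FLIP = 1e-15         # joules per bit flip (assumed)
    FLIPS_PER_FP32_MAC = 5000  # bit flips per fp32 multiply-add (assumed)

    e_digital = E_BIT_FLIP * FLIPS_PER_FP32_MAC  # digital fp32 MAC
    e_analog = E_BIT_FLIP                        # one analog op ~ one bit flip

    print(f"digital fp32 MAC ~ {e_digital:.1e} J")
    print(f"analog MAC       ~ {e_analog:.1e} J")
    print(f"advantage        ~ {e_digital / e_analog:.0f}x")

On those assumptions the analog operation comes out thousands of times cheaper, which is the whole pitch; the catch is the limited accuracy noted above.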

B1FF_PSUVM•7mo ago
> neural nets don't need digital precision to produce useful results.

The point - as shown by the original implementation...

bee_rider•7mo ago
It seems weirdly backwards. They don't do techniques like quantization-aware training to increase the accuracy of the coprocessor, right? (I mean, that's nonsense.) They use those techniques to allow them to use less accurate coprocessors, I thought.

I think they are just saying the coprocessor is pretty accurate, so they don't need to use these advanced techniques.
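
(For anyone unfamiliar with the technique being referenced: a minimal sketch of the "fake quantization" idea behind quantization-aware training, with numpy standing in for real training code; the bit width and scaling scheme are simplified assumptions:)

    # Quantization-aware training in miniature: round weights and
    # activations to the coarse grid the target hardware will impose,
    # so training can adapt to that noise. Pure illustration.
    import numpy as np

    def fake_quantize(x, bits=4):
        # Snap x onto a symmetric grid with 2^(bits-1) - 1 levels.
        scale = np.max(np.abs(x))
        if scale == 0:
            return x
        levels = 2 ** (bits - 1) - 1
        return np.round(x / scale * levels) / levels * scale

    rng = np.random.default_rng(0)
    w = rng.normal(size=(8, 8))  # stand-in weight matrix
    x = rng.normal(size=8)       # stand-in activations

    exact = w @ x
    approx = fake_quantize(w) @ fake_quantize(x)
    print("max abs error:", np.max(np.abs(exact - approx)))

Running the forward pass through fake_quantize during training is what lets a model tolerate a less accurate (quantized or analog) coprocessor at inference time, which is exactly the direction described above.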

btilly•7mo ago
This computing paradigm was already covered three years ago by Veritasium in https://www.youtube.com/watch?v=GVsUOuSjvcg.

Maybe not the specific photonic system they are describing, which I'm sure has some significant improvements over what existed then, but the idea of using analog approximations of existing neural-net AI models to let us run those models far more cheaply, with far less energy.

Whether or not this system is the one that wins out, I'm very sure that AI run on analog systems will have a very important role to play in the future. It will enable technologies like guiding autonomous robots with AI models running on hardware inside the robot.

boznz•7mo ago
Weirdly complex to read yet light on key technical details. My TL;DR (as an old, clueless electronics engineer): the compute part is photonic/analog (lasers and waveguides), yet we still require 50 billion transistors performing the (I guess non-compute) parts such as ADC, I/O, memory, etc. The bottom line is 65 TOPS for <80W, with the optical processing part consuming 1.65W and the "helper electronics" consuming the rest, so scaling the optical processing should not hit the thermal bottlenecks of a solely transistor-based processor (see the quick arithmetic below). Parallelism of the optical part, using different wavelengths of light as threads, may also be possible. Nothing about problems, costs, or whether the helper electronics can eventually use photonics.

I remember a TV programme in the UK from the 70s (Tomorrow's World, I think) that talked about this, so I am guessing silicon was just more cost-effective until now. Still, taking it at face value, I would say it is quite an exciting technology.
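
(A quick sanity check on those headline numbers, taking the comment's figures at face value:)

    # Figures as reported above: 65 TOPS total, <80 W for the whole
    # package, 1.65 W for the optical compute portion.
    tops = 65.0
    watts_package = 80.0
    watts_optical = 1.65

    print(f"whole package: {tops / watts_package:.1f} TOPS/W")  # ~0.8
    print(f"optical alone: {tops / watts_optical:.1f} TOPS/W")  # ~39

    # The electronic helpers (ADCs, I/O, memory) dominate the power
    # budget roughly 50:1, which is why scaling the optical part looks
    # thermally attractive -- and why the helpers are the real bottleneck.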

sandeep1998•7mo ago
Insightful note, thank you.
quantadev•7mo ago
In 25 years we'll have #GlassModels: a "chip" that is a passive device (just a complex lens) made only of glass or graphene, which can do an "AI inference" simply by shining the "input tokens" (i.e. arrays of photons) through it. In other words, the "numeric value" at one MLP "neuron input" will be the amplitude of the light (the number of simultaneous photons).

All addition, multiplication, and tanh functions will be done by photon superposition/interference effects, and it will consume zero power (since it's only a complex "lens").

It will probably do parallel computations, where each photon frequency range will not interfere with the other ranges, allowing multiple "inferences" to shine through simultaneously.

This design would completely solve the energy crisis, and each inference would take only the time light needs to travel a centimeter, i.e. essentially instantaneous.
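
(The linear-algebra half of this idea can be sketched directly; a toy model in numpy, where a made-up complex transmission matrix plays the role of the "lens":)

    # Toy model of a passive optical layer: input field amplitudes form
    # a complex vector x, the fixed element acts as a transmission
    # matrix T, and interference at the outputs computes y = T @ x in
    # one pass of light. T and x are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(1)
    T = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))  # the "lens"
    x = np.array([1.0, 0.5, 0.0, 0.25], dtype=complex)          # input amplitudes

    y = T @ x                    # superposition sums the weighted paths
    intensity = np.abs(y) ** 2   # what a photodetector would actually read

    print(intensity)

Note what the sketch leaves out: passive linear optics gives you the matrix multiply essentially for free, but the tanh-style nonlinearity between layers is the part with no obvious zero-power implementation in plain glass.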

gcanyon•7mo ago
For years I've been fascinated by those little solar-powered calculators. In a weird way, they're devices that enable us to cast hand shadows to do arithmetic.
quantadev•7mo ago
Lookup "Analog Optical Computing". There was recently a breakthrough just last week where optical computing researchers were able to use photon interference effects to do mathematical operations purely in analog! That means no 0s and 1s, just pure optics. Paste all that into Gemini to learn more.
gorkish•7mo ago
If you contextualize Star Trek's "isolinear chips" to be something like this, they start to seem considerably more sensible.

Has anyone built a physical ASIC that embeds a full model yet?

Animats•7mo ago
Interesting. Questions, the Nature paper being expensively paywalled:

- Is the analog computation actually done with light? What's the actual compute element like? Do they have an analog photonic multiplier? Those exist, and have been scaling up for a while.[1] The announcement isn't clear on how much compute is photonic. There are still a lot of digital components involved. Is it worth it to go D/A, generate light, do some photonic operations, go A/D, and put the bits back into memory? That's been the classic problem with photonic computing. Memory is really hard, and without memory, pretty soon you have to go back to a domain where you can store results. Pure photonic systems do exist, such as fiber optic cable amplifiers, but they are memoryless.

- If all this works, is loss of repeatability going to be a problem?

[1] https://ieeexplore.ieee.org/document/10484797
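
(The conversion-tax question in the first bullet can be made concrete; all the energy figures below are assumptions for illustration only:)

    # The classic D/A -> photonics -> A/D problem: the analog MAC is
    # nearly free, but operands must pass through DACs and results
    # through ADCs. Assumed costs: ~1 pJ per ADC sample, ~0.5 pJ per
    # DAC sample, ~1 fJ per analog MAC.
    E_ADC, E_DAC, E_MAC = 1e-12, 5e-13, 1e-15

    def joules_per_mac(chain_length):
        # Energy per MAC when one conversion pair is amortized over
        # 'chain_length' consecutive analog operations.
        return (E_ADC + E_DAC) / chain_length + E_MAC

    for n in (1, 100, 10_000):
        print(f"chain of {n:>6}: {joules_per_mac(n):.2e} J/MAC")

    # Conversions dominate unless amortized over long analog chains --
    # which is exactly where the lack of photonic memory bites.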

vivzkestrel•7mo ago
You are also forgetting the ridiculous amount of software bloat introduced at the OS level. https://www.youtube.com/watch?v=tCNmhpcjJCQ Look at how fast Windows 7 works compared to Windows 11, and think of how much telemetry and bloatware Windows 10 and 11 ship with. A new methodology that teases a different direction from silicon chips is definitely a necessary step, but software vendors need to come together as well and tackle the bloat problem. Perhaps for the next Windows release, in addition to Home and Professional, how about a "Windows Maximal": a highly streamlined version of Windows with only the absolute minimal set of things installed, getting your system running at 2x the speed of Windows Home. Charge double for it, maybe?
userbinator•7mo ago
Just look at the demoscene for some great examples of what it means to fully exploit what the hardware is actually capable of.
ForgotIdAgain•7mo ago
It's always cited as an example, and yes, it's highly impressive, but it's also highly focused, with a very partial feature set. I'm not sure that type of coding is applicable to mass-scale software.
andai•7mo ago
A few years ago I ran Windows XP in a VM inside Windows 10.

When you press Win+E, Windows opens an Explorer window (in both versions).

In XP this happens in the span of a single video frame.

In Windows 10, first nothing happens, then a big white rectangle appears, then you get to watch all the UI elements get painted in one by one.

The really impressive part is that this was before they rewrote Explorer as an Electron app! I think it might actually be faster now that it's an Electron app.