frontpage.

France's homegrown open source online office suite

https://github.com/suitenumerique
358•nar001•3h ago•176 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
92•bookofjoe•1h ago•79 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
411•theblazehen•2d ago•151 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
77•AlexeyBrin•4h ago•15 comments

Leisure Suit Larry's Al Lowe on model trains, funny deaths and Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
10•thelok•1h ago•0 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
767•klaussilveira•19h ago•240 comments

First Proof

https://arxiv.org/abs/2602.05192
32•samasblack•1h ago•18 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
49•onurkanbkrc•4h ago•3 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
25•vinhnx•2h ago•3 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1019•xnx•1d ago•580 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
155•alainrk•4h ago•189 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
158•jesperordrup•9h ago•56 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
8•marklit•5d ago•0 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
16•rbanffy•4d ago•0 comments

Software Factories and the Agentic Moment

https://factory.strongdm.ai/
10•mellosouls•2h ago•7 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
102•videotopia•4d ago•26 comments

StrongDM's AI team build serious software without even looking at the code

https://simonwillison.net/2026/Feb/7/software-factory/
7•simonw•1h ago•1 comment

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
152•matheusalmeida•2d ago•41 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
260•isitcontent•19h ago•33 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
34•matt_d•4d ago•9 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
273•dmpetrov•19h ago•145 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
15•sandGorgon•2d ago•3 comments

Google staff call for firm to cut ties with ICE

https://www.bbc.com/news/articles/cvgjg98vmzjo
98•tartoran•1h ago•24 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
544•todsacerdoti•1d ago•262 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
415•ostacke•1d ago•108 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
361•vecti•21h ago•161 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
61•helloplanets•4d ago•64 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
332•eljojo•22h ago•205 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
456•lstoll•1d ago•298 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
370•aktau•1d ago•194 comments

Scientists create ultra fast memory using light

https://www.isi.edu/news/81186/scientists-create-ultra-fast-memory-using-light/
126•giuliomagnifico•2mo ago

Comments

lebuffon•1mo ago
Wow 300mm chips. They must be huge!

(I am sure they meant nm, but nobody is checking the AI output)

vlovich123•1mo ago
From the paper:

> footprint of 330 × 290 µm² using the GlobalFoundries 45SPCLO

That's a 45 nm process, but the units for the chip size probably should have been 330 µm? I'm not well versed enough in the details to parse it out, though.

https://arxiv.org/abs/2503.19544

bgnn•1mo ago
I'm very familiar with this process as I use it regularly.

The area is massive. 330 µm × 290 µm are the X and Y dimensions, so the area is roughly 0.1 mm². You can see the comparison in Table 1. This is roughly 50,000 times larger than SRAM in a 45 nm process.

This is the problem with photonic circuits. They are massive compared to electronics.
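
For anyone wanting to sanity-check the arithmetic, a quick back-of-envelope in Python (the per-bit SRAM figure below is only what the 50,000x ratio implies, not a number taken from the paper):

    photonic_um2 = 330 * 290        # 95,700 um^2
    print(photonic_um2 / 1e6)       # ~0.096 mm^2, matching the ~0.1 mm^2 above
    print(photonic_um2 / 50_000)    # ~1.9 um^2: the SRAM area per bit implied by the 50,000x comparison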

bun_at_work•1mo ago
Is it prohibitively larger? And is the size a fundamental constraint of the technology, or is it possible to reduce the size?
adrian_b•1mo ago
The size is a fundamental constraint of optical technologies, because it is related to the wavelength of light, which is much bigger than the sizes of semiconductor devices.

This is why modern semiconductor devices no longer use lithography with visible light or even with near ultraviolet, but they must use extreme ultraviolet.

The advantage of such optical devices is speed and low power consumption in the optical device itself (ignoring the power consumption of lasers, which might be shared by many devices).

Such memories have special purposes in various instruments; they are not suitable as computer memories.
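
To put rough numbers on the wavelength argument, here is a small sketch; the 1550 nm value is an assumption (a typical silicon-photonics wavelength), as the article does not state the operating wavelength:

    wavelength_nm = 1550                      # assumed telecom-band light
    feature_nm = 45                           # the electronic node of the platform in the paper
    print(wavelength_nm / feature_nm)         # ~34x in linear dimension
    print((wavelength_nm / feature_nm) ** 2)  # ~1200x in area, before any other overhead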

bgnn•1mo ago
Previous reply is correct.

To give a feeling: micro-ring resonators are anywhere between 10 and 40 micrometers in diameter. You also need a bunch of other waveguides. The process in the paper uses silicon waveguides, with 400 nm width if I'm not wrong. So optical features unfortunately don't shrink the way CMOS technology does.

Fun fact: photolithography has the same limitation. They use all kinds of tricks (different optical effects to shrink the features), but it is fundamentally limited by the wavelength used. This is why we are seeing a push to lower and lower wavelengths by ASML. That, plus multiple patterning, helps to scale CMOS down.

pezezin•1mo ago
Would it be possible to use something similar to DWDM to store/process multiple bits in parallel in the same circuit?
bgnn•1mo ago
Unfortunately it isn't, as the physical size of the resonators needs to match a given wavelength. So for each wavelength you need a separate circuit in parallel.
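
A toy illustration of why: a ring resonates when m · λ = n_eff · 2πR, so the geometry is tied to one wavelength. Both n_eff and m below are assumed values for illustration, not from the paper:

    import math
    n_eff, m = 2.4, 100                 # assumed effective index and mode number
    for wavelength_um in (1.55, 1.56):  # two neighbouring DWDM-style channels
        radius_um = m * wavelength_um / (n_eff * 2 * math.pi)
        print(wavelength_um, round(radius_um, 2))  # ~10.28 um vs ~10.35 um

A ~20 µm diameter ring is consistent with the sizes mentioned above, and each channel needs its own ring.
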
pezezin•1mo ago
I see, that's a pity then.
KK7NIL•1mo ago
It almost certainly refers to 300 mm wafers, which are the largest size in use right now. They offer significantly better economics than the older 200 mm wafers or lab experiments done on even smaller (e.g. 100 mm) wafers.

The text in the article supports this:

> This is a commercial 300mm monolithic silicon photonics platform, meaning the technology is ready to scale today, rather than being limited to laboratory experiments.

xienze•1mo ago
This just in, OpenAI has already committed to buying the entire world’s supply once it becomes available.
AlOwain•1mo ago
I am not much into AI, but more demand for faster memory is good for all of us. Even if, in the short term, prices increase.
cs702•1mo ago
Cool. Memory bandwidth is a major bottleneck for many important applications today, including AI. Maybe this kind of memory "at the speed of light" can help alleviate the bottleneck?

For a second, I thought the headline was copied & pasted from the hallucinated 10-years-from-now HN frontpage that recently made the HN front page:

https://news.ycombinator.com/item?id=46205632

ilaksh•1mo ago
MRAM and MRAM-CIM are like 10 years ahead of this and going to make a huge impact on efficiency and performance in the next few years, right? Or so I thought I heard.

Memristors are also probably coming after MRAM-CIM and before photonic computing.

cycomanic•1mo ago
People have done this sort of "optical computing" demonstration for decades, despite David Miller showing that fundamentally digital computing with optical photons will be immensely power hungry (I say digital here because there are some applications where analog computing can make sense, but it almost never relies on memory for bits).

Specifically, this paper is based on simulations. I've only skimmed it, but the power efficiency numbers sound great because they quote 40 GHz read/write speeds; however, these devices consume comparatively large power even when not reading or writing (the lasers have to be running constantly). I also think they did not include the contributions of the modulation and the required drivers (typically you need quite large voltages). Somebody already pointed out that the size of these is massive, and that's again fundamental.

As someone working in the broad field, I really wish people would stop these types of publications. While the numbers might sound impressive at first glance, they really are completely unrealistic. There are lots of legitimate applications of optics and photonics; we don't need to resort to this sort of stuff.
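
To make the always-on cost concrete, a toy amortization; the 2 mW optical power is the figure quoted from the paper further down the thread, and the 1% utilization is purely an assumption for illustration:

    laser_w = 2e-3                  # always-on optical power (figure quoted downthread)
    f_rw = 40e9                     # claimed read/write rate
    print(laser_w / f_rw)           # 5e-14 J = 50 fJ/bit, but only if the cell is accessed every cycle
    print(laser_w / (f_rw * 0.01))  # at 1% utilization the same power amortizes to ~5 pJ/bit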

embedding-shape•1mo ago
> showing that fundamentally digital computing with optical photons will be immensely power hungry

> they really are completely unrealistic

Unrealistic only because they're power hungry? That sounds like a temporary problem, kind of like when we came up with a bunch of ML approaches we couldn't actually run in the 80s/90s because of the hardware resources required, but that work fine today.

Maybe even if the solutions aren't useful today, they could be useful in the future? Or maybe, with these results, more people will be inspired to create solutions specifically targeting the power usage?

"we don't need to resort to this sort of stuff" makes it sound like this is all so beneath you and not deserving of attention, but why are you then paying attention to it?

gsf_emergency_6•1mo ago
The Miller limit is fundamentally due to photons being bosons: good for carrying info, not great for digital logic (switches).

There are promising avenues to use "bosonic" nonlinearity to overtake traditional fermionic computing, but they are basically not being explored by EE departments despite (because of?) their oversized funding and attention.

scarmig•1mo ago
Universities believe that constantly putting out pieces making some bit of research sound revolutionary and world-changing increases public support for science. It doesn't, because the vast majority of science is incremental, mostly learning about some weird, niche thing that probably won't translate into applications. This causes the public to misunderstand the role of scientific research and to lose faith in it when it doesn't deliver on its promises (promises made by the university press office, not the researcher).
cycomanic•1mo ago
> > showing that fundamentally digital computing with optical photons will be immensely power hungry
>
> > they really are completely unrealistic
>
> Unrealistic only because they're power hungry? That sounds like a temporary problem, kind of like when we came up with a bunch of ML approaches we couldn't actually run in the 80s/90s because of the hardware resources required, but that work fine today.
>
> Maybe even if the solutions aren't useful today, they could be useful in the future? Or maybe, with these results, more people will be inspired to create solutions specifically targeting the power usage?

No, they are fundamentally power hungry because you essentially need a nonlinear response, i.e. photons need to interact with each other. However, photons are bosons and really dislike interacting with each other.

Same thing with the size of the circuits: it is determined by the wavelength of light, so fundamentally they are much larger than electronic circuits.

> "we don't need to resort to this sort of stuff" makes it sound like this is all so beneath you and not deserving of attention, but why are you then paying attention to it?

That's not what I said. In fact, they deserve my attention because they need to be called out, as the article clearly does not highlight the limitations.

gsf_emergency_6•1mo ago
I only upvoted to send a message to the moderators not to upweight uni/company press releases :) Sadly, the energy of VC culture goes into refining wolf-crying, despite all the talk of due diligence, "thinking for yourself" and "understanding value".

The core section of the paper (linked below) is pp. 8-9.

2 mW for hundreds of picoseconds is huge.

(Also GIANT voltages, if only to illustrate how coarse their simulations are):

> As shown in Figure 6(a), even with up to 1 V of noise on each Q and QB node (resulting in a 2 V differential between Q and QB), the pSRAM bitcell successfully regenerates the previously stored data. It is important to note that higher noise voltages increase the time required to restore the original state, but the bitcell continues to function correctly due to its regenerative behavior.
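
For scale, converting that figure into energy per event (taking 200 ps as a representative "hundreds of picoseconds"):

    p_w = 2e-3        # 2 mW, from the excerpt above
    t_s = 200e-12     # ~200 ps
    print(p_w * t_s)  # 4e-13 J = 0.4 pJ per event, before counting lasers and drivers

Commonly cited on-chip SRAM access energies are on the order of femtojoules to tens of femtojoules per bit, so a few hundred femtojoules per event really is a lot.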

fooker•1mo ago
40 GHz memory/compute for 10-100x the power sounds like a great idea to me.

We are going to have abundant energy at some point.

adrian_b•1mo ago
Free version of the research paper:

https://arxiv.org/abs/2503.19544v1

The memory cell is huge in comparison with semiconductor memories, but it is very fast, with a 40 GHz read/write speed.

There are important applications for a very high speed small memory, e.g. for digital signal processing in radars and other such devices, but this will never replace a general-purpose computer memory, where much higher bit densities are needed.

aj7•1mo ago
This is a hugely important point. The de Broglie wavelength of the photon is hundreds to thousands of nm. There is no possibility of VLSI scale-up, a point conveniently omitted in hundreds of decks and at least $1B in investment. Photonic techniques will remain essentially a part of the analog palette in system design.
jdub•1mo ago
Careful about "never"… individual transistors used to be large, heavy, power hungry, and expensive.
fsh•1mo ago
That's not true. Transistors were commercialized a few years after their invention, and already the first generation vastly outperformed vacuum tubes in size, weight, and power. Optical computing has been done for a few decades now with very little progress.
jdub•1mo ago
(I was being a little facetious – vacuum tubes being the original "transistors".)
ElectricalUnion•1mo ago
I might have done the math wrong, but is this really supposed to be 330 × 290 µm² × 128 GiB × 8 = 96 m² big? And this is the RAM one expects per node cluster element for current LLM AI, never mind future AGI.
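
Redoing the multiplication (assuming one bitcell per bit and nothing shared between cells):

    cell_m2 = 330e-6 * 290e-6       # ~9.6e-8 m^2 per bitcell
    bits = 128 * 2**30 * 8          # 128 GiB expressed in bits, ~1.1e12
    print(cell_m2 * bits)           # ~1.05e5 m^2 for 128 GiB
    print(cell_m2 * 1e9)            # ~96 m^2, which is roughly what a single Gbit works out to

Either way, the point stands: it is enormous.
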
scotty79•1mo ago
NVLink Spine has 2 miles of wires.
moi2388•1mo ago
“ This represents more than a laboratory proof-of-concept; it’s a functional component manufactured using industry-standard processes.”

Nice AI text again

IntrepidPig•1mo ago
God it infuriates me
in_a_hole•1mo ago
Is it possible that this is actually just a very common writing pattern used by actual humans and that's the reason AI uses it so much?