frontpage.

Show HN: Solving NP-Complete Structures via Information Noise Subtraction (P=NP)

https://zenodo.org/records/18395618
1•alemonti06•5m ago•1 comment

Cook New Emojis

https://emoji.supply/kitchen/
1•vasanthv•7m ago•0 comments

Show HN: LoKey Typer – A calm typing practice app with ambient soundscapes

https://mcp-tool-shop-org.github.io/LoKey-Typer/
1•mikeyfrilot•10m ago•0 comments

Long-Sought Proof Tames Some of Math's Unruliest Equations

https://www.quantamagazine.org/long-sought-proof-tames-some-of-maths-unruliest-equations-20260206/
1•asplake•11m ago•0 comments

Hacking the last Z80 computer – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/FEHLHY-hacking_the_last_z80_computer_ever_made/
1•michalpleban•12m ago•0 comments

Browser-use for Node.js v0.2.0: TS AI browser automation parity with PY v0.5.11

https://github.com/webllm/browser-use
1•unadlib•13m ago•0 comments

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change

https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
1•mitchbob•13m ago•1 comment

Software Engineering Is Back

https://blog.alaindichiappari.dev/p/software-engineering-is-back
1•alainrk•14m ago•0 comments

Storyship: Turn Screen Recordings into Professional Demos

https://storyship.app/
1•JohnsonZou6523•14m ago•0 comments

Reputation Scores for GitHub Accounts

https://shkspr.mobi/blog/2026/02/reputation-scores-for-github-accounts/
1•edent•17m ago•0 comments

A BSOD for All Seasons – Send Bad News via a Kernel Panic

https://bsod-fas.pages.dev/
1•keepamovin•21m ago•0 comments

Show HN: I got tired of copy-pasting between Claude windows, so I built Orcha

https://orcha.nl
1•buildingwdavid•21m ago•0 comments

Omarchy First Impressions

https://brianlovin.com/writing/omarchy-first-impressions-CEEstJk
2•tosh•26m ago•1 comment

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
2•onurkanbkrc•27m ago•0 comments

Show HN: Versor – The "Unbending" Paradigm for Geometric Deep Learning

https://github.com/Concode0/Versor
1•concode0•28m ago•1 comment

Show HN: HypothesisHub – An open API where AI agents collaborate on medical res

https://medresearch-ai.org/hypotheses-hub/
1•panossk•31m ago•0 comments

Big Tech vs. OpenClaw

https://www.jakequist.com/thoughts/big-tech-vs-openclaw/
1•headalgorithm•33m ago•0 comments

Anofox Forecast

https://anofox.com/docs/forecast/
1•marklit•34m ago•0 comments

Ask HN: How do you figure out where data lives across 100 microservices?

1•doodledood•34m ago•0 comments

Motus: A Unified Latent Action World Model

https://arxiv.org/abs/2512.13030
1•mnming•34m ago•0 comments

Rotten Tomatoes Desperately Claims 'Impossible' Rating for 'Melania' Is Real

https://www.thedailybeast.com/obsessed/rotten-tomatoes-desperately-claims-impossible-rating-for-m...
3•juujian•36m ago•2 comments

The protein denitrosylase SCoR2 regulates lipogenesis and fat storage [pdf]

https://www.science.org/doi/10.1126/scisignal.adv0660
1•thunderbong•37m ago•0 comments

Los Alamos Primer

https://blog.szczepan.org/blog/los-alamos-primer/
1•alkyon•40m ago•0 comments

NewASM Virtual Machine

https://github.com/bracesoftware/newasm
2•DEntisT_•42m ago•0 comments

Terminal-Bench 2.0 Leaderboard

https://www.tbench.ai/leaderboard/terminal-bench/2.0
2•tosh•42m ago•0 comments

I vibe coded a BBS bank with a real working ledger

https://mini-ledger.exe.xyz/
1•simonvc•43m ago•1 comment

The Path to Mojo 1.0

https://www.modular.com/blog/the-path-to-mojo-1-0
1•tosh•45m ago•0 comments

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
5•sakanakana00•49m ago•1 comment

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•51m ago•0 comments

Hot Reloading in Rust? Subsecond and Dioxus to the Rescue

https://codethoughts.io/posts/2026-02-07-rust-hot-reloading/
4•Tehnix•52m ago•1 comment

Scientists create ultra fast memory using light

https://www.isi.edu/news/81186/scientists-create-ultra-fast-memory-using-light/
126•giuliomagnifico•2mo ago

Comments

lebuffon•1mo ago
Wow 300mm chips. They must be huge!

(I am sure they meant nm, but nobody is checking the AI output)

vlovich123•1mo ago
From the paper

> footprint of 330 × 290 µm² using the GlobalFoundries 45SPCLO

That’s a 45 nm process, but the units for the chip size probably should have been 330 µm? However I’m not well versed enough in the details to parse it out.

https://arxiv.org/abs/2503.19544

bgnn•1mo ago
I'm very familiar with this process as I use it regularly.

The area is massive. 330 µm × 290 µm are the X and Y dimensions, so the area is roughly 0.1 mm². You can see the comparison in Table 1. This is roughly 50,000 times larger than an SRAM cell in a 45 nm process.

This is the problem with photonic circuits. They are massive compared to electronics.
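
A rough back-of-the-envelope check of that ratio (a sketch only; the effective 45 nm SRAM bit area used here is an assumed round figure, not a number from the paper):

    # Photonic bitcell footprint quoted in the paper: 330 um x 290 um
    photonic_um2 = 330 * 290              # ~95,700 um^2
    # Assumed effective area per bit for a 45 nm SRAM macro, including overhead
    sram_um2 = 2.0                        # assumption, not from the paper
    print(photonic_um2 / 1e6)             # ~0.096 mm^2 per photonic bit
    print(photonic_um2 / sram_um2)        # ~48,000x larger per bit, consistent with "roughly 50,000x"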

bun_at_work•1mo ago
Is it prohibitively larger? And is the size a fundamental constraint of the technology, or is it possible to reduce the size?
adrian_b•1mo ago
The size is a fundamental constraint of optical technologies, because it is related to the wavelength of light, which is much bigger than the sizes of semiconductor devices.

This is why modern semiconductor devices no longer use lithography with visible light or even with near ultraviolet, but they must use extreme ultraviolet.

The advantage of such optical devices is speed and low power consumption in the optical device itself (ignoring the power consumption of lasers, which might be shared by many devices).

Such memories have special purposes in various instruments; they are not suitable as computer memories.

bgnn•1mo ago
Previous reply is correct.

To give a feeling: micro-ring resonators are anywhere between 10 and 40 micrometers in diameter. You also need a bunch of other waveguides. The process in the paper uses silicon waveguides, with a 400 nm width if I'm not wrong. So optical features unfortunately aren't scaling down the way CMOS technology does.

Fun fact: photolithography has the same limitation. They use all kinds of tricks (different optical effects to shrink the features) but are fundamentally limited by the wavelength used. This is why we are seeing a push to lower and lower wavelengths by ASML. That + multiple patterning helps to scale CMOS down.
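
To put those scales side by side (a sketch; the ring diameter and waveguide width are the figures quoted above, while the operating wavelength is an assumed typical telecom value, not a number from this thread):

    # Rough scale comparison between photonic and electronic feature sizes
    ring_diameter_nm = 20_000   # mid-range of the 10-40 um ring resonators quoted above
    waveguide_nm = 400          # silicon waveguide width quoted above
    node_nm = 45                # the electronic process node
    wavelength_nm = 1310        # assumed telecom wavelength; sets the floor for optical features
    print(ring_diameter_nm / node_nm)   # ~440x a 45 nm feature
    print(waveguide_nm / node_nm)       # ~9x even for the narrowest optical feature
    print(wavelength_nm / node_nm)      # ~29x; the wavelength is what blocks further shrinking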

pezezin•1mo ago
Would it be possible to use something similar to DWDM to store/process multiple bits in parallel in the same circuit?
bgnn•1mo ago
It isn't, unfortunately, as the physical size of the resonators needs to match a given wavelength. So for each wavelength you need a new circuit in parallel.
pezezin•1mo ago
I see, that's a pity then.
KK7NIL•1mo ago
It almost certainly refers to 300 mm wafers, which are the largest size used right now. They offer significantly better economics than the older 200 mm wafers or lab experiments done on even smaller (e.g. 100 mm) wafers.

The text in the article supports this:

> This is a commercial 300mm monolithic silicon photonics platform, meaning the technology is ready to scale today, rather than being limited to laboratory experiments.

xienze•1mo ago
This just in, OpenAI has already committed to buying the entire world’s supply once it becomes available.
AlOwain•1mo ago
I am not much into AI, but more demand for faster memory is good for all of us. Even if, in the short term, prices increase.
cs702•1mo ago
Cool. Memory bandwidth is a major bottleneck for many important applications today, including AI. Maybe this kind of memory "at the speed of light" can help alleviate the bottleneck?

For a second, I thought the headline was copied & pasted from the hallucinated 10-years-from-now HN frontpage that recently made the HN front page:

https://news.ycombinator.com/item?id=46205632

ilaksh•1mo ago
MRAM and MRAM-CIM are like 10 years ahead of this and going to make a huge impact on efficiency and performance in the next few years, right? Or so I thought I heard.

Memristors are also probably coming after MRAM-CIM and before photonic computing.

cycomanic•1mo ago
People have done these sorts of "optical computing" demonstrations for decades, despite David Miller showing that fundamentally digital computing with optical photons will be immensely power hungry (I say digital here, because there are some applications where analog computing can make sense, but it almost never relies on memory for bits).

Specifically, this paper is based on simulations, and I've only skimmed it, but the power-efficiency numbers sound great because they quote 40 GHz read/write speeds; however, these devices consume comparatively large power even when not reading or writing (the lasers have to be running constantly). I also think they did not include the contributions of the modulation and the required drivers (typically you need quite large voltages)? Somebody already pointed out that the size of these is massive, and that's again fundamental.

As someone working in the broad field, I really wish people would stop publishing this type of thing. While these numbers might sound impressive at first glance, they really are completely unrealistic. There are lots of legitimate applications of optics and photonics; we don't need to resort to this sort of stuff.

embedding-shape•1mo ago
> showing that fundamentally digital computing with optical photons will be immensely power hungry

> they really are completely unrealistic

Unrealistic only because they're power hungry? That sounds like a temporary problem, kind of like when we came up with a bunch of ML approaches we couldn't actually do in the 80s/90s because of the hardware resources required, but which work fine today.

Maybe even if the solutions aren't useful today, they could be useful in the future? Or maybe with these results, more people will be inspired to create solutions specifically targeting the power usage?

"we don't need to resort to this sort of stuff" makes it sound like this is all so beneath you and not deserving of attention, but why are you then paying attention to it?

gsf_emergency_6•1mo ago
The Miller limit is fundamentally due to photons being bosons: not great for digital logic (switches), as opposed to carrying info.

There are promising avenues to use "bosonic" nonlinearity to overtake traditional fermionic computing, but they are basically not being explored by EE departments despite (because of?) their oversized funding and attention.

scarmig•1mo ago
Universities believe that constantly putting out pieces that sound like some research is revolutionary and will change everything increases public support of science. It doesn't, because the vast majority of science is incremental and mostly learning about some weird, niche thing that probably won't translate into applications. This causes the public to misunderstand the role of scientific research and lose faith in it when it doesn't deliver on its promises (made by the university press office, not the researcher).
cycomanic•1mo ago
> > showing that fundamentally digital computing with optical photons will be immensely power hungry
>
> > they really are completely unrealistic
>
> Unrealistic only because they're power hungry? That sounds like a temporary problem, kind of like when we came up with a bunch of ML approaches we couldn't actually do in the 80s/90s because of the hardware resources required, but which work fine today.
>
> Maybe even if the solutions aren't useful today, they could be useful in the future? Or maybe with these results, more people will be inspired to create solutions specifically targeting the power usage?

No, they are fundamentally power hungry, because you essentially need a nonlinear response, i.e. photons need to interact with each other. However, photons are bosons and really dislike interacting with each other.

Same thing about the size of the circuits: it is determined by the wavelength of light, so fundamentally they are much larger than electronic circuits.

> "we don't need to resort to this sort of stuff" makes it sound like this is all so beneath you and not deserving of attention, but why are you then paying attention to it?

That's not what I said; in fact, they deserve my attention because they need to be called out, as the article clearly does not highlight the limitations.

gsf_emergency_6•1mo ago
I only upvoted to send a msg to the moderators not to upweight uni/company press releases :) Sadly, the energy of VC-culture goes into refining wolf-crying despite all the talk of due diligence, "thinking for yourself" and "understanding value".

The core section from the paper (linked below) is pp. 8-9.

2 mW for 100s of picoseconds is huge.

(Also GIANT voltages, if only to illustrate how coarse their simulations are):

As shown in Figure 6(a), even with up to 1 V of noise on each Q and QB node (resulting in a 2 V differential between Q and QB), the pSRAM bitcell successfully regenerates the previously stored data. It is important to note that higher noise voltages increase the time required to restore the original state, but the bitcell continues to function correctly due to its regenerative behavior.
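
Putting a number on "huge" (a sketch; the 2 mW figure is quoted above, while the 300 ps duration and the SRAM comparison energy are assumed ballpark values, not from the paper):

    # Energy for one regeneration event at the quoted power and an assumed duration
    power_w = 2e-3           # 2 mW, quoted above
    duration_s = 300e-12     # "100s of picoseconds"; 300 ps assumed here
    energy_j = power_w * duration_s
    print(energy_j)          # 6e-13 J, i.e. 0.6 pJ per event
    # Per-bit SRAM array access energies are often quoted in the femtojoule
    # range (assumed ballpark), so this is roughly two orders of magnitude more.
    print(energy_j / 5e-15)  # ~120x an assumed 5 fJ SRAM access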

fooker•1mo ago
40 GHz memory/compute for 10-100x the power sounds like a great idea to me.

We are going to have abundant energy at some point.

adrian_b•1mo ago
Free version of the research paper:

https://arxiv.org/abs/2503.19544v1

The memory cell is huge in comparison with semiconductor memories, but it is very fast, with a 40 GHz read/write speed.

There are important applications for a very high speed small memory, e.g. for digital signal processing in radars and other such devices, but this will never replace a general-purpose computer memory, where much higher bit densities are needed.

aj7•1mo ago
This is a hugely important point. The de Broglie wavelength of the photon is hundreds to thousands of nm. There is no possibility of VLSI scale-up, a point conveniently omitted in hundreds of decks and at least $1B in investment. Photonic techniques will remain essentially a part of the analog palette in system design.
jdub•1mo ago
Careful about "never"… individual transistors used to be large, heavy, power hungry, and expensive.
fsh•1mo ago
That's not true. Transistors were commercialized a few years after their invention, and already the first generation vastly outperformed vacuum tubes in size, weight, and power. Optical computing has been done for a few decades now with very little progress.
jdub•1mo ago
(I was being a little facetious – vacuum tubes being the original "transistors".)
ElectricalUnion•1mo ago
I might have done the math wrong, but is this really supposed to be 330 * 290 um² * 128GiB * 8 = 96 m² big? And this is the RAM one expects per node cluster element for current LLM AI, nevermind future GAI.
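
A quick re-run of that estimate under the same assumptions (one 330 × 290 µm² cell per stored bit, 128 GiB of capacity, no sharing or periphery), which suggests an even larger footprint:

    cell_m2 = 330e-6 * 290e-6   # one photonic bitcell, ~9.6e-8 m^2
    bits = 128 * 2**30 * 8      # 128 GiB expressed in bits, ~1.1e12
    print(cell_m2 * bits)       # ~1.05e5 m^2, i.e. on the order of 0.1 km^2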
scotty79•1mo ago
NVLink Spine has 2 miles of wires.
moi2388•1mo ago
“ This represents more than a laboratory proof-of-concept; it’s a functional component manufactured using industry-standard processes.”

Nice AI text again

IntrepidPig•1mo ago
God it infuriates me
in_a_hole•1mo ago
Is it possible that this is actually just a very common writing pattern used by actual humans and that's the reason AI uses it so much?