frontpage.

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
50•thelok•3h ago•6 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
115•AlexeyBrin•6h ago•20 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
811•klaussilveira•21h ago•246 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
49•vinhnx•4h ago•7 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
91•1vuio0pswjnm7•7h ago•102 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
72•onurkanbkrc•6h ago•5 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1053•xnx•1d ago•600 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
470•theblazehen•2d ago•174 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
45•alephnerd•1h ago•14 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
197•jesperordrup•11h ago•67 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
8•languid-photic•3d ago•1 comment

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
9•surprisetalk•1h ago•2 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
537•nar001•5h ago•248 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
204•alainrk•6h ago•311 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
33•rbanffy•4d ago•6 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
26•marklit•5d ago•1 comment

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
110•videotopia•4d ago•30 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
63•mellosouls•4h ago•68 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
68•speckx•4d ago•71 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
21•sandGorgon•2d ago•11 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
271•isitcontent•21h ago•36 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
199•limoce•4d ago•110 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
284•dmpetrov•21h ago•151 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
155•matheusalmeida•2d ago•48 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
553•todsacerdoti•1d ago•267 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
424•ostacke•1d ago•110 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
41•matt_d•4d ago•16 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
348•eljojo•1d ago•214 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
466•lstoll•1d ago•308 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
367•vecti•23h ago•167 comments

Replication of Quantum Factorisation Records with a VIC-20, an Abacus, and a Dog

https://eprint.iacr.org/2025/1237
84•teddyh•6mo ago

Comments

rahimnathwani•6mo ago
Previous: https://news.ycombinator.com/item?id=44538693
cbm-vic-20•6mo ago
> We verified this by taking a recently-calibrated reference dog, Scribble, depicted in Figure 6, and having him bark three times, thus simultaneously factorising both 15 and 21. This process wasn’t as simple as it first appeared because Scribble is very well behaved and almost never barks.
trhway•6mo ago
One can always press the doorbell button - works like a charm with my Chihuahua. Though he prefers to factorize numbers more like 529 than 21.
tomgag•6mo ago
I guess I'll post it here as well. This is my personal take on the whole story: https://gagliardoni.net/#20250714_ludd_grandpas

A relevant quote: "this is your daily reminder that "How large is the biggest number it can factorize" is NOT a good measure of progress in quantum computing. If you're still stuck in this mindset, you'll be up for a rude awakening."

Related: this is from Dan Bernstein: https://blog.cr.yp.to/20250118-flight.html#moon

A relevant quote: "Humans faced with disaster tend to optimistically imagine ways that the disaster will be avoided. Given the reality of more and more user data being encrypted with RSA and ECC, the world will be a better place if every effort to build a quantum computer runs into some insurmountable physical obstacle"

jgeada•6mo ago
Except that factorization is exactly what is needed to break encryption, so what QC can do in that realm of mathematics and computing is exactly the critical question that needs to be asked.

And a reminder that in the world of non-QC computing, right from its very roots, the ability of computers improved in mind-bogglingly large steps every year.

QC records, other than the odd statistic about how many qubits they can make, have largely not made any strides toward solving real-world-sized problems (with the exception of those that use QCs purely as analog computers to model quantum behavior).

tomgag•6mo ago
I beg you to read the full story and to not extrapolate from the quote.

Also, in the world of QC, right from its very roots, the ability of QC improved in mind-bogglingly large steps every year. It's just that you cannot see it if you look only at the wrong metric, i.e., factorization records.

It's a bit like saying "classical computing technology has not improved for 50 years; it's only recently that we finally started to have programs that can write other programs".

madars•6mo ago
A great resource for seeing the progress visually is https://sam-jaques.appspot.com/quantum_landscape (click "Prev"/"Next" to see other years) - it makes very clear that incredible progress is happening - and note that it is a log-log plot.
jgeada•6mo ago
There is a reason QC factorization records haven't shifted much over the past years. The number of qubits by itself isn't enough. You need to be able to do computation on them, and for long enough to run Shor's algorithm until it produces a solution. How the qubits are connected, how reliable the logic gates are, and how long you can maintain quantum coherence with enough fidelity to get results are equally important.

That no significant factorization milestones have moved is a huge black eye for this field. Even worse, that no one has ever been able to truly run Shor's algorithm on even trivial numbers is a shocking indictment of the whole field.

tomgag•6mo ago
The reasons you listed are exactly why the lack of factorization records should not be seen as a "critical black eye to this field", because they are not a relevant measure of progress. Again, think of the parallel with LLMs: it took decades to get out of the "AI winter", because that's what non-linear technological progress looks like.

With QC, the risk (and I am not saying this is going to happen, but I'm saying that it is a non-overlookable risk) is that we end up transitioning from "QC can only factorize 15" to "RSA-2048 is broken" in such a sudden way that the industry has no time to adapt.

theuirvhhjj588•6mo ago
You keep saying it's not a relevant figure, but that is absurd.

Factorisation is one of the few problems we know to be in BQP and believe to be outside P. You could make the argument that we're not at a stage where running Shor's algorithm on real integers is feasible, hence factorisation records don't capture the progress in the field... but that's perhaps too much honesty for a field that is riding on a bubble.

wasabi991011•6mo ago
> You could make the argument that we're not at a stage where running Shor's alg. on integers is feasible hence integers don't capture the progress in the field...

That's exactly what they are saying, and I'll say it too. Maybe they weren't explicit enough, but reread their comments as "not a relevant figure [to measure current progress]".

mlyle•6mo ago
I think the main thing is: quantum computing doesn't really work right now, at all.

Imagine if you had crummy, unreliable transistors. You couldn't build any computing machine out of them.

Indeed, in the real world progress looked like:

* Useless devices (1947)

* Very limited devices (hearing aids)

* Hand-selected, lab devices with a few hundred transistors, computing things as stunts (1955)

* The IBM 1401 -- practical transistorized computers (1959) -- because devices got reliable enough and ancillary technologies like packaging improved.

In other words, there was a pattern of many years of seemingly negligible progress and then a sudden step once the foundational component reached a critical point. I think that's the point of the person you're talking to about this.

And then just a couple of years later we had the reliability to move to integrated circuits for logic.

If you looked at the "transistorized factorization record" it would be static for several years, before making a couple steps of several orders of magnitude each.

jgeada•6mo ago
Those useless devices cracked the Enigma machine and several other cyphers in the span of WWII.

No, electronics-based computers were not useless from the start, and they very, very rapidly started making significant progress.

bawolff•6mo ago
> That no significant factorization milestones have moved is a huge critical black eye to this field.

But everyone knew that it wasn't going to move going in. It would have been shocking if it had. It was never a reasonable medium-term goal.

kevinventullo•6mo ago
> A better measure of progress (valid for cryptanalysis, which is, anyway, a very minor aspect of why QC are interesting IMHO) would be: how far are we from fully error-corrected and interconnected qubits? I don't know the answer, or at least I don't want to give estimates here. But I know that in the last 10 or more years, all objective indicators of progress that point to that cliff have been steadily improving: qubit fidelity, error rate, coherence time, interconnections... At this point I don't think it's wise to keep trashing the field of quantum security as "academic paper churning".

I think the problem is that “objective indicators pointing to the cliff” is pretty handwavy. Could there be a widely agreed-upon function of qubit fidelity, error rate, coherence time, and interconnections that measures, even coarsely, how far we are from the cliff? It seems like the cliff has been ten years away for a very long time, so you might forgive an outsider for believing there has been a lot of motion without progress.

wasabi991011•6mo ago
> Could there be a widely agreed-upon function of qubit fidelity, error rate, coherence time, and interconnections that measures, even coarsely, how far we are from the cliff?

They've tried (like "Quantum Volume"), but it's really hard to summarize an entirely new computing paradigm into a single number, especially since different hardware platforms will make wildly different tradeoffs.
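
For concreteness, IBM's "Quantum Volume" is one such attempt. A rough sketch of the definition, stated from memory (so treat it as an approximation of the Cross et al. 2019 formulation):

    \log_2 V_Q = \max_n \min(n, d(n))

where d(n) is the largest depth at which random width-n model circuits still pass the heavy-output test (heavy outputs observed more than 2/3 of the time). It deliberately collapses fidelity, connectivity, and coherence into a single number, which is exactly why it hides the tradeoffs different platforms make.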

theuirvhhjj588•6mo ago
That's a cop out.

I agree with what you're saying, but what you're also essentially saying is that QCs are so useless at the moment that the granularity of integers is not enough to measure progress on the hardware.

ethan_smith•6mo ago
Shor's algorithm still requires O(log(N)³) qubits and O(log(N)²log(log(N))log(log(log(N)))) operations to factor N, which is why these satirical "records" highlight the absurdity of focusing solely on factorization milestones rather than addressing the exponential scaling challenges.
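
Taking the operation count above at face value, here is a minimal sketch of how it scales with the bit length n = log2(N). Asymptotics only: constants and lower-order terms are ignored, so only the ratios between rows mean anything.

    #include <math.h>
    #include <stdio.h>

    /* Rough scaling of the quoted gate count:
       f(n) ~ n^2 * log(n) * log(log(n)), with n the bit length of the
       number being factored. No constants, so only ratios between rows
       are meaningful. */
    int main(void) {
        int bits[] = {8, 64, 512, 2048};
        for (int i = 0; i < 4; i++) {
            double n = bits[i];
            printf("n = %4d bits: ~%.3g ops (up to constants)\n",
                   bits[i], n * n * log(n) * log(log(n)));
        }
        return 0;
    }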
spr-alex•6mo ago
There have been advances, at least for RSA: work from Håstad, Ekerå, and Gidney has brought this to O(N) qubits. On the runtime, the papers are a little harder to read as they differ in notation, but O(log(N)^3) runtime is what I recall. It's possible I am wrong on the runtime and it's O(log(N)^2).
adgjlsfhk1•6mo ago
The thing that still feels off to me is that you should be able to run 8-bit Shor's algorithm without error correction, right? Sure, we don't have reliable error-corrected qubits, but factoring a number that small should be possible (even with a fairly high error rate) on current hardware. Sure, it won't be 100% reliable, but if published results showed that in 2010 it got the right answer 10% of the time and in 2025 it gets it right 25% of the time, that would at least be a measure of progress.
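
For reference, the quantum part of Shor's algorithm only finds the order r of a base a modulo N; everything around it is classical. A toy classical stand-in (hypothetical code; it brute-forces the order that the quantum circuit would estimate) shows what an 8-bit run is actually asked to produce:

    #include <stdio.h>

    /* Greatest common divisor (Euclid). */
    static unsigned gcd(unsigned a, unsigned b) {
        while (b) { unsigned t = a % b; a = b; b = t; }
        return a;
    }

    /* Multiplicative order of a mod n: smallest r > 0 with a^r = 1 (mod n).
       Brute force; this is the classical stand-in for the quantum
       period-finding step. Small n only, overflow ignored. */
    static unsigned order(unsigned a, unsigned n) {
        unsigned r = 1, x = a % n;
        while (x != 1) { x = (x * a) % n; r++; }
        return r;
    }

    /* a^e mod n by repeated squaring. */
    static unsigned powmod(unsigned a, unsigned e, unsigned n) {
        unsigned x = 1;
        a %= n;
        while (e) {
            if (e & 1) x = (x * a) % n;
            a = (a * a) % n;
            e >>= 1;
        }
        return x;
    }

    int main(void) {
        unsigned N = 21, a = 2;    /* base a must be coprime to N */
        unsigned r = order(a, N);  /* the step Shor's does quantumly */
        printf("order of %u mod %u = %u\n", a, N, r);
        if (r % 2 == 0 && powmod(a, r / 2, N) != N - 1) {
            unsigned h = powmod(a, r / 2, N);  /* a^(r/2) mod N */
            printf("factors: gcd(%u+1,%u) = %u, gcd(%u-1,%u) = %u\n",
                   h, N, gcd(h + 1, N), h, N, gcd(h - 1, N));
        } else {
            printf("unlucky base; retry with another a\n");
        }
        return 0;
    }

The 10%-vs-25% framing works precisely because the classical side can cheaply verify any candidate answer: you just multiply the factors back together.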
hagbard_c•6mo ago
After having read this paper I'm busy working on the replication of String Theory with a plate of Spaghetti, a packet of instant Ramen noodles and a pair of Octopuses. I would have used a single octopus but those 8 arms don't cover the 12 dimensions in String Theory. Technically a single squid might suffice - it has 8 arms, 2 tentacles and 2 fins which makes 12 - but that wouldn't be fair to the dimensions which get stuck with the fins while others get to walk away with those tentacles.
OhMeadhbh•6mo ago
Ha! Peter Gutmann is always fun.
spr-alex•6mo ago
On a related note, the search space for https://www.qdayprize.org/curves seems far too small to be a meaningful contest, and the rules don't seem to address how they judge the validity of the "quantumness" when sifting such small groups.
mycall•6mo ago
Call me crazy but I love this. Thanks teddyh.
brontitall•6mo ago
I think there’s a bug in Figure 3. The unsigned m will always be >= 0 so the while loop will not terminate.
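
For anyone who hasn't hit this class of bug: a hypothetical reconstruction (not the paper's actual Figure 3 code) of why a countdown loop on an unsigned counter never exits:

    #include <stdio.h>

    int main(void) {
        /* Buggy pattern: unsigned counter tested with >= 0. The test is
           always true, and decrementing past 0 wraps to UINT_MAX, so the
           loop never terminates on its own. */
        for (unsigned m = 3; m >= 0; m--) {  /* condition always true */
            if (m > 3) { puts("wrapped around to UINT_MAX!"); break; }
            printf("m = %u\n", m);           /* prints 3, 2, 1, 0 */
        }

        /* Conventional fix: test before decrementing, or use a signed type. */
        unsigned m = 3;
        while (m > 0) {
            printf("m = %u\n", m);
            m--;
        }
        return 0;
    }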