Which of the two options given is stronger? Presumably the 512 one?
> Additionally, all the post-quantum algorithms implemented by OpenSSH are "hybrids" that combine a post-quantum algorithm with a classical algorithm. For example mlkem768x25519-sha256 combines ML-KEM, a post-quantum key agreement scheme, with ECDH/x25519, a classical key agreement algorithm that was formerly OpenSSH's preferred default. This ensures that the combined, hybrid algorithm is no worse than the previous best classical algorithm, even if the post-quantum algorithm turns out to be completely broken by future cryptanalysis.
The 256 one is actually newer than the 512 one, too:
> OpenSSH versions 9.0 and greater support sntrup761x25519-sha512 and versions 9.9 and greater support mlkem768x25519-sha256.
ML-KEM (originally "CRYSTALS-Kyber") was available; it's just that the Tiny/OpenSSH folks decided not to choose that particular algorithm (for reasons beyond my pay grade).
NIST announced their competition in 2016 with the submission deadline being in 2017:
* https://en.wikipedia.org/wiki/NIST_Post-Quantum_Cryptography...
TinySSH added SNTRUP in 2018, with OpenSSH following in 2019/2020:
* https://blog.josefsson.org/2023/05/12/streamlined-ntru-prime...
SSH just happened to pick one of the candidates that NIST decided not to go with.
https://news.ycombinator.com/item?id=32366614
I'm curious where you got the idea that they had mlkem available to them? They disagree with you.
> We (OpenSSH) haven't "disregarded" the winning variants, we added NTRU before the standardisation process was finished and we'll almost certainly add the NIST finalists fairly soon.
Nothing in his statements talks about 'availability', just a particular choice (from the ideas floating around at the time).
CRYSTALS-Kyber (now ML-KEM) was available at the same time as SNTRUP because they were both candidates in the NIST competition. NTRU was a round-three finalist and NTRU Prime an alternate (alongside CRYSTALS-Kyber as a finalist):
* https://en.wikipedia.org/wiki/NIST_Post-Quantum_Cryptography...
Given that they were both candidates in the same competition, they would have been available at the same time. Tiny/OpenSSH simply chose a candidate that ended up not winning (I'm not criticizing / judging their choice: they made a call, and it happened to be a different call than NIST).
https://news.ycombinator.com/item?id=37520065
https://www.metzdowd.com/pipermail/cryptography/2016-March/0...
The first version of NTRU Prime in an SSH server was implemented in TinySSH and later adopted by OpenSSH. Bernstein provided new guidance, and OpenSSH developed an updated algorithm that TinySSH implemented in return.
The NIST approval process was fraught, and Bernstein ended up filing a lawsuit over treatment that he received. I don't know how that has progressed.
https://news.ycombinator.com/item?id=32360533
While Kyber may have been the winning algorithm, there will be great preference in the community for Bernstein's NTRU Prime.
There's IETF WG drafts for use of Kyber / ML-KEM, but none for NTRU, so I'm not sure about that:
* https://datatracker.ietf.org/doc/draft-ietf-tls-mlkem/
* https://datatracker.ietf.org/doc/draft-ietf-tls-ecdhe-mlkem/
* https://datatracker.ietf.org/doc/draft-ietf-tls-hybrid-desig...
* https://datatracker.ietf.org/doc/draft-ietf-ipsecme-ikev2-ml...
And given that NTRU made it to the third round, and NTRU Prime is labelled as an alternate, I'm not sure how strong a claim Bernstein can make to being ill-treated by NIST.
While NTRU Prime is not implemented in TLS, if it has even half the lifespan of DSA in SSH then it will be quite long lived.
So while some SSH folks just happened to pick NTRU after looking at the options at a particular point in time, some of the other most widely deployed systems (TLS, IPsec) will not be using it. So I'm not quite sure how defensible the "great preference" claim is.
Have you ever visited their site?
I use this in a variety of ways, thousands of logins per day. I don't see much love for AES.
Yes, I know. I mention this timeline in another one of my comments:
* https://news.ycombinator.com/item?id=44866802
> I use this in a variety of ways, thousands of logins per day. I don't see much love for AES.
So? Given its focus on low(er)-performance systems, perhaps on chips without AES-NI, it's no surprise that TinySSH does not have AES. Further, Dropbear, another implementation often used on smaller footprints, does have AES and recently added ML-KEM:
* https://github.com/mkj/dropbear/commit/1748ccae5090d511753c0...
PuTTY added ML-KEM in 0.83 earlier this year. So I'm not sure how talking about a niche SSH implementation supports your claim that "there will be great preference in the community for Bernstein's NTRU Prime."
The evidence suggests to me that implementations have been adding NIST's choice(s) as they have become available.
> all the post-quantum algorithms implemented by OpenSSH are "hybrids" that combine a post-quantum algorithm with a classical algorithm. For example mlkem768x25519-sha256 combines ML-KEM, a post-quantum key agreement scheme, with ECDH/x25519, a classical key agreement algorithm that was formerly OpenSSH's preferred default. This ensures that the combined, hybrid algorithm is no worse than the previous best classical algorithm, even if the post-quantum algorithm turns out to be completely broken by future cryptanalysis.
Using a hybrid scheme ensures that you're not actually losing any security compared to the pre-quantum implementation.
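To make the construction concrete, here's a minimal Python sketch of the general hybrid idea, not OpenSSH's exact KDF or wire format; the fixed byte strings stand in for the two shared secrets that real ML-KEM and X25519 exchanges would produce:

```python
import hashlib

def combine_shared_secrets(mlkem_secret: bytes, x25519_secret: bytes,
                           transcript: bytes) -> bytes:
    """Toy hybrid combiner: hash both shared secrets (plus the handshake
    transcript) into one key. An attacker must break BOTH key agreements
    to recover it, so the hybrid is no weaker than X25519 alone."""
    h = hashlib.sha256()
    h.update(mlkem_secret)   # post-quantum component (e.g. ML-KEM-768)
    h.update(x25519_secret)  # classical component (ECDH/X25519)
    h.update(transcript)     # binds the key to this particular handshake
    return h.digest()

# Illustration only: in a real handshake these values come from the two
# key agreements; fixed placeholder bytes are used here.
key = combine_shared_secrets(b"\x01" * 32, b"\x02" * 32, b"demo-transcript")
print(key.hex())
```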
Since Quantum Computers at scale aren't real yet, and those kinds of issues very much are, you'd think that'd be quite a trade-off. But so much work has gone into security research and formal verification over the last 10 years that the trade-off really does make sense.
It's a trade-off, yes, but that doesn't make it useless.
Aside from the marketing bluff, quantum computing is nowhere near close.
If I have a secret, A, and I encrypt it with classical algorithm X such that it becomes A', then encrypt the result again with nonclassical algorithm Y such that it becomes A'', doesn't any claim that applying the second algorithm could make it weaker imply that any X-encrypted string could later be made easier to crack by applying Y?
Or is it that by doing them sequentially you could potentially reveal some information about when the encryption took place?
If they're not independent, you could end up in a situation where the second algorithm is correlated with the first in some way and they cancel each other out. (Toy example: suppose K1 == K2 and the algorithms are OneTimePad and InvOneTimePad; they'd just cancel out to give the null encryption algorithm. More realistically, if I cryptographically break K2 from the outer encryption and K1 came from the same seed, it might be easier to find.)
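A concrete version of that toy example, assuming an XOR pad (which is its own inverse), showing the two layers cancelling when the keys are not independent:

```python
import os

def otp(data: bytes, key: bytes) -> bytes:
    # XOR one-time pad; applying it twice with the same key is a no-op
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"attack at dawn!!"
k1 = os.urandom(len(plaintext))
k2 = k1                      # the pathological case: the two keys are not independent

inner = otp(plaintext, k1)   # first ("inner") layer
outer = otp(inner, k2)       # second ("outer") layer, here the inverse of the first
assert outer == plaintext    # the layers cancel: net effect is no encryption at all
```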
Some government and military standards do call for multiple layers of encryption when handling data, but it's just that: multiple layers. You can't ever really make that kind of encryption weaker by adding a new "outer" layer. But you can make encryption weaker if you add a new "inner" layer that handles the plaintext. Side-channels in that inner layer can persist even through multiple layers of encryption.
If I recall my crypto classes and definitions correctly: if you have a perfect encryption X, a ciphertext C = X(K, P) carries zero information about P unless you know K. Thus, once X is applied, Y is not relevant anymore.
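For reference, the "zero information" property being recalled here is Shannon's definition of perfect secrecy (which the one-time pad satisfies):

```latex
\Pr[\,P = p \mid C = c\,] \;=\; \Pr[\,P = p\,]
\qquad \text{for all plaintexts } p \text{ and ciphertexts } c \text{ with } \Pr[C = c] > 0
```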
Once you have non-perfect encryption, it depends on X and Y. Why shouldn't some structure in a post-quantum algorithm give you information about, say, the cycle length of the modular exponentiation underlying RSA? That information could in turn shave fractions of bits off the effective key length of the underlying algorithm. Those could be the bits that make it feasible to brute-force. Or they could be just another step.
On the other hand, proving that this is impossible is ... well, would you have thought that a silly sequence about rabbits would be related to a ratio well-known in art? There are such crazy connections in math. Proving that two things cannot possibly be connected is about the hardest thing there is.
But that's the thing about crypto: It has to last 50 - 100 years. RSA is on a trajectory out. It had a good run. Now we have new algorithms with new drawbacks.
I could see it being more of a problem for signing.
Based on what we've seen so far in industry research, I'd guess that enabling Denial of Service is the most common kind of issue.
https://en.wikipedia.org/wiki/Multiple_encryption
The rest of the article has some stuff on what can go wrong if the implementations aren't truly independent.
However, what's notable is that the published CNSA 2.0 algorithms in this context are exclusively of the post-quantum variety, and even though there is no explicit prohibition on hybrid constructions, NSA publicly deems them unnecessary (from their FAQ [0]):
> NSA has confidence in CNSA 2.0 algorithms and will not require NSS developers to use hybrid certified products for security purposes.
[0] https://www.nsa.gov/Press-Room/News-Highlights/Article/Artic...
> However, product availability and interoperability requirements may lead to adopting hybrid solutions.
I was thinking about whether to move the Terminal-based microblogging / chat app I'm building into this direction.
(Especially after watching several interviews with Pavel Durov and listening to what he went through...)
European and US security agencies were VERY interested in harassing him and his engineers for leverage.
Also, why would a blog website need SSH?
It's just a nerdy venture. It's Terminal-only. :-) https://itter.sh
The macOS app Secretive [1] stores SSH keys in the Secure Enclave. To make it work, they’ve selected an algorithm supported by the SE, namely ecdsa-sha2-nistp256.
I don’t think SE supports PQ algorithms, but would it be possible to use a “hybrid key” with a combined algorithm like mlkem768×ecdsa-sha2-nistp256, in a way that the ECDSA part is performed by the SE?
If you look at http://mdoc.su/o/ssh_config.5#KexAlgorithms and http://bxr.su/o/usr.bin/ssh/kex-names.c#kexalgs, `ecdsa-sha2-nistp256` is not a valid option for the setting (although `ecdh-sha2-nistp256` is).
Thanks!
https://developer.apple.com/documentation/cryptokit/secureen...
Not totally sure that I'm reading it right, since I've never done macOS development before, but I'm a big fan of Secretive and use it whenever possible. If I've got it right, maybe Secretive can add PQ support once ML-KEM is out of beta.
FIPS compliance does require use of specific algorithms. ML-KEM is NIST approved and AFAIK NIST is on record saying that hybrid KEMs are fine. My understanding is therefore that it would be possible for mlkem768x25519-sha256 (supported by OpenSSH) to be certified.
caveat: IANAFA (I am not a FIPS auditor)
Right, but if you use the certified version of OpenSSH, it will only allow you to use certain algorithms.
> ML-KEM is NIST approved and AFAIK NIST is on record saying that hybrid KEMs are fine. My understanding is therefore that it would be possible for mlkem768x25519-sha256 (supported by OpenSSH) to be certified.
ML-KEM is allowed, and SHA-256 is allowed. But AFAIK, x25519 is not, although finding a definitive list is a lot more difficult for 140-3 than it was for 140-2, so I'm not positive. So I don't think (but IANAFA as well) mlkem768x25519-sha256 would be allowed, although I would expect a hybrid that used ECDH over a NIST curve instead of x25519 would probably be OK. But again, IANAFA, and would be happy if I were wrong.
I don't have a definitive reference for this though.
See perhaps §3.2, PQC-Classical Hybrid Protocols from interim report "Transition to Post-Quantum Cryptography Standards" (draft):
* https://nvlpubs.nist.gov/nistpubs/ir/2024/NIST.IR.8547.ipd.p...
No specific algorithm is explicitly mentioned, but the general idea/technique is discussed.
We're building something that even the smartest ai or the fastest quantum computer can't bypass and we need some BADASS hackers...to help us finish it and to pressure test it.
Any takers?? Reach out: cryptiqapp.com (sorry for link but this is legit collaborative and not promotional)
Can you explain this a bit more?
pilif•6mo ago
As far as I understand, the key material for any post quantum algorithm is much, much larger compared to non-quantum algorithms which leads to huge overheads in network traffic and of course CPU time.
[1]: https://eprint.iacr.org/2025/1237
calibas•6mo ago
> After our successful factorisation using a dog, we were delighted to learn that scientists have now discovered evidence of quantum entanglement in other species of mammals such as sheep [32]. This would open up an entirely new research field of mammal-based quantum factorisation. We hypothesise that the production of fully entangled sheep is easy, given how hard it can be to disentangle their coats in the first place. The logistics of assembling the tens of thousands of sheep necessary to factorise RSA-2048 numbers is left as an open problem.
fxwin•6mo ago
"Quantum computers don't exist yet, why go to all this trouble?"
Because of the "store now, decrypt later" attack mentioned above. Traffic sent today is at risk of decryption unless post-quantum key agreement is used.
"I don't believe we'll ever get quantum computers. This is a waste of time"
Some people consider the task of scaling existing quantum computers up to the point where they can tackle cryptographic problems to be practically insurmountable. This is a possibility. However, it appears that most of the barriers to a cryptographically-relevant quantum computer are engineering challenges rather than underlying physics. If we're right about quantum computers being practical, then we will have protected vast quantities of user data. If we're wrong about it, then all we'll have done is moved to cryptographic algorithms with stronger mathematical underpinnings.
Not sure I'd take the cited paper (while fun to read) too seriously as a source informing my opinion on the risks of using quantum-insecure encryption, rather than as a cynical take on hype and window dressing in QC research.
sigmoid10•6mo ago
I've heard this 15 years ago when I started university. People claimed all the basics were done, that we "only" needed to scale. That we would see practical quantum computers in 5-10 years. Today I still see the same estimates. Maybe 5 years by extreme optimists, 10-20 years by more reserved people. It's the same story as nuclear fusion. But who's prepping for unlimited energy today? Even though it would make sense to build future industrial environments around that if they want to be competitive.
fxwin•6mo ago
This claim is fundamentally different from what you quoted.
> But who's prepping for unlimited energy today?
It's about trade-offs: it costs almost nothing to switch to PQC methods, but I can't see a way to "prep for unlimited energy" that doesn't come with a huge cost/waste of time in the case where it doesn't happen.
thesz•5mo ago
[1] https://www.wired.com/2012/10/fuel-from-air/
thayne•6mo ago
It costs:
- development time to switch things over
- more computation, and thus more energy, because PQC algorithms aren't as efficient as classical ones
- more bandwidth, because PQC algorithms require larger keys
throw0101a•6mo ago
Not wrong, but given these algorithms are mostly used at connection setup, how much cost is actually being incurred relative to the entire session? Certainly if your sessions are short-lived then the 'overhead' of PQC/hybrid is higher, but I'd be curious to know the actual byte and energy costs over and above non-PQC/hybrid, i.e., how many bytes/joules for a non-PQC exchange and how many more by adding PQC. E.g.
> Unfortunately, many of the proposed post-quantum cryptographic primitives have significant drawbacks compared to existing mechanisms, in particular producing outputs that are much larger. For signatures, a state of the art classical signature scheme is Ed25519, which produces 64-byte signatures and 32-byte public keys, while for widely-used RSA-2048 the values are around 256 bytes for both. Compare this to the lowest security strength ML-DSA post-quantum signature scheme, which has signatures of 2,420 bytes (i.e., over 2kB!) and public keys that are also over a kB in size (1,312 bytes). For encryption, the equivalent would be comparing X25519 as a KEM (32-byte public keys and ciphertexts) with ML-KEM-512 (800-byte PK, 768-byte ciphertext).
* https://neilmadden.blog/2025/06/20/are-we-overthinking-post-...
"The impact of data-heavy, post-quantum TLS 1.3 on the Time-To-Last-Byte of real-world connections" (PDF):
* https://csrc.nist.gov/csrc/media/Events/2024/fifth-pqc-stand...
(And development time is also generally one-time.)
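As a rough back-of-the-envelope, using only the sizes from the blog post quoted above (one signature, one signature public key, one KEM public key, one KEM ciphertext; real handshakes differ in what they actually send, so treat it as illustrative only):

```python
# Per-handshake sizes (bytes) taken from the blog post quoted above.
classical    = {"sig": 64,   "sig_pk": 32,   "kem_pk": 32,  "kem_ct": 32}   # Ed25519 + X25519
post_quantum = {"sig": 2420, "sig_pk": 1312, "kem_pk": 800, "kem_ct": 768}  # lowest-strength ML-DSA + ML-KEM-512

c_total, pq_total = sum(classical.values()), sum(post_quantum.values())
print(f"classical handshake material:    {c_total} bytes")
print(f"post-quantum handshake material: {pq_total} bytes")
print(f"extra per handshake: {pq_total - c_total} bytes (~{(pq_total - c_total)/1024:.1f} KiB)")
```

A few extra KiB once per connection, which supports the point that the cost is mostly a one-time setup overhead rather than something that scales with session traffic.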
thayne•5mo ago
I don't think the cost is large, and I agree that given the tradeoff, the cost is probably worth it, but there is a cost, and I'm not sure it can be categorized as "almost nothing".
djmdjm•5mo ago
This is a one time cost, and generally the implementations we're switching to are better quality than the classical algorithms they replace. For instance, the implementation of ML-KEM we use in OpenSSH comes from Cryspen's libcrux[1], which is formally-verified and quite fast.
[1] https://github.com/cryspen/libcrux
> - more computation, and thus more energy, because PQC algorithms aren't as efficient as classical ones
ML-KEM is very fast. In OpenSSH it's much faster than classic DH at the same security level and only slightly slower than ECDH/X25519.
> - more bandwidth, because PQC algorithms require larger keys
For key agreement, it's barely noticeable. ML-KEM public keys are slightly over 1 kB. Again this is larger than ECDH but comparable to classic DH.
PQ signatures are larger, e.g. an ML-DSA signature is about 3 kB, but again this only happens once or twice per SSH connection and is totally lost in the noise.
unethical_ban•6mo ago
The costs to migrate to PQC continue to drop as they become mainstream algorithms. Second, the threat exists /now/ of organizations capturing encrypted data to decrypt later. There is no comparable current threat of "not preparing for fusion", whatever that entails.
spauldo•5mo ago
It's great for the environment but for most people not much would change.
pclmulqdq•6mo ago
At some point, someone may discover some new physics that shows that all of these "engineering challenges" were actually a physics problem, but quantum physics hasn't really advanced in the last 30 years so it's understandable that the physicists are confused about what's wrong.
pclmulqdq•6mo ago
The D-Wave machines also aren't capable of running Shor's algorithm or any other quantum-accelerated version of this problem.
maratc•6mo ago
He presented us with a picture of him and a number of other very important scientists in this field, none of them sharing his attitude. We then joked that there is a quantum entanglement of Nobel prize winners in the picture.
westurner•5mo ago
The universe is constantly doing large, scaled quantum computations.
The number of error-corrected qubits per QC will probably increase at an exponential rate.
Whether there is a problem decomposition strategy for RSA could change.
Oh, entanglement and the prize! Adherence to Bell's is abstruse and obtuse. Like attaching to a student of Minkowski's who served as an honorable patent examiner in Europe who moved to America. We might agree that there are many loopholes by which information sharing through entanglement is possible; that Bell's theorem is not a real limit to communications or QC because there are many "loopholes to"
westurner•5mo ago
Why are you claiming superiority in ignorance?
mikestorrent•5mo ago
In that sense, they're more useful for normal folks today, and don't pose as many potential problems.
westurner•5mo ago
It may be that no solution exists; even given better error correction with that many qubits.
A standard LLM today won't yet answer with "no solution exists"
ziofill•5mo ago
I have my doubts about who's the confused one. Quantum physics has advanced tremendously in the past 30 years. Do you realize we now have a scheme to break RSA-2048 with 1M noisy qubits? (See Gidney 2025)
wasabi991011•5mo ago
There's also been massive advances in terms of quantum engineering.
pclmulqdq•5mo ago
If this algorithm exists and works, and there are chips with 1000 noisy qubits, why has nobody used this algorithm to factor a 16-bit number? Why haven't they used it to factor the number 63? Factoring 63 on a quantum computer using a generic algorithm would be a huge advancement in capability, but there's always some reason why your fancy algorithm doesn't work with another guy's fancy hardware.
At the same time, we continue to have no actual understanding of the actual underlying physics of quantum superposition, which is the principle on which this whole thing relies. We know that it happens and we have lots of equations that show that it happens and we have lots of algorithms that rely on it working, but we have continued to be blissfully unaware of why it happens (other than that the math of our theory says so). In the year 3000, physicists will be looking back at these magical parts of quantum theory with the same ridicule we use looking back at the magical parts of Newton's gravity.
pclmulqdq•5mo ago
The easiest way to prove that you do know what you're doing is to demonstrate it through making progress, which is something that this field refuses to do.
asah•5mo ago
Nuclear energy got commercialized in 1957. The core technology was discovered nearly 50 years earlier.
Electricity was first discovered in ~1750 but commercialized in the late 1800s.
Faraday's experiments on electromagnetism were in 1830-1855 but commercialization took decades.
(The list goes on ...)
wasabi991011•5mo ago
We understand superposition perfectly well. Maybe you are confusing science with philosophy.
Anyway, I'm starting to lose track of your point. There's definitely been steady advances in quantum technology, both in the underlying physics and in engineering. I'm not sure why you think that stopped.
pclmulqdq•5mo ago
I understand that we have math that says that superposition does work, but we don't actually understand the physics of it. One of the foibles of modern physics is thinking that knowing the math is enough. Newton knew the math of his 100% internally consistent version of physics, but we know that there were observations that were not explained by his math that we now understand the physical mechanisms for.
I understand that "things that are beyond the math and physics I know" may be philosophy in your mind, but that is not a correct definition of philosophy.
wasabi991011•5mo ago
I guess, in the sense that we know _it doesn't_. First of all, I'm pretty sure you are confusing superposition with entanglement. Second of all, entanglement doesn't transmit any information, it is purely a type of correlation. This is usually shown in most introductory quantum information or quantum computing courses. You can also find explanations on the physics stackexchange.
Superposition is just another word for the linearity of quantum systems.
Anyway, it's a hard question to figure out the limits between math, physics, and philosophy. A lot of physicists believe physics is about making useful mathematical models of reality, and trying to find better ones. Newton might disagree, but he's also been dead hundreds of years.
Anyway, please don't fall for the Dunning-Kruger effect. You clearly are only slightly familiar with quantum physics and have some serious misconceptions, but you sound very sure of yourself.
daneel_w•6mo ago
This is just the key exchange. You're exchanging keys for the symmetric cipher you'll be using for traffic in the session. There's really no overhead to talk about.
lillecarl•6mo ago
But since the symmetrical key is the same for both sides you must either share it ahead of time or use asymmetrical crypto to exchange the symmetrical keys to go brrrrr
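A minimal sketch of that pattern, assuming the pyca/cryptography package: the shared secret from the (possibly hybrid) key exchange is stubbed with random bytes, hashed into a session key, and then the bulk traffic goes through a fast symmetric AEAD. ChaCha20-Poly1305 is used purely as a stand-in here; OpenSSH offers e.g. chacha20-poly1305@openssh.com and AES-GCM ciphers for this stage.

```python
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305  # pip install cryptography

# Stand-in for the shared secret produced by the (possibly hybrid) key
# exchange; in a real protocol both sides derive this same value.
shared_secret = os.urandom(32)

# Derive a symmetric session key. Real protocols use a proper KDF over the
# whole handshake transcript; plain SHA-256 here keeps the sketch short.
session_key = hashlib.sha256(b"session-key" + shared_secret).digest()

aead = ChaCha20Poly1305(session_key)   # fast symmetric AEAD for the bulk traffic
nonce = os.urandom(12)
ciphertext = aead.encrypt(nonce, b"bulk session traffic goes here", None)
assert aead.decrypt(nonce, ciphertext, None) == b"bulk session traffic goes here"
```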
Rebelgecko•6mo ago
Especially since I think a pretty large number of computers/hostnames that are ssh'able today will probably have the same root password if they're still connected to the internet 10-20 years from now
chasil•6mo ago
In TinySSH, which also implements the ntru exchange, root is always allowed.
I don't know what the behavior is in Dropbear, but the point is that OpenSSH is not the only implementation.
TinySSH would also enable you to quiet the warning on RHEL 7 or other legacy platforms.
singlow•6mo ago
Not that this is a bad thing, but first start using keys, then start rotating them regularly and then worry about theoretical future attacks.
djmdjm•5mo ago
A captured SSH session should never be able to be decrypted by an adversary, regardless of whether it uses passwords or keys, or how weak the password is.
xoa•6mo ago
Eh? Public-key (asymmetric) cryptography is already very expensive compared to symmetric even in the classical setting; that's normal, and what it's used for is the vital but limited operation of key exchange for AES or whatever fast symmetric algorithm handles the rest. My understanding (and serious people in the field please correct me if I'm wrong!) is that the cryptographically relevant quantum computer threat applies almost entirely to key exchange, not symmetric encryption. The best known quantum attack on symmetric ciphers is Grover's search, which offers only a square-root speedup and is thus trivially countered, if necessary, by doubling the key size (i.e., 256 bits against Grover offers 128 bits of classical-equivalent security, and 512 bits would offer 256 bits, which is already more than enough).
The vast majority of a given SSH session's traffic isn't handshakes unless something is quite odd, and you're likely going to have a pretty miserable experience in that case regardless. So even if the initial handshake is made significantly more expensive, it should be pretty irrelevant to network overhead; it still only happens during the initiation of a given session, right?
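Spelling out the square-root speedup arithmetic (the standard Grover bound, nothing SSH-specific):

```latex
\text{Grover search over a } k\text{-bit key: } O\!\bigl(\sqrt{2^{k}}\bigr) = O\!\bigl(2^{k/2}\bigr)
\;\;\Rightarrow\;\; k = 256 \text{ gives } \approx 2^{128} \text{ quantum work (128-bit classical equivalent)}
```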
ekr____•6mo ago
- Current PQ algorithms, for both signature and key establishment, have much larger key sizes than traditional algorithms. In terms of compute, they are comparably fast if not faster.
- Most protocols (e.g., TLS, SSH, etc.) do key establishment relatively infrequently (e.g., at the start of the connection) and so the key establishment size isn't a big deal, modulo some interoperability issues because the keys are big enough to push you over the TCP MTU, so you end up with the keys spanning two packets. One important exception here is double ratchet protocols like Signal or MLS which do very frequent key changes. What you sometimes see here is to rekey with PQ only occasionally (https://security.apple.com/blog/imessage-pq3/).
- In the particular case of TLS, message size for signatures is a much bigger deal, to a great extent because your typical TLS handshake involves a lot of signatures in the certificate chain. For this reason, there is a lot more concern about the viability of PQ signatures in TLS (https://dadrian.io/blog/posts/pqc-signatures-2024/). Possibly in other protocols too but I don't know them as well
hannob•6mo ago
This is somewhat correct, but needs some nuance.
First, the problem is bigger with signatures, which is why nobody is happy with the current post quantum signature schemes and people are working on better pq signature schemes for the future. But signatures aren't an urgent issue, as there is no "decrypt later" scenario for signatures.
For encryption, the overhead exists, but it isn't too bad. We are already deploying pqcrypto, and nobody seems to have an issue with it. Use a current OpenSSH and you use mlkem. Use a current browser with a server using modern libraries and you also use mlkem. I haven't heard anyone complaining that the Internet got so much slower in recent years due to pqcrypto key exchanges.
Compared to the overall traffic we use commonly these days, the few extra kb during the handshake (everything else is not affected) doesn't matter much.
Strilanc•6mo ago
In the past ten years, on the theory side, the expected cost of cryptographically relevant quantum factoring has dropped by 1000x [1][2]. On the hardware side, fault tolerance demonstrations have gone from repetition code error rates of 1% error per round [3] to 0.00000001% error per round [fig3a of 4], with full quantum codes being demonstrated with an error rate of 0.2% [fig1d of 4] via a 2x reduction in error each time distance is increased by 2.
If you want to track progress in quantum computing, follow the gradual spinup of fault tolerance. Noise is the main thing blocking factoring of larger and larger numbers. Once the quality problem is turned into a quantity problem, then those benchmarks can start moving.
[0]: https://www.youtube.com/watch?v=nJxENYdsB6c
[1]: https://arxiv.org/abs/1208.0928
[2]: https://arxiv.org/abs/2505.15917
[3]: https://arxiv.org/abs/1411.7403
[4]: https://arxiv.org/abs/2408.13687
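Purely as a mechanical illustration of that scaling claim, assuming the 2x-per-(+2 distance) factor holds indefinitely (it may not), here's how many such steps it would take to push the 0.2% rate down to an arbitrary 1e-9 target:

```python
# Mechanical illustration of the scaling quoted above: a 2x reduction in
# logical error each time the code distance increases by 2, starting from
# the demonstrated 0.2% per round. Not a prediction about real hardware.
error = 0.002
steps = 0                  # each step corresponds to "+2" of code distance
while error > 1e-9:        # arbitrary illustrative target error rate
    error /= 2
    steps += 1
print(f"{steps} distance steps -> error rate {error:.2e} per round")
```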
lucb1e•6mo ago
edit: adding in some sources
2014: "between 2030 and 2040" according to https://www.aivd.nl/publicaties/publicaties/2014/11/20/infor... (404) via https://tweakers.net/reviews/5885/de-dreiging-van-quantumcom... (Dutch)
2021: "small chance it arrives by 2030" https://www.aivd.nl/documenten/publicaties/2021/09/23/bereid... (Dutch)
2025: "protect against ‘store now, decrypt later’ attacks by 2030", joint paper from 18 countries https://www.aivd.nl/binaries/aivd_nl/documenten/brochures/20... (English)
lucb1e•6mo ago
Also, 2030 isn't 20 years away anymore and that's the recommendation I ended up finding in sources, even if they think it's only a small chance
ifwinterco•6mo ago
On the other hand - we already give our passport information to every single airline and hotel we use. There must be hundreds if not thousands of random entities across the globe that already have mine. As long as certain key information is rotated occasionally (e.g. by making passports expire), maybe it doesn't really matter
djmdjm•5mo ago
I assumed that paper was intended as a joke. If it's supposed to be serious criticism of the concept of quantum computing then it's pretty off-base, akin to complaining that transistors couldn't calculate Pi in 1951.
> how big is the need for the current pace of post quantum crypto adoption?
It comes down to:
1) do you believe that no cryptographically-relevant quantum computer will be realised within your lifespan
2) how much you value the data that you are trusting to conventional cryptography
If you believe that no QC will arrive in a timeframe you care about or you don't care about currently-private data then you'd be justified in thinking PQC is a waste of time.
OTOH if you're a maintainer of a cryptographic application, then IMO you don't have the luxury of ignoring (2) on behalf of your users, irrespective of (1).
1vuio0pswjnm7•5mo ago