I think Merkle Tree Certificates are a promising option. I'll be participating in the standardization efforts.
Chrome has signalled in multiple venues that they anticipate this to be their preferred (or only) option for post-quantum certificates, so it seems fairly likely we will deploy this in the coming years.
I work for Let's Encrypt, but this is not an official statement or promise to implement anything yet. For that you can subscribe to our newsletter :)
I could see government agencies with a big budget having access to one, but I don't see those computers becoming mainstream.
Although I could see China having access to one, which is a problem.
But I still think it’s a good idea to start switching over to post-quantum encryption, because the lead time is so high. It could easily take a full 10 years to fully implement the transition and we don’t want to be scrambling to start after Q-day.
Chrome and Cloudflare are doing a MTC experiment this year. We'll work on standardizing over the next year. Let's Encrypt may start adding support the year after that. Downstream software might start deploying support MTCs the year after that. People using LTS Linux distros might not upgrade software for another 5 years after that. People run out-of-date client devices for another 5 years too.
So even in that timeline, which is about as fast as any internet-scale migration goes, it may be 10-15 years from today for MTC support to be fully widespread.
I can see the USA having access to one, which is also a problem. Or any other government, for that matter.
Don’t we already use the certificates just to negotiate the final encryption keys? Wouldn’t a quantum computer still crack the agreed-upon keys without the exchange details?
But that's largely already true:
The key exchange is now typically done with X25519MLKEM768, a hybrid of the traditional x25519 and ML-KEM-768, which is post-quantum secure.
The exchanged keys are typically AES-128, AES-256, or ChaCha20. These are likely to be much more secure against quantum computers as well (while they may be weakened, it is likely we have plenty of security margin left).
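To make the hybrid idea concrete, here is a minimal sketch of how a hybrid group like X25519MLKEM768 combines its two shared secrets. The random byte strings stand in for the real X25519 and ML-KEM-768 outputs, and the single HKDF-Extract step is a simplification of the full TLS 1.3 key schedule; the point is just that both secrets feed the derivation, so an attacker has to break both primitives.

```python
import hashlib
import hmac
import os

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """RFC 5869 HKDF-Extract with SHA-256."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

# Stand-ins for the two shared secrets a real handshake would produce:
# 32 bytes from ML-KEM-768 decapsulation and 32 bytes from X25519.
ss_mlkem768 = os.urandom(32)
ss_x25519 = os.urandom(32)

# X25519MLKEM768 concatenates the ML-KEM-768 secret first, then the
# X25519 secret, and feeds the result into the TLS 1.3 key schedule
# (simplified here to a single extract step).
combined = ss_mlkem768 + ss_x25519
secret = hkdf_extract(salt=b"\x00" * 32, ikm=combined)
print(len(secret))  # 32-byte secret entering the key schedule
```

Because the secrets are concatenated before derivation, recovering the session keys requires breaking both the classical and the post-quantum component.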
Changing the key exchange or transport encryption protocols, however, is much, much easier, since they're negotiated and new options can be added right away.
Certificates are the trickiest piece to change and upgrade, so even though Q-day is likely years away still, we need to start working on this now.
Upgrading the key exchange has already happened because of the risk of capture-now, decrypt-later attacks, where you sniff traffic now and break it in the future.
No: with forward-secret key agreement, the certificate's private key isn't involved at all in the secrecy of the session keys; the private key only proves the authenticity of the connection / the session keys.
The post didn't discuss it but naively this feels like it becomes a privacy issue?
There's "full certificates" defined in the draft which include signatures for clients who don't have landmarks pre-distributed, too.
You may want to pull landmarks from CAs outside of The Approved Set™ for inclusion in what your machine trusts, and this means you'll need data from somewhere else periodically. All the usual privacy concerns over how you get what from where apply; if you're doing a web transaction, a third party may be able to see your DNS lookup, your connection to port 443, and the amount of traffic you exchange, but they shouldn't be able to see what you asked for or what you got. Your OS or browser can snitch on you as normal, though.
I don't personally see any new privacy threats, but I may not have considered all angles.
I could see the list of client-supplied available roots being added to client fingerprinting code for passive monitoring (e.g. JA4) if it’s in the client hello, or for the benefit of just the server if it’s encrypted in transit.
It's a privacy violating proxy after all.
If I understand this correctly, each CA publishes a signed list of landmarks at some cadence (weekly).
For the certs you get the landmark (a 256-bit hash) and the hashes along the merkle path to the leaf cert's hash. For a landmark that contains N certs, you need to include log2(N) * hash_len bytes and perform log2(N) hash computations.
For an MTC inclusion proof that uses a 256-bit hash and N = 1 million, that's about 20*32 = 640 bytes.
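The proof-size arithmetic above can be sketched with a toy Merkle tree. This is illustrative only: it uses plain SHA-256 over concatenated children and a power-of-two leaf count, whereas the actual draft defines its own node hashing. The demo uses 16 leaves so it runs instantly; 2**20 (~1 million) leaves would give the 20-hash, 640-byte proof discussed above.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root_and_proof(leaves, index):
    """Return (root, proof): proof is the list of sibling hashes
    from the leaf up to the root. Assumes len(leaves) is a power of two."""
    proof, level, i = [], leaves, index
    while len(level) > 1:
        proof.append(level[i ^ 1])  # sibling at this level
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return level[0], proof

def verify(leaf, index, proof, root):
    """Recompute the root from a leaf and its inclusion proof."""
    node, i = leaf, index
    for sibling in proof:
        node = h(node + sibling) if i % 2 == 0 else h(sibling + node)
        i //= 2
    return node == root

# Toy tree: 16 hypothetical certificate hashes.
leaves = [h(f"cert-{n}".encode()) for n in range(16)]
root, proof = merkle_root_and_proof(leaves, 5)
print(verify(leaves[5], 5, proof, root))  # True
print(len(proof) * 32)  # log2(16) = 4 hashes * 32 bytes = 128
```

The proof grows with log2(N), so going from 16 leaves to a million only quintuples its size, which is what makes signatureless certificates so compact.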
Is this the gist of it?
I'm really curious about the math behind deciding the optimal landmark size and publishing cadence.
> If a new landmark is allocated every hour, signatureless certificate subtrees will span around 4,400,000 certificates, leading to 23 hashes in the inclusion proof, giving an inclusion proof size of 736 bytes, with no signatures.
https://davidben.github.io/merkle-tree-certs/draft-davidben-...
That's assuming 4.4 million certs per landmark, a bit bigger than your estimate.
There's also a "full certificate" which includes signatures, for clients who don't have up-to-date landmarks. Those are big still, but if it's just for the occasional "curl" command, that's not the end of the world for many clients.
I don’t love the idea of giving every server I connect to via TLS the ability to fingerprint me by how recently (or not) I’ve fetched MTC treeheads. Even worse if this is in client hello, where anyone on the network path can view it either per connection or for my DoH requests to bootstrap encrypted client hello.
To vibe coders: Good luck vibe coding that.