The advantages of OCSP were that you got a real-time understanding of the status of a certificate and you had no need to download large CRLs which become stale very quickly. If you set security.ocsp.require in the browser appropriately then you didn't have any risk of the browser failing open, either. I did that in the browser I was daily-driving for years and can count on one hand the number of times I ran into OCSP responder outages.
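For reference, a live OCSP lookup is only a handful of lines. Here's a minimal sketch using Python's cryptography library plus urllib, assuming hypothetical local files cert.pem and issuer.pem holding the leaf certificate and its issuing CA certificate:

```python
# Minimal OCSP status check (a sketch, not hardened code).
import urllib.request

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp
from cryptography.x509.oid import AuthorityInformationAccessOID, ExtensionOID

cert = x509.load_pem_x509_certificate(open("cert.pem", "rb").read())
issuer = x509.load_pem_x509_certificate(open("issuer.pem", "rb").read())

# The responder URL is advertised in the cert's Authority Information Access extension.
aia = cert.extensions.get_extension_for_oid(ExtensionOID.AUTHORITY_INFORMATION_ACCESS).value
ocsp_url = next(
    d.access_location.value
    for d in aia
    if d.access_method == AuthorityInformationAccessOID.OCSP
)

# Build a DER-encoded OCSP request and POST it to the responder (RFC 6960).
ocsp_req = (
    ocsp.OCSPRequestBuilder()
    .add_certificate(cert, issuer, hashes.SHA1())
    .build()
    .public_bytes(serialization.Encoding.DER)
)
http_req = urllib.request.Request(
    ocsp_url, data=ocsp_req, headers={"Content-Type": "application/ocsp-request"}
)
with urllib.request.urlopen(http_req) as resp:
    ocsp_resp = ocsp.load_der_ocsp_response(resp.read())

if ocsp_resp.response_status == ocsp.OCSPResponseStatus.SUCCESSFUL:
    print(ocsp_resp.certificate_status)  # GOOD, REVOKED, or UNKNOWN
else:
    print("responder error:", ocsp_resp.response_status)
```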
The privacy concerns could have been solved through adoption of Must-Staple, and you could then operate the OCSP responders purely for web-servers and folks doing research.
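For what it's worth, Must-Staple is just the TLS Feature extension (RFC 7633) baked into the certificate, so checking whether a given cert requests it is straightforward. A rough sketch with Python's cryptography library, assuming a hypothetical cert.pem:

```python
# Rough sketch: does this certificate carry the Must-Staple marker,
# i.e. the TLS Feature extension with status_request (RFC 7633)?
from cryptography import x509
from cryptography.x509 import ExtensionNotFound, TLSFeature, TLSFeatureType

cert = x509.load_pem_x509_certificate(open("cert.pem", "rb").read())

try:
    feature = cert.extensions.get_extension_for_class(TLSFeature).value
    must_staple = TLSFeatureType.status_request in feature
except ExtensionNotFound:
    must_staple = False

print("Must-Staple:", must_staple)
```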
And let's not pretend users aren't already sending all the hostnames they are visiting to their selected DNS server. Why is that somehow okay, but OCSP not?
The underlying dynamic of any change to the Web ecosystem is that it has to be incrementally deployable, in the sense that when element A changes it doesn't experience breakage with the existing ecosystem. At present, approximately no Web servers do OCSP stapling, so any browser which requires it will just not work. In the past, when browsers have wanted to make changes like this, they have had to give years of warning, and then they can only actually make the change once nearly the entire ecosystem has switched, so that you have minimal breakage. This is a huge effort and only worth doing when you have a real problem.
As a reference point, it took something like 7 years to disable SHA-1 in browsers [0], and that was an easier problem because (1) CAs were already transitioning (2) it didn't require any change to the servers, unlike OCSP stapling which requires them to regularly fetch OCSP responses [1] and (3) there was a clear security reason to make the change. By contrast, with Firefox's introduction of CRLite, all the major browsers now have some central revocation system, which works today as opposed to years from now and doesn't require any change to the servers.
[0] https://security.googleblog.com/2014/09/gradually-sunsetting... [1] As an aside it's not clear that OCSP stapling is better than short-lived certs.
> As an aside it's not clear that OCSP stapling is better than short-lived certs.
I agree this should be the end goal, really.
Absolutely, this is important.
But I don't understand why this should have any effect on OCSP-stapling vs. CRL.
As you note, "approximately no Web servers do OCSP stapling, so any browser which requires it will just not work." But browsers also cannot rely on CRLs being 100% available and up-to-date.
Enforcing OCSP stapling and enforcing a check against an up-to-date CRL would both require this kind of incremental or iterative deployment.
> As an aside it's not clear that OCSP stapling is better than short-lived certs.
This is equally applicable to CRL, though.
The current plan for phased reduction of TLS cert lifespan is to stabilize at 47 days in 2029. If reducing cert lifetime achieves the goal of reducing the value of compromised certs, then any mechanism for revoking/invalidating certificates will be reduced in value.
I think the argument isn’t that it’s okay, but that one bad thing doesn’t mean we should do two bad things. Just because my DNS provider can see my domain requests doesn’t mean I also want arbitrary CAs on the Internet to also see them.
You have to trust the DNS server more than the server you are reaching out to, since the DNS server can direct you anywhere as well as see everything you are trying to access anyway.
I don't think I agree with this. TLS is important against MITM scenarios - integrity, privacy. You don't need DNS to be abused for this, just a man in the middle - whether that's an open wifi network, your ISP, or someone tapped into your network some other way.
Although tbh I think that just moves the problem somewhere else (which is perfectly fine if you don’t like the current PKI).
I’m not sure I understand the logic here. To me TLS PKI and DNS are somewhat orthogonal.
In practice, TLS certificates are given out to domain owners, and domain ownership is usually proven by being able to set a DNS record. This means compromise of the authoritative DNS server implies compromise of TLS.
Malicious relaying servers and MitM on the client is already solved by DNSSEC, so it's not adding anything there either.
If we got rid of CAs and stored our TLS public keys in DNS instead, we would lose relatively little security. The main drawback I can think of is the loss of certificate issuance logs.
Yes, except for CT, which can help detect this kind of attack.
> Malicious relaying servers and MitM on the client is already solved by DNSSEC, so it's not adding anything there either.
I'm not sure quite what you have in mind here, but there is more to the issue than correct DNS resolution. In many cases, the attacker controls the network between you and the server, and can intercept your connection regardless of whether DNS resolved correctly.
> If we got rid of CAs and stored our TLS public keys in DNS instead, we would lose relatively little security. The main drawback I can think of is the loss of certificate issuance logs.
This may be true in principle but has a very low chance of happening in practice, because there is no current plausible transition path, so it's really just a theoretical debate.
Well, DANE exists and provides an obvious transition path, as brittle an approach as it is. Ideally you would be able to create your own intermediates (with name constraints) and pin the intermediate rather than the leaf certificate, but PKI isn't set up for that.
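For concreteness, "keys in DNS" via DANE boils down to publishing a TLSA record containing a hash of the server's public key. A minimal sketch that computes a DANE-EE record (usage 3, selector 1 = SPKI, matching type 1 = SHA-256), assuming a hypothetical cert.pem and the hostname example.com:

```python
# Compute a DANE-EE TLSA record (3 1 1) for a certificate's public key.
import hashlib

from cryptography import x509
from cryptography.hazmat.primitives import serialization

cert = x509.load_pem_x509_certificate(open("cert.pem", "rb").read())

# Selector 1 matches on the DER-encoded SubjectPublicKeyInfo.
spki = cert.public_key().public_bytes(
    serialization.Encoding.DER,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)

print(f"_443._tcp.example.com. IN TLSA 3 1 1 {hashlib.sha256(spki).hexdigest()}")
```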
From my understanding, the biggest issue with DNSSEC is that it's just a return to the single signing authority model that TLS used in the 90s. Isn't it also just Verisign again? (At least for .com.)
You still have the problem where a substantial cohort of Internet users can't resolve DANE records. They're on Internet paths that include middleboxes that freak out when they see anything but simple UDP DNS records. You can't define that problem away.
So now you need to design a fallback for those users. Whatever that fallback is, you have to assume attackers will target it; that's the whole point of the exercise. What you end up with is a system that decays to the natural security level of the WebPKI. From a threat model perspective, what you've really done is just add another CA to the system. Not better!
DANE advocates tried for years to work around this problem by factoring out the DNS from DANE, and stapling DANE records to TLS handshakes. Then someone asked, "well, what happens when attackers just strip that out of the handshake". These records are used to authenticate the handshake, so you can't just set "the handshake will be secure" as an axiom. Nobody had a good answer! The DANE advocates were left saying we'd be doing something like HPKP, where browsers would remember DANE-stapled hosts after first contact. The browser vendors said "lol no".
That's where things stand. The stapling thing was so bad that Geoff Huston --- a DNS/DNSSEC éminence grise --- wrote a long blog post asking (and more or less conceding) that it was time to stick a fork in the whole thing.
Just imagine you succeeded in inventing a perfectly secure DNS server. Great, we know this IP address we just got back is the correct one for the server.
Ok, then I go to make a connection to that IP address, but someone on hop 3 of my connection is malicious, and instead of connecting me to the IP, just sends back a response pretending to be from that IP. How would I discover this? TLS would protect me from this, perfectly secure DNS won't.
If you are saying every packet sent is secure, then it would have nothing to do with DNS?
(I'm not necessarily in favour of this, I just don't see the revocation part as being the main issue.)
You can't refresh your certificates every 2 minutes but you can set the DNS TTL to 2 minutes and thus stop compromised certs as soon as you discover them (plus 2 minutes). If you use DANE this is already possible but quite fragile unless you have configured your TLS certificate issuing server to have access to modify your DNS records (which is probably less safe overall).
However I don't believe I've ever seen it used "in the wild".
How many companies now run TLDs? Yeah, .com is centralized, but between ccTLDs, new TLDs, etc., tons. And domain registrars and web hosts which provide DNS services? Thousands. And importantly, hosting companies and DNS providers are trivially easy to change between.
The idea that Apple or Google can unilaterally decide what the baseline requirements should be needs to be understood as an existential threat to the Internet.
And again, every single requirement CAs implement is irrelevant if someone can log into your web host. The entire thing is an emperor has no clothes thing.
I always love when people bring the ccTLDs into these discussions, as if Google could leave .COM when .COM's utterly unaccountable ownership manipulates the DNS to intercept Google Mail.
Why is this more likely to happen than a rogue CA issuing a false certificate?
Also, Google has chosen to trust .com instead of using one of their eleven TLDs that they own for their own exclusive use, or any of the additional 22 TLDs that they also operate.
That isn’t possible with .com
This is not the case with a CA, however; you are forced to trust all of them, and hope that when fraudulent certificates are issued (as has happened several times, IIUC), they will not affect you.
But also the issues of segmentation are pretty much a total shift of the goalposts from what we were discussing, which is what actually happens when malicious activity occurs. In DNS, your only option is to stop trusting that slice of the tree and for every site operator to lift and shift to another TLD, inclusive of teaching all their users to use the new site. In WebPKI, the CA gets delisted for new certificate issuance and site operators get new certificates before the current ones expire. One of those is insane, and the other has successfully happened several times in response to bad/rogue CAs.
(Repost: <https://news.ycombinator.com/item?id=38695674>)
If you want to see what happens otherwise, just look at the gTLD landscape. Still, genuine power abuse is relatively rare, because to a large extent they are selling trust. If you start randomly taking down domains, nobody will ever risk registering a domain with you again.
Having a healthy competitive market for DNS services is good enough.
Running your own DNS server is rather easier than messing with OCSP. You do at least have a choice, even if it is bloody complicated.
SSL certs (and I refuse to call them TLS) will soon have a required lifetime of forty something days. OCSP and the rest becomes moot.
The 47 day life expectancy isn’t going to come until 2029 and it might get pushed.
Also 47 days is still too long if certificates are compromised.
> The privacy concerns could have been solved through adoption of Must-Staple
Agreed. I haven't followed every bit of the play-by-play here, but OCSP (multi-)stapling appeared to me to be a good solution to both the end-user privacy concerns and to the performance concerns.
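If anyone wants to check how rarely servers actually staple, you can request a stapled response during the handshake and see what comes back. A rough sketch with pyOpenSSL, assuming a hypothetical host example.com (an empty payload in the callback means nothing was stapled):

```python
# Rough sketch: does this server staple an OCSP response?
import socket

from OpenSSL import SSL

HOST = "example.com"

def on_ocsp(conn, ocsp_bytes, data):
    # Called during the handshake; ocsp_bytes is empty if nothing was stapled.
    print(f"{HOST}: {len(ocsp_bytes)} bytes of stapled OCSP data")
    return True  # don't abort the handshake in this sketch

ctx = SSL.Context(SSL.TLS_CLIENT_METHOD)
ctx.set_ocsp_client_callback(on_ocsp)

sock = socket.create_connection((HOST, 443))
conn = SSL.Connection(ctx, sock)
conn.set_tlsext_host_name(HOST.encode())
conn.set_connect_state()
conn.request_ocsp()  # send the status_request extension
conn.do_handshake()
sock.close()
```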
Firefox also already has more effective revocation checking in CRLite: https://blog.mozilla.org/en/firefox/crlite/
Most people don't realize this.
It's quite insane given that Chrome will by default not check CRLs *at all* for internal, enterprise CAs.
https://www.chromium.org/Home/chromium-security/crlsets/
See here for some of the why: https://www.imperialviolet.org/2012/02/05/crlsets.html
I hit that roadblock a lot when trying to do mTLS in the browser, that and the dropped support for the [KeyGen](https://www.w3docs.com/learn-html/html-keygen-tag.html) tag.
I can patiently explain why a ROM cannot query a fucking remote service for a certificate's validity, but it's a lot easier to just say "Look OCSP sucks, and Let's Encrypt stopped supporting it", especially to the types of people I argue with about these things.
This sometimes collides with enterprise security policies that you should "never use port 80" (always 443).
GauntletWizard•4mo ago
The alternative to the privacy nightmare is OCSP stapling, which has the first problem once again - it adds complexity to the protocol just to add an override of the not-after attribute, when the not-after attribute could be updated just as easily with the original protocol, by reissuing the certificate. It was a Band-Aid on the highly manual process of certificate issuance that once dominated the space.
Good riddance to OCSP, I for one will not miss it.
jeroenhd•4mo ago
Certificates in air-gapped networks are problematic, but that problem can be solved with dedicated CRL-only certificate roots that suffer all of the downsides of CRLs for cases where OCSP stapling isn't available.
Nobody will miss OCSP now that it's dead, but assuming you used stapling I think it was a decent solution to a difficult problem that plagued the web for more than a decade and a half.
avianlyric•4mo ago
How would a bad actor do that without a certificate authority being involved?
sugarpimpdorsey•4mo ago
The browser-CA cartels stay relatively in sync.
You can verify this for yourself by creating and trusting a local CA and trying to issue a 5-year certificate. It won't work. You'll have a valid cert, but it won't be trusted by the browser unless the lifetime is below their arbitrary limit. Yet that certificate would continue to be valid for non-browser purposes.
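A rough sketch of that experiment with Python's cryptography library: spin up a throwaway local CA and have it issue a leaf valid for 5 years. The names and filenames here are made up, and trusting the CA in your OS or browser store is a separate manual step:

```python
# Throwaway local CA plus a 5-year leaf certificate (illustration only).
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

now = datetime.datetime.now(datetime.timezone.utc)

# Self-signed CA certificate.
ca_key = ec.generate_private_key(ec.SECP256R1())
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Local Test CA")])
ca_cert = (
    x509.CertificateBuilder()
    .subject_name(ca_name)
    .issuer_name(ca_name)
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=10 * 365))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(ca_key, hashes.SHA256())
)

# Leaf certificate with a 5-year lifetime: valid X.509, regardless of
# whatever lifetime cap a browser chooses to enforce.
leaf_key = ec.generate_private_key(ec.SECP256R1())
leaf_cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "test.local")]))
    .issuer_name(ca_name)
    .public_key(leaf_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=5 * 365))
    .add_extension(x509.SubjectAlternativeName([x509.DNSName("test.local")]), critical=False)
    .sign(ca_key, hashes.SHA256())
)

with open("leaf.pem", "wb") as f:
    f.write(leaf_cert.public_bytes(serialization.Encoding.PEM))
```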
layer8•4mo ago
That's not a viable solution if the server you want to verify is compromised. The point of CRL and OCSP is exactly to ask the authority one higher up, without the entity you want to verify being able to interfere.
In non-TLS uses of X.509 certificates, OCSP is still very much a thing, by the way, as there is no real alternative for longer-lived certificates.
layer8•4mo ago
Yes, you could restrict certificates to very short lifetimes like 24 hours or less, but that isn’t always practical for non-TLS use cases.
tgsovlerkhgsel•4mo ago
Certificate volume in Certificate Transparency would increase a lot, adding load to the logs and making it even harder to follow CT.
Issues with domain validation would turn into an outage after 24h rather than when the cert expires, which could be a benefit in some cases (invalidating old certs quickly if a domain changes owner or is recovered after a compromise/hijack).
OCSP is simpler and has fewer dependencies than issuance (no need to do multi-perspective domain validation and the interaction with CT), so keeping it highly available should be easier than keeping issuance highly available.
With stapling (which would have been required for privacy) often poorly implemented and rarely deployed and browsers not requiring OCSP, this was a sensible decision.
charcircuit•4mo ago
You can delete old logs or come up with a way to download the same thing with less disk space. Even if the current architecture does not scale we can always change it.
>even harder to follow CT.
It should be no harder to follow than before.
tgsovlerkhgsel•4mo ago
I was trying to process CT logs locally. I gave up when I realized that I'd be looking at over a week even if I optimized my software to the point that it could process the data at 1 Gbps (and the logs were providing the data at that rate), and that was a while ago.
With the current issuance rate, it's barely feasible to locally scan the CT logs with a lot of patience if you have a 1 Gbps line.
https://letsencrypt.org/2025/08/14/rfc-6962-logs-eol states "The current storage size of a CT log shard is between 7 and 10 terabytes". So that's a day at 1 Gbps for one single log shard of one operator, ignoring overhead.
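The back-of-the-envelope math on that is easy to check:

```python
# Time to pull one 7-10 TB CT log shard over a 1 Gbps link, ignoring overhead.
for shard_tb in (7, 10):
    seconds = shard_tb * 1e12 * 8 / 1e9   # bits in the shard / bits per second
    print(f"{shard_tb} TB shard: ~{seconds / 3600:.0f} hours")
# ~16 hours for 7 TB, ~22 hours for 10 TB -- i.e. roughly a day per shard.
```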
integralid•4mo ago
Are you sure you did the math correctly? We're scanning CT at my work, and we do have scale problems, but the bottleneck is database inserts. From your link, it looks like a shard is 10 TB, and that's for a year of data.
Still an insane amount and a scale problem, of course.
tgsovlerkhgsel•4mo ago
It would still be feasible to build a local database and keep it updated (with way less than 1 Gbps), but initial ingestion would be weeks at 1 Gbps, and I'd need the storage for it.
For most hobbyists not looking to spend a fortune on rented servers/cloud, it's out of reach already.