One notable exception is Cloudflare: They famously no longer rely solely on LetsEncrypt.
Especially something that needs to be renewed every 90 days, or is it 47 now? How about issuing 100-year certificates as a default?
https://cloud.google.com/certificate-manager/docs/public-ca-... (EDIT: Google is their own CA, with https://pki.goog/ )
The browsers and security people have been pushing towards shorter certs, not longer ones. Knowing how to rotate a cert every year, if not shorter, helps when your certificate or any of your parent certs are compromised and require an emergency rotation.
In a practical sense you likely wouldn't like the alternatives, because for most people's usage of the internet there's exactly one authority which matters: the local government and its legal system. I.e. most of my necessary use of TLS is for ecommerce, which means the ultimate authority is "are you a trusted business entity in the local jurisdiction?"
Very few people would have any reason to ever expand the definition beyond this, and fewer would have the knowledge to do so safely even if we provided the interfaces - i.e. no one knows what safety numbers in Signal mean, if I can even get them to use Signal.
Your users who visit your website and get a TLS warning are the authority to worry about, if you're running a business that needs security. Depending on what you're selling, that one user could be a gigantic chunk of your business. Showing your local government that you have a process in place to renew your TLS certificates and that your provider was down is most likely going to be more than enough to clear you of any accusation of maliciousness or ignorance (ignorantia juris non excusat). Obviously, different countries/locations have varying laws, but I highly doubt you'd be held liable over a major outage of a provider in such heavy use. Honestly, if you were held liable, or think you would be, for this type of event, I'd think twice about operating from that location.
Caddy uses ZeroSSL as a fallback if Let’s Encrypt fails!
I'm using Caddy here and it's not falling back on ZeroSSL. Thanks for your help
EDIT: hmm, it should be automatic...! https://caddyserver.com/docs/automatic-https#issuer-fallback interesting, I'll double check my config
woah... it's probably related to this! https://github.com/caddyserver/caddy/issues/7084 TLDR: "Caddy doesn't fall back to ZeroSSL for domains added using API" (which is my case)
Also, you have days to weeks of slack time for renewals. The only real impact is trying to issue new certs if you are solely dependent on LE.
The whole point of the expiration is in case a hacker gets the private key to the cert and can then MITM, they can keep MITMing successfully until the cert the hacker gives to the clients expires (or was revoked by something like OCSP, assuming the client verifies OCSP). A very long expiration is very bad because it means the hacker could keep MITMing for years.
The way things like this work with modern security is ephemeral security tokens. Your program starts and it requests a security token, and it refreshes the token over X time (within 24 hrs). If a hacker gets the token, they can attack using it until 1) you notice and revoke the existing tokens AND sessions, or 2) the token expires (and we assume they for some reason don't have an advanced persistent threat in place).
Nobody puts any emphasis on the fact that 1) you have to NOTICE THE ATTACK AND REVOKE SHIT for any of these expirations to have any impact on security whatsoever, and 2) if they got the private key once, they can probably get it again after it expires, UNLESS YOU NOTICE AND PLUG THE HOLE. If you have nothing in place to notice a hacker has your private key, and if revocation isn't effective, the impact is exactly the same whether expiration is 1 second or 1 year.
How many people are running security scans on their whole stack every day? How many are patching security holes within a week? How many have advanced software designed to find rootkits and other exploits? Or any other measure to detect active attacks? My guess is maybe 0.0001% of you do. So you will never know when they gain access to your certs, so the fast expiration is mostly pointless.
We should be completely reinventing the whole protocol to be a token-based authorization service, because that's where it's headed. And we should be focusing more on mitigating active exploits rather than just hoping nobody ever exploits anything. But that would scare people, or require additional work. So instead we let like 3 companies slowly do whatever they want with the entire web in an uncoordinated way. And because we let them do whatever they want with the web, they keep introducing more failure modes and things get shittier. We are enabling the enshittification happening in front of our eyes.
In old-school X.509 PKI this might be "in case this person is no longer affiliated with the issuer" (for organizational PKI) or "in case this contact information for this person is otherwise no longer accurate".
In web PKI this might be "in case this person no longer controls this domain name" or "in case this person no longer controls this IP address".
The key-compromise issue you mention was more urgent for the web PKI before TLS routinely used ciphersuites providing forward secrecy. In that case, a private key compromise would allow the attacker to passively decrypt all TLS sessions during the lifetime of that private key. With more modern ciphersuites, a private key compromise allows the attacker to actively impersonate an endpoint for future sessions during the lifetime of that private key. This is comparatively much less catastrophic.
Nope. So all that happened here is that you were wrong.
Note that you can make your own self-signed CA certificate, create any server and client certificates you want signed with that CA cert, and deploy them whenever and wherever you want. Of course you want the root CA private key stored securely, and all that stuff.
The only reason it won't work at large without a bit of friction is that your CA cert isn't in the default trusted root store of major browsers (phone and PC). It's easy enough to add; it does pop up warnings and such on Windows, Android, iOS and hopefully macOS, but those warnings are appropriate here.
No, it's not going to let the whole world do TLS with you warning-free without doing some sort of work, but for small scales (the type that Let's Encrypt is often used for anyway) it's fine.
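For reference, the whole dance is a handful of openssl commands; the file names and subjects below are hypothetical:

```shell
# 1. Root CA: private key + self-signed CA certificate (10 years)
openssl genrsa -out rootCA.key 4096
openssl req -x509 -new -key rootCA.key -sha256 -days 3650 \
  -subj "/CN=My Private CA" -out rootCA.pem

# 2. Server: private key + CSR, then sign the CSR with the CA
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=internal.example" -out server.csr
openssl x509 -req -in server.csr -CA rootCA.pem -CAkey rootCA.key \
  -CAcreateserial -sha256 -days 365 -out server.crt
```

(Modern browsers additionally insist on a subjectAltName extension on the leaf certificate, which means passing an -extfile to the last command; omitted here for brevity.)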
As they move to shorter-lifetime certs (6 days now https://letsencrypt.org/2025/01/16/6-day-and-ip-certs/?utm_s...) this puts it in the realm of possibility that an incident could impact long-running services.
That said, I'm unbelievably grateful for the great product (and work!) LetsEncrypt has provided for free. Hope they're able to get their infrastructure back up soon.
Here is the HN announcement: https://news.ycombinator.com/item?id=8624160
Announcement "animated" https://hn.unlurker.com/replay?item=8624160
But DNS does not have a good reliable authenticated transport mechanism. I wonder if there was a way to build this that would have worked.
Half the year I live on an island that is reliant on submarine cables and has historically had weeks- and months-long outages, and with a changing world I suspect that might become reality once again. Locally this wasn't much of an issue: the ccTLD continues to function, and most services (by now about 35%) are locally hosted. Then HTTPS came along. Zero certificates could be (re-)issued during an outage. A locally run CA isn't really an option (standalone simply isn't feasible, and getting into root stores takes time and money), so you are left with teaching users to ignore certificate errors a few weeks into an extended outage.
I could see someone like LE working with TLD registrars to enable local issuance (with delegated sub-CA certificates restricted to the TLD), which could also mitigate problems like today's by decentralizing issuance; the registrars are already the primary source of truth for DV validation.
Your registrar should be able to validate your ownership of the domain, ergo your registrar should be your CA. Instead of a bunch of arbitrary and capricious rules to be trusted, a CA should not be "trusted" by the browser, but only able to sign certificates for domains registered to it.
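X.509 actually has a mechanism for exactly this: a name-constrained CA certificate, which cryptographically limits what the CA can sign to a DNS subtree. A rough sketch with openssl (the .example TLD and the file names are hypothetical; -addext needs OpenSSL 1.1.1+):

```shell
# A CA key + self-signed CA cert that may only sign names under .example
openssl req -x509 -newkey rsa:2048 -nodes -keyout registrar.key \
  -subj "/CN=Registrar CA for .example" -days 3650 \
  -addext "nameConstraints=critical,permitted;DNS:.example" \
  -out registrar.pem

# Inspect the constraint
openssl x509 -in registrar.pem -noout -text | grep -A2 "Name Constraints"
```

A leaf certificate this CA signs for a name outside .example fails path validation in clients that enforce name constraints, which mainstream browsers do today.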
Example OpenBSD /etc/acme-client.conf:
authority buypass {
    api url "https://api.buypass.com/acme/directory"
    account key "/etc/acme/buypass-privkey.pem"
    contact "mailto:youremail@example.com"
}

domain example.com {
    domain key "/etc/ssl/private/example.com.key"
    domain full chain certificate "/etc/ssl/example.com.pem"
    sign with buypass
}
A pity that acme-client(1) does not allow for fallbacks; I'll make a mental note that it seems like an easy enough patch to contribute if I ever find the time.
https://zerossl.com/ (90 days)
https://www.buypass.com/ (180 days)
I wrote this blog post a few weeks ago: "Minimal, cron-ready scripts in Bash, Python, Ruby, Node.js (JavaScript), Go, and Powershell to check when your website's SSL certificate expires." https://heiioncall.com/blog/barebone-scripts-to-check-ssl-ce... which may be helpful if you want to roll your own.
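For the simplest case, a certificate file already on disk, the check collapses to one openssl -checkend invocation. A minimal sketch, separate from the scripts in the post (the path in the usage line is hypothetical):

```shell
# Print a warning and return non-zero if a certificate file
# expires within a given number of days (default 14).
check_cert() {
  cert="$1"
  days="${2:-14}"
  if openssl x509 -in "$cert" -noout -checkend "$((days * 86400))" >/dev/null 2>&1; then
    echo "OK: $cert is valid for at least $days more days"
  else
    echo "WARNING: $cert expires within $days days" >&2
    return 1
  fi
}

# Usage, e.g. from cron (hypothetical path):
# check_cert /etc/ssl/example.com.pem 14
```

For a live site you'd first fetch the certificate with `openssl s_client -connect host:443 -servername host` and pipe it into the same check.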
(Disclosure: at Heii On-Call we also offer free SSL certificate expiration monitoring, among other things.)
So, what gives?
Which is disappointing because you should be able to recreate the service they had nearly exactly with certificate transparency logs.
Shouldn't that happen automatically a bit beforehand?
It can send alerts to multiple alerting providers.
This sounds like an easy problem to identify root cause for.
I think I received about 15 'we're disabling email notifications soon' emails over the past several months - one of which was interesting, but none were needed, as I'd originally set this up, per documentation, to auto-renew every 30 days.
Perhaps create a calendar reminder for the short term?
There's no way it's DNS
It was DNS
1. Denial: It’s not DNS.
2. Anger: What the fuck is it!
3. Bargaining: Maybe it’s a firewall, or Cloudflare!
4. Depression: We’ve checked everything…
5. Acceptance: It’s DNS.
(Or, as I recently encountered, it can also be a McAfee corporate firewall trying to be helpful by showing a download progress bar in place of an HTTP SSE stream. I was sure that was being caused by MTU, but alas no.)