Then you only have to follow the stricter rules for the public-facing certs.
Active Directory Certificate Services is a fickle beast, but it's about to get a lot more popular again.
https://github.com/linsomniac/lessencrypt
I've toyed with the idea of adding the ability for the server component to request certs from LetsEncrypt via DNS validation, acting as a clearing house so that individual internal hosts don't need a DNS secret to get certs. However, we also put IP addresses and localhost on our internal certs, so we'd have to stop doing that to be able to get them from LetsEncrypt.
(You say hijacking the HTTP port, but I don't let the ACME client take over 80/443; I make my reverse proxy point the expected path to a folder the ACME client writes to. I'm not asking for a comparison with a setup where the ACME client takes over the reverse proxy and edits its configuration by itself, which I don't like.)
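Concretely, something like this sketch (nginx assumed; the domain, backend, and paths are placeholders, not my actual config):

    server {
        listen 80;
        server_name example.com;

        # The ACME client (e.g. certbot certonly --webroot -w /var/www/acme)
        # drops challenge files here; nginx serves them as static files,
        # so the client never binds to 80/443 itself.
        location /.well-known/acme-challenge/ {
            root /var/www/acme;
        }

        # Everything else goes to the real backend as usual.
        location / {
            proxy_pass http://127.0.0.1:8080;
        }
    }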
Automated renewal is... probably about a decade or two from being supported well enough to be an actual answer.
In our case, we'll be spending the next couple years reducing our use of PKI certificates to the bare functional minimum.
???
All my servers use certbot and it works fine. There's also no shortage of SaaS/PaaS that offer free SSL with their service, and presumably they've got that automated as well.
It may help you to understand that you cannot assume any given product even supports HTTPS well in the first place, and a lot of vendors look at you weird when you express that you intend to enable it. One piece of software requires rerunning the installer to change the certificate.
Yeah, there are also some very expensive vendors out there to manage this for big companies with big dollars.
Plus, how would you ever get enterprise tool vendors to add support if not for customers pestering them with support requests because manual certificate renewal has gotten too painful?
> I do not think PKI will survive the 47 day change. […] In our case, we'll be spending the next couple years reducing our use of PKI certificates to the bare functional minimum.
Maybe PKI will die… or you will. Progress doesn't treat dinosaurs too well usually.
> In our case, we'll be spending the next couple years reducing our use of PKI certificates to the bare functional minimum.
Good. A certificate being publicly trusted is a liability, which is why there are all these stringent requirements around it. If your certificates do not in fact need to be trusted by random internet users, then the CA/B wants you to stop relying on the Web PKI, because that reduces the extent to which your maintenance costs have to be balanced against everybody else's security.
As I said in another comment, private CAs aren't that popular right now in the kinds of organizations that have a hard time keeping up with these changes, because configuring clients is too painful. But if you can do it, then by all means, do!
I suspect when companies who are members actually realize what happened, CA/B members will be told to reverse the 47 day lifetime or be fired and replaced by people who will. This is a group of people incredibly detached from reality, but that reality is going to come crashing through to their employers as 2029 approaches.
> Good.
You may assume that most organizations will implement private CAs in these scenarios. I suspect the use of encryption internally will just fall. And it will be far easier for attackers to move around inside a network, and take over the handful of fancy auto-renewing public-facing servers with PKI anyways.
It is short enough to force teams to automate the process.
You're not supposed to be human-actioning something every month.
But yes, it'll be a huge headache for teams that stick their head in the sand and think, "We don't need to automate this, it's just 6 months".
As the window decreases to 3 months it'll be even more frustrating, and then will come a breaking point when it finally rests at 47 days.
But the schedule is well advertised. The time to get automation into your certificate renewal is now.
In the real world however, this will be a LOT of teams. I think the organisations defining this have missed just how much legacy and manual process is out there, and the impact that this has on them.
I don't think this post makes that argument well enough, instead trying to argue the technical aspect of ACME not being good enough.
ACME is irrelevant in the face of organisations not even trying, and then wondering why they're in pain every 6 weeks.
The solution is just like with any other automation - document it.
What typically does work for this kind of thing is finding a hook that artificially, rather than technically, necessitates it, while not breaking legacy.
For example, while I hate the monopoly that Google has on search, it was incredibly effective when they down-ranked HTTP sites in favour of HTTPS sites.
( In 2014: See https://developers.google.com/search/blog/2014/08/https-as-r... )
Almost overnight, organisations that never gave a shit suddenly found themselves rushing through any required tech-debt work to get SSL certs and HTTPS in place.
It was only after that drove HTTPS up to a critical mass that Google had the confidence to further nudge through bigger warnings in Chrome (2018).
Perhaps ChatGPT has impacted Google's monopoly too much to try again, but they could easily rank results based on certificate validity length and try the same trick again.
CRLs become gigantic and impractical at the sizes of the modern internet, and OCSP has privacy issues. And there's the issue of applications never checking for revocation at all.
So the obvious solution was just to make cert lifetimes really short. No gigantic CRLs, no reaching out to the registrar for every connection. All the required data is right there in the cert.
And if you thought 47 days was unreasonable, Let's Encrypt is trying 6 days. Which IMO on the whole is a great idea. Yearly, or even monthly intervals are long enough that you know a bunch of people will do it by hand, or have their renewal process break and not be noticed for months. 6 days is short enough that automation is basically a must and has to work reliably.
It's really annoying because I have to carve out exceptions for browsers and other software that refuse to connect to things with unverifiable certs, and adding my CA to some software or devices is either a pain or impossible.
It's created a hodgepodge of systems and policies and made our security posture full of holes. Back when we just did a fully delegated DigiCert wildcard (big expense) on a 3 or 5 year expiration, it was easy to manage. Now, I've got execs in other depts asking about crazy long expirations because of the hassle.
Plenty of people leave these devices without encrypted connections, because they are in a "secure network", but you should never rely on such a thing.
We used to use Firefox solely for internal problem devices with IP and subnet exclusions but even that is becoming difficult.
Why not encode that TXT record value into the CA-signed certificate metadata? And then at runtime, when a browser requests the page, the browser can verify the TXT record as well, and cache that result for an hour or whatever you like?
Or another set of TXT records for revocation, e.g. TXT _acme-challenge-revoked.<YOUR_DOMAIN>?
It's not perfect, DNS is not at all secure / relatively easy to spoof for a single client on your LAN, I know that. But realistically, if someone has control of your DNS, they can just issue themselves a legit certificate anyway.
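The lookup half is trivial; a sketch with my hypothetical record names (the cert-metadata comparison is hand-waved here, since no CA actually embeds this today):

    # Fetch the hypothetical challenge and revocation TXT records.
    domain=example.com
    challenge=$(dig +short TXT "_acme-challenge.$domain" | tr -d '"')
    revoked=$(dig +short TXT "_acme-challenge-revoked.$domain" | tr -d '"')

    # A client would compare $challenge against a value baked into the
    # certificate, and refuse to connect if the revoked record exists.
    [ -n "$revoked" ] && echo "cert for $domain flagged as revoked via DNS"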
Everywhere I've read, one "must validate domain control using multiple independent network perspectives", e.g. multiple points on the internet, for DNS validation.
Yet there is not one place I can find a very specific "this is what this means". What is a "network perspective"? Searching shows it means "geographically independent regions". What's a region? How big? How far apart from your existing infra qualifies? How is it calculated?
Anyone know? Because apparently none of the bodies know, or wish to tell.
https://cabforum.org/working-groups/server/baseline-requirem...
You can also just search the document for the word "Perspective" to find most references to it.
"Effective December 15, 2026, the CA MUST implement Multi-Perspective Issuance Corroboration using at least five (5) remote Network Perspectives. The CA MUST ensure that [...] the remote Network Perspectives that corroborate the Primary Network Perspective fall within the service regions of at least two (2) distinct Regional Internet Registries."
"Network Perspectives are considered distinct when the straight-line distance between them is at least 500 km."
I.e., they check from multiple network locations in case an attacker has messed with network routing in some way. This is reasonable and imposes no extra load on the domain needing the certificate; all the extra work falls on the CA. And if Let's Encrypt can get this right, there is no major reason why "Joe's garage certs" can't do the same thing.
This is outrage porn.
What does this even mean? Does he check the certificates for typos, or that they have the correct security algorithm or something?
I'm pretty sure such an "approval" could be replaced by an automatic security scanner or even a small shell script.
FWIW the idea of inspecting the certificate "for typos" or similar doesn't make sense. What you're getting from the CA wasn't really the certificate but the act of signing it, which they've already done. Except in some very niche situations your certificate is always already publicly available when you receive it, what you've got back is in some sense a courtesy copy. So it's too late to "approve" this document or not, the thing worth approving already happened.
Also the issuing CA was required by the rules to have done a whole bunch of automated checks far beyond what a human would reasonably do by hand. They're going to have checked your public keys don't have any of a set of undesirable mathematical properties (especially for RSA keys) for example and don't match various "known bad" keys. Can you do better? With good tooling yeah, by hand, not a chance.
But then beyond this, modern "SSL certificates" are just really boring. They're 10% boilerplate 90% random numbers. It's like tasking a child with keeping a tally of what colour cars they saw. "Another red one? Wow".
Many things need to be run and automated when you operate infrastructure; I don't understand what makes SSL certificates special in this.
For a hobbyist, setting up certbot or acme.sh is pretty much fire and forget. For more complex settings well… you already have this complexity to manage and therefore the people managing this complexity.
You'll need to pick a client and approve it, sure, but that's once, and that's true for any tool you already use. (edit: and nginx is getting ACME support, so you might already be using this tool)
It's not the first time I encounter them, but I really don't get the complaints. Sure, the setup may take longer. But the day to day operations are then easier.
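For the hobbyist case it really is a couple of commands; a sketch assuming a Debian-ish box running nginx (package names vary by distro):

    # Install certbot with the nginx plugin and request a cert.
    sudo apt install certbot python3-certbot-nginx
    sudo certbot --nginx -d example.com

    # Renewals are handled by the packaged cron/systemd timer;
    # this just confirms the renewal path works.
    sudo certbot renew --dry-run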
For new certificates you can keep the existing amount of human oversight in place, so nothing changes on that front.
With manual renewals, the cert either wouldn't get renewed and would become naturally invalid or the notification that the cert expired would prompt someone to finish the cleanup.
There are environments and devices where automation is not possible: not everything that needs a cert is a Linux server, or a system where you can run your own code. (I initially got ACME/LE working on a previous job's F5s because it was RH underneath and so could get dehydrated working (it only needs bash, cURL, OpenSSL); not all appliances even allow that.)
I'm afraid that with the 47-day mandate we'll see the return of self-signed certs, and folks will be trained to "just accept it the first time".
When I saw the 47-day expiration period, it made me wonder if someone is trying to force everyone onto cloud solutions like what Azure provides.
The old geezer in me is disappointed that it's increasingly harder to host a site on a cable modem at home. (But I haven't done that in over two decades.)
> The old geezer in me is disappointed that it's increasingly harder to host a site on a cable modem at home. (But I haven't done that in over two decades.)
It might be harder to host at home, but only for network reasons. It is perfectly straightforward to use letsencrypt and your choice of ACME client to do certificates; I really don't think that's a meaningful point of friction even with the shorter certificate lifetimes.
And it's not like the automation is hard (when I first did letsencrypt certs I did a misguidedly-paranoid offline key thing - for my second attempt, the only reason I had to do any work at all, instead of letting the prepackaged automation work, was to support a messy podman setup, and even that ended up mostly being "systemd is more work than crontab")
The second side is that if it's so tedious to approve and install, use solutions that require neither. Surely you don't need some artisanal certificate installation process that involves a human if you already admit that stricter issuance doesn't reduce any risk of yours. Thus, simplify your processes.
There are automated solutions to pretty much all platforms both free and paid. Nginx has it, I just checked and Apache has a module for this as well. Could the author write a blog post about what's stopping them from adopting these solutions?
In the end I can think of *extremely* few and niche cases where any changes to a computer system are actually (human) time-consuming due to regulatory reasons that at the same time require public trust.
Probably because making sure that clients trust the right set of non-public CAs is currently too much of a pain in the ass. Possibly an underrated investment in the security of the internet would be inventing better solutions to make this process easier, the way Certbot made certificate renewal easier (though it'd be a harder problem as the environment is more heterogeneous). This might reduce the extent of conservative stakeholders crankily demanding that the public CA infrastructure accommodate their non-public-facing embedded systems that can't keep up with the constantly evolving security requirements that are part and parcel of existing on the public internet.
From advertising companies, search engines (ok, sometimes both), certificate peddlers and other 'service' (I use the term lightly here) providers there are just too many of these maggots that we don't actually need. We mostly need them to manage the maggots! If they would all fuck off the web would instantly be a better place.
Desktop app development gets increasingly hostile and OSes introduce more and more TCC modals; you pretty much need a certificate to codesign an app if you sideload (and app stores have a lot of hassle involved). Mobile clients had it bad for a while (and it was just announced that Android will require a dev certificate for sideloading as well).
edit: also another comment is correct, the reason it is like that is because it has the most eyes on it. In the past it was on desktop apps, which made them worse
I'm not sure why many people are still dealing with legacy manual certificate renewal. Maybe some regulatory requirements? I even have a wildcard cert that covers my entire local network which is generated and deployed automatically by a cron job I wrote about 5 years ago. It's working perfectly and it would probably take me longer to track down exactly what it's doing than to re-write it from scratch.
For 99.something% of use cases, this is a solved problem.
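Roughly the kind of thing I mean, sketched with acme.sh and a DNS-01 challenge (the Cloudflare hook, domain, and paths here are illustrative, not my exact script):

    # One-time issuance of a wildcard cert via DNS-01 (needs a DNS API token).
    export CF_Token="..."   # hypothetical API credential
    acme.sh --issue --dns dns_cf -d '*.home.example.com'

    # Deploy the cert where services expect it, reloading on each renewal.
    acme.sh --install-cert -d '*.home.example.com' \
      --key-file       /etc/ssl/private/home.key \
      --fullchain-file /etc/ssl/certs/home.pem \
      --reloadcmd      "systemctl reload nginx"

    # acme.sh installs its own daily cron entry, so renewals are hands-off.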
Just because someone’s homelab is fully cert’d through Caddy and LE that they slapped together over a weekend two years ago, doesn’t mean the process is trivial or easy for the masses. Believe me, I’ve been fighting this battle internally my entire career and I hate it. I hate the shitty state of PKI today, and how the sole focus seems to be public-facing web services instead of, y’know, the other 90% of a network’s devices and resources.
PKI isn’t a solved problem.
Also, I used to do IT, so I get it, but what do you think the fix here is? You could also run your own CA that you push to all the devices, and then you can cut certificates as long as you want.
> PKI isn’t a solved problem.
PKI is largely a solved issue nowadays. Software like Vault from hashicorp (it's FIPS compliant, too: https://developer.hashicorp.com/vault/docs/enterprise/fips) let you create a cryptographically-strong CA and build the automation you need.
It's been out for years now; integrating the root CA shouldn't be much of an issue via group policies (in Windows; there are equivalents for macOS and GNU/Linux, I guess).
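For a flavor of what that looks like, a condensed sketch of standing up a Vault-backed internal CA (the role name, domain, and TTLs are made up):

    # Enable the PKI secrets engine and generate an internal root CA.
    vault secrets enable pki
    vault secrets tune -max-lease-ttl=87600h pki
    vault write pki/root/generate/internal \
        common_name="corp internal CA" ttl=87600h

    # A role constrains what the CA may issue...
    vault write pki/roles/internal-servers \
        allowed_domains="corp.example.com" allow_subdomains=true max_ttl=720h

    # ...and issuance becomes one call that any automation can make.
    vault write pki/issue/internal-servers common_name=app01.corp.example.com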
> Just because someone’s homelab is fully cert’d through Caddy and LE that they slapped together over a weekend two years ago, doesn’t mean the process is trivial or easy for the masses.
Quite the contrary: it means that the process is technically so trivial the masses can do it in an afternoon and live off it for years with little to no maintenance.
Hence, if a large organization is not able to implement that, the issue is in the organization, not in the technology.
You have no idea the environment they work in. The "skill issue" here is you thinking your basic knowledge of Vault matters.
> Software like Vault from hashicorp (it's FIPS compliant, too: https://developer.hashicorp.com/vault/docs/enterprise/fips) let you create a cryptographically-strong CA and build the automation you need.
They didn't tell you their needs, but you're convinced this vendor product solves it.
Are you a non-technical CTO by chance?
> there are equivalents for mac os and gnu/linux i guess
You guess? I'm sensing a skill issue. Why would you say it's solved for their environment, "I guess??"
> Quite the contrary: it means that the process is technically so trivial the masses can do it in an afternoon and live off it for years with little to no maintenance.
I'm sensing you work in a low skill environment if you think "home lab trivial" translates to enterprise and defense.
> Hence, if a large organization is not able to implement that, the issue is in the organization, not in the technology.
Absolutely meaningless statement.
* Yes, I have experience with Vault. I have deployed it internally, used it, loathed it, and shelved it. It’s entirely too cumbersome for basic PKI and secrets management in non-programmatic environments, which is the bulk of enterprise and business IT in my experience.
* You’re right, the organization is the problem. Let me just take that enlightened statement to my leadership and get my ass fired for insubordination, again, because I have literally tried this before with that outcome. Just because I know better doesn’t mean the org has to respect that knowledge or expertise. Meritocracies aren’t real.
The reason I don’t solve my own PKI issues with Caddy in my homelab is because that’s an irrelevant skill to my actual day job, which - see the point above - doesn’t actually respect the skills and knowledge of the engineers doing the work, only the opinions of the C-suite and whatever Gartner report they’re foisting upon the board. Hence why we have outdated equipment on outdated technologies that don’t meet modern guidelines, which is most enterprises today. Outside of the tech world, you’re dealing with comparable dinosaurs (no relation) who see neither the value nor the need for such slick, simplified solutions, especially when they prevent politicians inside the org from pulling crap.
I’ve been in these trenches for fifteen years. I’ve worked in small businesses, MSPs, school campuses, non-profits, major enterprises, manufacturing concerns, and a household name on par with FAANG. Nobody had this solved, anywhere, except for the non-profit and a software company that both went all-in on AD CA early-on and threw anything that couldn’t use a cert from there off the network.
This is why I storm into the comments on blogs like these to champion their cause.
PKI sucks ass, and I’m tired of letting DevOps people claim otherwise because of Let’s Encrypt and ACME.
It’s generally best to assume experts in other fields are doing things for good reasons, and if you don’t understand the reason it might be something other than them being dumb.
- Many companies (including competitors) are sponsoring LE, so the funding should be quite robust
- These companies are probably winning from the web being more secure, so the incentives are aligned with you (contrary to say, a company that offers something free but want to sink you under ads)
- the vendor lock-in is very weak. The day LE goes awry, you can move to another CA pretty painlessly
There are CAs supporting ACME that provide paid services as well.
That doesn't guarantee they don't have malicious intents, but it's different from a for-profit company that tries to make money with you.
So unless you’re part of the folks fine heavily curating (or jailbreaking) devices to make the above possible, PKI is hardly a solved problem. If anything it remains a nightmare for orgs of all sizes. Even in BigCo at a major SV company, we had a dedicated team to manage PKI for internal certificates - complete with review board, justification documents, etc - and that still only bought us a manual process with a lead time of 72 hours for a cert.
That said, it is measurably improved and I do think ACME/certbot/LE is on the right track here. Instead of constant bureaucratic revisioning of rules and standards documents, I believe the solution here is a sort of modern Wireguard-esque implementation of PKI and an associated certification program for vendors and devices. “Here’s the cert standard you have to accept, here’s the tool to automatically request and pin a certificate, here’s how that tool is configured for internal vs external PKI, and here’s the internal tooling standards projects that want to sign internal certs have to follow.”
Basically an AD CA-alike for SMB and Enterprise both. Saves me time having to get into the nitty gritty of why some new printer/IoT/PLC doesn’t support a cert, and improves the posture of the wider industry.
I wonder what they will do with the shorter validity periods. They aren't required to comply in the same way; it's not a great look not to but I can't believe the processes will scale (for them or their customers) to renewing an order of magnitude more frequently.
Just wait until SSL is used to prevent us from publishing anything.
Your ID will have to be on file and be compliant.
We've gone from really simple tools to tools that could easily be used to ensnare us and rid us of our rights.
Encryption doesn't necessarily mean privacy. It can also mean control.
Yeah, encryption is needed. But then you need authentication. And then, if authentication is controlled by corporations, you're f'd.
Instead you'd want identities to be distributed and owned by everyone. Many trust models have been developed, and other than raw UI problems (hi gpg/pgp) it's really not a terrible UX.
And yes, there are alternatives, but everything is made so that LetsEncrypt is the only reasonable choice.
First, if you are not using https, you get shunned by every major web browser: you don't get the latest features, even those that have nothing to do with encryption (ex: brotli compression), downloads get blocked, etc. So you need https; good thing LetsEncrypt makes it so easy, so you use LetsEncrypt.
Because of the way LetsEncrypt verification works, you get short-term certificates; ok, fine. Other CAs do things differently, making short-term certificates impractical, so your certificates last longer. But now, browsers are changing their requirements to allow only short-term certificates. Not a problem, just switch to LetsEncrypt, and it's free too.
Also, X.509 certificates, which are the basis of https (incl. TLS, HTTP/3, ...), only support a single signature, so I guess it's LetsEncrypt and nothing else.
Software didn't have that sort of "ticking time bomb" element before, I think?
I think I understand why it's necessary: we have a single, globally shared public namespace of domain names, which we accept will turn over their ownership over the long run, just like real estate changes hands. So we need expiration dates to invalidate "stale" records.
We've already switched over everything to Let's Encrypt. But I don't think anyone should be under the delusion that automation / ACME is failproof:
https://github.com/certbot/certbot/issues?q=is%3Aissue%20ren...
https://github.com/cert-manager/cert-manager/issues?q=is%3Ai...
https://github.com/caddyserver/caddy/issues?q=is%3Aissue%20A...
(These are generally not issues with the software per se, but misconfiguration, third-party DNS API weirdness, IPv6, rate limits, or other weird edge cases.)
Anyway, a gentle reminder that Let's Encrypt suggests monitoring your SSL certificates may be "helpful": https://letsencrypt.org/docs/monitoring-options/ (Full disclosure: I wrote the most recent addition to that list, with the "self-hosted scripts".)
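Even without a monitoring service, a cron-able expiry check is a few lines of shell; a sketch, with the host and threshold as placeholders:

    #!/bin/sh
    # Warn if the served certificate expires within 14 days.
    host=example.com
    if ! echo | openssl s_client -connect "$host:443" -servername "$host" 2>/dev/null \
        | openssl x509 -noout -checkend $((14 * 86400)); then
      echo "cert for $host expires in under 14 days"   # pipe to mail/alerting
    fi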
> I am responsible for approving SSL certificates for my company.
And that is exactly what the requirements are intending to prevent. Automation is the way.
The system is working!
The hand-waving away of certbot/ACME at the very end of the article only really goes to show that it hasn't been looked into properly for whatever reason.
When I was doing this, via email, if you wanted a certificate for sub.subdomain.example.com - the list of email addresses were in order something like hostmaster@sub.subdomain.example.com and hostmaster@example.com - you clicked the radio option that best suited you and you were good to go. You don't need email addresses for every subdomain.
What does that even mean? Is he smelling them to check for freshness?
I get having a process around a first-time request, perhaps, to ensure it’s set up right, but renewals?
> My stakeholders understand their roles and responsibilities
Oh no. All that’s missing here is a committee, a steering group, and daily stand-ups.
In 2025 it's not possible to create an app and release it into the world and have it work for years or decades, as was once the case.
If your "developer certificate" for app stores and ad-hoc distribution is valid for a year, then every year you must pay a "developer program fee" to remain a participant. You need to renew that cert, and you need to recompile a new version within a year. Which means you must maintain a development environment and tools on an ongoing basis for an app that may be feature- and operationally-complete.
All this is completely unnecessary except when it comes to reinforcing the hegemony of app-store monopolists.
I understand the point of CTLs, and it's necessary given that every browser and device is configured to trust CAs that you wouldn't actually trust. It's had awful side effects for people who want to host low-traffic sites, or fly under the radar for whatever reason.
But certs in every other context have become nigh impossible except in enterprise settings with your own CA and cert servers. From things like printers and network appliances to entirely non-HTTP applications like VPN (StrongSwan and OpenVPN both have/support TLS with signed SSL certs, but place very different constraints on how those work in practice, what identities are supported, and how or if wildcards work).
Very little attention has been paid to non-general purpose and non-http contexts as things currently stand.
> I am responsible for approving SSL certificates for my company. [...] I review and approve each cert. What started out as a quarterly or semi-monthly task has become a monthly-to-weekly task depending on when our certs are expiring.
I don't get the security need for manually approving renewals, and the author makes no attempt to justify this either. It may make sense for some manual process to be in place for initial issuances, as certificates are permanently added to a publicly-available ledger. And to take a step back, do you need public certs to begin with? Can you not have an internal CA? Again, the author makes no attempt to justify this, or demonstrate understanding in the post.
> email-based validation may as well not exist when we need to update a certificate for test.lab.corp.example.com because there is no webmaster@test.lab.corp.example.com.
I know that this is an example, but as a developer it would be a pain to have to go through a manual, multi-day process for my `test.lab.corp.example.com` to work. And the rest of the post seems to imply that this is actually the case at OP's org.
> Which resource-starved team will manage the client and the infrastructure it needs? It will need time to undergo code review and/or supplier review if it’s sold by a company. There will be a requirement for secrets management. There will be a need for monitoring and alerting. It’s not as painless as the certificate approval workflow I have now.
There are additional costs and new processes to be made, yes, but even from a non-technical POV this appears to be a good time to lead and take ownership.
> Any platforms that offer or include certificate management bundled with the actual services we pay for will win our business by default. [...] What is obvious to me is that my stakeholders and I are hurrying to offload certificate management to our vendors and platforms and not to our CA.
That's okay. If you hate change and don't want to take ownership, pay someone else to take ownership.
Vs. shoving HTTPS proxy services in front of insecure backends, which is often easy.
I inherited a process using the same thing last year and it is the most insane nonsense I can think of. These types of companies have support that is totally useless, and their entire business model is to charge 1000x or more what competitors charge (e.g. compare the signature price to an HSM in GCP) while also providing less functionality, hoping people get sucked in and trapped in their ecosystem by purchasing an expensive cert such as an "EV" cert. I'm still not totally clear on what that does, by the way, but I'm assured it's very important for security on Windows. Not security against bad guys, though... it appears to be for security against no-name anti-virus vendors deleting your files if they detect you didn't pay this "EV" cert ransom. They don't need to actually detect threats based on code or behavior; they just detect whether you have enough money.
gdbsjjdn•1h ago
For all the annoyance of SOC2 audits, it sure does make my manager actually spend time and money on following the rules. Without any kind of external pressure I (as a security-minded engineer) would struggle to convince senior leadership that anything matters beyond shipping features.
Jeslijar•1h ago
Why wouldn't you go with a week or a day? Isn't that better than a whole month?
Why isn't it instead just a minute? or a few seconds? Wouldn't that be better?
Why not have certificates dynamically generated constantly and have it so every single request is serviced by a new one and then destroyed after the session is over?
Maybe the problem isn't that certificates expire too soon, maybe the problem is that humans are lazy. Perhaps it's time to go with another method entirely.
bananapub•1h ago
A month is better than a year because we never ever ever managed to make revocation work, and so the only thing we can do is reduce the length of certs so that stolen or fraudulently obtained certs can be used for less time.
FuriouslyAdrift•55m ago
https://www.darkreading.com/endpoint-security/china-based-bi...
capitol_•54m ago
I'm sure that you are perfectly able to do your own research, why are you trying to push that work onto some stranger on the internet?
yjftsjthsd-h•1h ago
There is in fact work on making this an option: https://letsencrypt.org/2025/02/20/first-short-lived-cert-is...
> Why isn't it instead just a minute? or a few seconds? Wouldn't that be better?
> Why not have certificates dynamically generated constantly and have it so every single request is serviced by a new one and then destroyed after the session is over?
Eventually the overhead actually does start to matter
> Maybe the problem isn't that certificates expire too soon, maybe the problem is that humans are lazy. Perhaps it's time to go with another method entirely.
Like what?
allan_s•1h ago
A whole month puts you in the zone of: if you don't have the resources to automate it, it's still doable by a human; not enough to crush somebody, but still enough to make "let's fully automate this" an option to consider.
Hence why it's better than a week or a day (too much pressure for small companies), and better than hours/minutes/seconds (that means going from one year straight to "it must be fully automated right now!").
A year or two years was not a good idea, because you lose knowledge and it creates pressure (oh my... not the scary yearly certificate renewal; I remember last year we broke something, but I don't remember what...).
A month, you either fully document it, or at least have it fresh in your mind. A month also gives you time to think each time: ok, we have 30 certificates, can't we have a wildcard, or a certificate with several domains in it?
> Perhaps it's time to go with another method entirely.
I think that's the way forward, it's just that it will not happen in one step, and going to one month is a first step.
source: We have to manage a lot of certificates for a lot of different use cases (ssh, mutual TLS for authentication, classical HTTPS certificates, etc.) and we learned the hard way that no, 2 years is not better than 1, and I agree that one month would be better.
also https://www.digicert.com/blog/tls-certificate-lifetimes-will...
belval•57m ago
Ah yes, let's make a terrible workflow to externally force companies who can't be arsed to document their processes to do things properly, at the expense of everyone else.
hombre_fatal•49m ago
Monthly expiration is a simple way to force you to automate something. Everyone benefits from automating it, too.
ameliaquining•57m ago
(Why not less than six days? Because I think at that point you might start to face some availability tradeoffs even if everything is always fully automated.)
yladiz•1h ago
> Perhaps it's time to go with another method entirely.
What method would you suggest here?
zimpenfish•46m ago
Could it work that your long-term certificate (90 days, whatever) gives you the ability to sign ephemeral certificates (much like, e.g. LetsEncrypt signs your 90 day certificate)? That saves calling out to a central authority for each request.
Thorrez•1h ago
Then if your CA went down for an hour, you would go down too. With 47 days, there's plenty of time for the CA to fix the outage and issue you a new cert before your current one expires.
8organicbits•37m ago
Using LetsEncrypt and ZeroSSL together is a popular approach. If you need a stronger guarantee of uptime, reach for the paid options.
https://github.com/acmesh-official/acme.sh?tab=readme-ov-fil...
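acme.sh takes the CA as a flag, so a crude fallback is easy to script; a sketch (acme.sh does not fail over between CAs on its own):

    # Try Let's Encrypt first; on failure (e.g. an outage), retry via ZeroSSL.
    acme.sh --issue -d example.com -w /var/www/acme --server letsencrypt \
      || acme.sh --issue -d example.com -w /var/www/acme --server zerossl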
Thorrez•22m ago
>If you need a stronger guarantee of uptime, reach for the paid options.
We don't. If we had 1 minute or 1 second lifetimes, we would.
FuriouslyAdrift•1h ago
He said six figures for the price would be fine. This is an instance where business needs and technology have gotten really out of alignment.
9dev•50m ago
It'll take about fifteen minutes of time, and executive level won't ever have to concern themselves with something as mundane as TLS certificates again.
zoeysmithe•46m ago
Business culture devaluing security is the root of this, and I hope people see the above example of everything that's wrong with how some technology companies operate. "Just throw money at the problem because security is an annoying cost center" is super bad leadership. I'm going to guess this guy also has an MFA exception on his account and a 7-character password because "it just works! It just makes sense, nerds!" I've worked with these kinds of execs all my career and they are absolutely the problem here.
darkwater•30m ago
I completely agree with you, but you would be astonished by how many companies, even small/medium companies that use recent technologies and are otherwise pretty lean, still think that restarting/redeploying/renewing as little as possible is the best way to go, instead of fixing the root issue that makes restarting/redeploying/renewing a pain in the ass.
9dev•25m ago
I mean… there's a tradeoff, to be sure. I also have a list of things that could be solved properly, but can't justify the time expense of doing so compared to repeating the shortcut every so often.
It's like that expensive espresso machine I've been drooling over for years—I can go out and grab a lot of great coffee at a barista shop before the machine would have saved me money.
But in this particular instance, sure; once you factor the operational risk in, proper automation often is a no-brainer.
btown•58m ago
Imagine you run an old-school media company who's come into possession of a beloved website with decades of user-generated and reporter-generated content. Content that puts the "this is someone's legacy" in "legacy content." You get some incremental ad revenue, and you're like "if all I have to do is have my outsourced IT team do this renewal thing once a year, it's free money I guess."
But now, you have to pay that team to do a human-in-the-loop task monthly for every site you operate, which now makes the cost no longer de minimis? Or, fully modernize your systems? But since that legacy site uses a different stack, they're saying it's an entirely separate project, which they'll happily quote you with far more zeroes than your ads are generating?
All of a sudden, something that was infrequent maintenance becomes a measurable job. Even a fully rational executive sees their incentives switch - and that doesn't count the ones who were waiting for an excuse to kill their predecessors' projects. We start seeing more and more sites go offline.
We should endeavor not to break the internet. That's not "don't break the internet, conditional on fully rational actors who magically don't have legacy systems." It's "don't break the internet."
johannes1234321•48m ago
A short cycle ensures either automation or keeping memory fresh.
Automation of course can also be forgotten and break, but at least it's written down somewhere in some form (code), rather than living in the personal memory of a long-gone employee who used to upload certs to some CA website for manual signing, etc.
ozim•41m ago
Now they are doing the next plausible solution. It seems like 47 days is something they arrived at from Let's Encrypt's experience, estimating load from current renewals; but that last part I am just imagining.