It's been six years, this author is still right, and now the idiots at the CA/B have decided to move the bomb to a 47 day timer for the whole Internet.
Anybody could look up a guide online on how to monitor who at their Starbucks was logging into Facebook or whatever. We were having to train a generation of humans to be afraid of public wifi.
I'm not sure I would object to that if it were used sparingly and you could opt out.
Things have improved significantly with HTTPS adoption.
MITM is a user->service concern. If someone is between a service and LE, there are much bigger problems.
> If someone is between a service and LE
There is always someone there: my ISP, my government that monitors my ISP, the LE's ISP, and the US government that monitors the LE's ISP.
There are a lot of random internet routers between CAs and websites which effectively have the ability to get certificates for any domain they want. It just seems like such an obvious vulnerability I'm kinda shocked it hasn't been exploited yet. Perhaps the fact that it hasn't is a sign such an attack is more difficult than my intuition suggests.
Still, I'd be a lot more comfortable if DNSSEC or an equivalent were enforced for domain validation. Or perhaps if we just cut out the middleman and built a PKI directly into the DNS protocol, similar to how DANE or Namecoin work.
Also, Let's Encrypt validates DNSSEC for DNS-01 challenges, so you can use that if you like, although CAs in general are not required to do this, there are various reasons why a site operator might not want to, and most don't.
There are two fundamental problems with DANE that make it unworkable, and that would presumably also apply to any similar protocol. The first is compatibility: lots of badly behaved middleboxes don't let DNSSEC queries through, so a fail-closed system that required end-user devices to do that would kick a lot of existing users off the internet (and a fail-open one would serve no security purpose). The other is game-theoretic: while the high number of CAs in root stores is in some ways a security liability, it also has the significant upside that browsers can and do evict misbehaving CAs, secure in their knowledge that those CAs' customers have other options to stay online. And since governments know that'll happen, they very rarely try to coerce CAs into misissuing certificates. By contrast, if the keepers of the DNSSEC keys decided to start abusing their power, or were coerced into doing so, there basically wouldn't be anything that anyone could do about it.
I think you're wrong about DANE's flaws applying to "any similar protocol". The ossification problem could be solved by DNS over HTTPS cutting out the middleboxes, though I agree adoption of that will take time, much as adoption of HTTPS itself has. The game theory problem has been solved by CT, as you noted. You just need to subject certificates issued through the new system to the same process.
Remember that any actor capable of seizing control of DNS can already compromise the existing PKI by fulfilling DNS-01 challenges. You're not going to be able to solve that problem without completely replacing DNS with a self-sovereign system similar to Namecoin, though I can't imagine that happening anytime soon.
In reality, successful society lives halfway down tons of slippery slopes at any given point in time, and engineers in particular hate this. Yet this has been true since basically forever.
I'm sure cavemen engineers complained about how it's not secure to trust that your cave is the one with the symbol you made on the wall, etc.
But also, there is no choice now. The best we can do is encourage people to use web browsers that let people visit http sites, and afaik, those don't exist anymore.
(Hugs)
I think they are implying that if someone can man-in-the-middle your website, then they can also man-in-the-middle this request and issue a certificate for your domain. However, the threat model of a man in the middle between a user and your web server is very different from a man in the middle between Let's Encrypt and your web server.
Before the widespread use of HTTPS it was trivial to connect to a coffeeshop's wifi network and sniff everyone else's traffic, and ISPs would man-in-the-middle you to inject their own ads into websites you were looking at.
On the other hand, to man-in-the-middle Let's Encrypt -> your web server, you likely need to be a state-level actor and/or be, or have hacked, a major telecom (assuming your web server is running in a reputable data center). Folks like that can almost certainly already issue a certificate for your domain without running a man in the middle on Let's Encrypt.
His critiques of why LE is flawed, security-wise, are spot on, and I suspect something like SSH keys, as he suggests, would be pretty much as good.
But there's a reason we're encrypting everything, and the time when we started encrypting offers a clue as to why. Mass surveillance threat actors are not going to go to the trouble and visibility of MITMing every cert connection, but they will (and in the case of NSA did) happily go to the trouble of hoovering up network traffic en masse and watching how people surf. HTTPS provides some protection there because it at least hides the paths to the specific pages you are reading as you surf online, including things like search engine query terms.
The idea that $3.6m is a lot of money to encrypt a huge chunk of web traffic, or that Google is eagerly guarding the money it makes (?) off web certs, which must be a tiny fraction of its actual income, is a clue that this is maybe not a greedy conspiracy.
Because Google forced us to, by throwing up scary warnings if we didn't do it.
Google doesn't care about $3.6mm. They do care about the additional control they have by this scheme.
> [HTTPS] at least hides the paths to the specific pages you are reading as you surf online, including things like search engine query terms.
This assumes there isn't a secret firehose feed from Google to the NSA, which I don't think is a safe assumption.
Please do tell. I'm curious what forced him to join The Borg.
I'm not as convinced as the author is that nation states can easily tamper with certificates these days. I'm not sure how much CT checking we do before each page load, but either nation states are compelling the issuance of certs that aren't in the CT logs, or the certs are logged and you can just get a list of who the nation states are spying on. Seems like less of a problem than it was a decade ago.
The author seems to miss the one guarantee that certificates provide: "the same people that controlled this site on $ISSUANCE_DATE control the site right now". That can be a useful guarantee.
We were working on some feature for a client's website, and suddenly things started breaking. We eventually tracked it down to some shoddy HTML + Javascript being on our page that we certainly didn't put there, and further investigation revealed that our ISP - whom we were paying for a business connection - was just slapping a fucking banner ad on most of the pages that were being served.
This was around ... 2008? I wonder if they were injecting it into AJAX responses, too.
My boss called them up and chewed them several new assholes, and the banner was gone by afternoon.
How?
One thing that helps drive it away at work is that we're a University, and essentially all the world's universities have a common authenticated WiFi, because students and, perhaps more importantly, academics just travel from one to another and expect stuff to work (if you got a degree in the last 20 or so years you likely used this: eduroam). But obviously they don't trust each other on this stuff, so their sites all use the Web PKI, the same public trust as everybody else. Internal stuff might not, but the moment you're asking some History professor to manually install a certificate you might as well assign them a dedicated IT person. So, everything facing ordinary users has public certs from, of course, Let's Encrypt.
Edited to name eduroam specifically.
Tbh it kinda makes sense for those systems, when used only with internal tools and on company devices... but yeah I’d just (of course) use Let’s Encrypt if I was setting it up for a client.
1. You're somehow connecting to Facebook and Amazon over HTTP, not HTTPS
2. Your browser has an extension from your ISP installed that's interfering with content
3. You've trusted a root CA from your ISP in your browser's trust store
This inspired me to add a list of all script tags to error reports.
I feel like there needs to be a name for this. For now, "Those who do not learn from history are doomed to repeat it." is the most apt I think.
Happens constantly when you're essentially born on 3rd base. Maybe that's the proper name. Born on 3rd Base Syndrome.
[1] https://en.wikipedia.org/wiki/Glass%E2%80%93Steagall_legisla...
Amateur level ... Around 2006, we had some clients complaining about information on our CMS being duplicated.
No matter what we did, there was no duplication on our end. So we started to trace the actions from the client (inc. browser, IP etc). And lo and behold, we got one action coming from the client, and another from a different IP source.
After tracing back the IP, it was an anti-virus company. We installed the software on a test system, and ... yep, the assh** duplicated every action, inc. browser settings, session, you name it.
Total and complete mimic beyond the IP. So any action the user did, plus the information on the page, was sent to their servers for "analyzing".
Little issue ... This was not from the public part of our CMS but the HTTPS protected admin pages!
Sure, our fault for not validating the session with extra IP checks, but we did not expect the (admin-only) session to leak out from an HTTPS connection.
So we tried to see if they reacted to login attempts at several bank pages. O, yes, they sent the freaking passwords etc. We tried on an unused bank account, and, o look, it was duplicating bank actions (again, bank at fault for not properly checking the session / IP).
It only failed on a bank transfer because the token for authorization was different on their side, vs our request.
You can imagine that we had a rather, how to say, less than polite conversation with the software team behind that anti-virus. They "fixed it" in a new release. Did they remove the whole tracking? Nope, they just removed the code for the session stealing if the connection was secure.
O, and the answer to why they did it: "it's a bug" (yea, right, you mimic total user behavior, and it's a "bug"). Translation: Legal got up their behinds for that crap and they wanted to avoid legal issues with what they did.
Remember folks, if it's free, you're the product. And when it's paid, you are often STILL the product. And yes, that was a paid anti-virus "online protection". And people question why I never run any anti-virus software beyond an off-line scan from time to time, and have Windows "online" protections disabled.
Companies just cannot stop themselves from being greedy. Same reason why I NEVER use Windows 11... You expect, if you paid for Windows, Office or whatever, not to be the product, but hey ...
You can stop ISP ad injection with solutions much less complex than WebPKI.
Simply using TOFU-certificates (Trust On First Use) would achieve this. It also gives you the "people who controlled this website the first time I visited it still control it" guarantee you mention in your last paragraph.
TOFU isn't ideal, but it's an easy counterexample to your claims.
As a user how would I know if I should trust the website's public key on first use?
It's a counterexample, not a recommendation.
If you need this guarantee, use self-certifying hostnames like Tor *.onion sites do, where the URL carries the public key. More examples of this: https://codeberg.org/amjoseph/not-your-keys-not-your-name
I can set which CAs can sign certs for my domains, and monitor if any are issued that I didn't expect.
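For reference, that restriction is a CAA record; a zone-file sketch for a hypothetical domain might look like:

```
; Only Let's Encrypt may issue for this domain;
; violation reports go to the iodef address
example.com.  IN  CAA  0 issue "letsencrypt.org"
example.com.  IN  CAA  0 iodef "mailto:security@example.com"
```

Conforming CAs are required to check this before issuing, and CT monitoring covers the "certificates I didn't expect" half.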
They can MITM the connection between the host and LE (or any other CA resolver, ACME or non-ACME, doesn't matter). This was demonstrated by the attack against jabber.ru, at the time hosted in OVH. I recommend reading the writeup by the admin (second link from the top in TFA).
This worked, because no-one checked CT.
That said, I don't think there's a way to stop a nation state from seizing control of a domain they control the TLD name servers for without something like Namecoin where the whole DNS system is redesigned to be self-sovereign.
The system is tamper-evident, not tamper-proof. A nation state adversary can indeed impersonate my web site and obtain a new certificate, but the web browser doesn't trust that certificate without seeing proof it was in the CT logs. So now the nation state adversary needs proof it was logged.
Whoever issued them the proof has 24 hours to include that dodgy certificate in their public logs for everyone to see. If they lie and don't actually log it, the proof will be worthless and if shown to a trust root this bad proof will result in distrust of the log's operator. That's likely a six or seven figure investment thrown away, for each time this happens.
On the other hand if they do log it, everybody can see what was issued and when, which is inconvenient if you'd prefer to be subtle like the NSA and to some extent Mossad. If you're happy to advertise that you're the bad guys, like the Russians and North Koreans, you do have the small problem that of course nobody trusts you, so, you can't expect any co-operation from the other actors...
This isn't like a misissuance where you can blame the CA and remove them from the root stores; they'd just be following the normal domain validation processes prescribed in the BRs.
Going to Portland to check whether it's on fire would be a lot of effort - so to some extent I must take it on trust that it's not actually on fire despite Donald Trump's statement - whereas visiting crt.sh to check for the extra certificates somebody claims the US government issued is trivial.
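For anyone who hasn't tried it, the crt.sh check really is a one-liner (needs network access; example.com is just a placeholder):

```shell
# Ask the CT logs, via crt.sh, what certificates have been issued for a domain.
# Surprise entries here are how you'd spot a misissued or coerced cert.
curl -s 'https://crt.sh/?q=example.com&output=json' | head -c 400
```

Services like Cert Spotter can run that query for you continuously and email you about new issuances.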
I'm not saying there's no value in being able to detect when you're compromised. I'm just saying it would be better if the compromise wasn't possible to begin with.
What would somewhat help would be a CAA record with a specified ACME account key. The attackers would then have to alter the DNS record, which would be harder, as you describe. (Or pull the key from the VM disk image, which would cross another line.)
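RFC 8657 defines exactly that: CAA parameters that pin issuance to one ACME account and one validation method (the account URL below is hypothetical):

```
example.com.  IN  CAA  0 issue "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/12345; validationmethods=dns-01"
```

With that in place, fulfilling a challenge isn't enough; the request also has to come from the pinned ACME account, whose key the attacker would additionally need.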
> the CA would be immediately distrusted by browsers, not as punishment but to deter state actors.
Do you think browsers operate outside of states?
> Compelling by the state to do something that destroys a company is illegal in many jurisdictions
How would it destroy the company? It might affect reputation, but as long as it wasn't the company doing it on its own, they can just claim to be the victim (which they are). It will only affect the company if it becomes public knowledge, which the state actor doesn't want anyway. I don't think a reputation for not responding to legal warrants is protected by the law. Also, for example, the USA is famous for installing malware on other countries' heads of state.
Honestly this is the kind of law enforcement which is fair in my opinion. It is much more preferable to mandated scanning (EU Chat Control), making the knowledge or selling of math illegal, or sabotaging public encryption standards. No general security is undermined. It's just classic breaking into some system and intercepting. Granted, I think states shouldn't do it outside of their jurisdiction, but that is basically intelligence services fighting with each other.
If you're in the business of selling X.509 certs trusted by browsers, then not being trusted by browsers kinda limits the marketability of your product.
I don't believe the browsers could be coerced to not distrust such a CA. In every root program I know there's a clause that membership in the program is at the browser's pleasure. (Those that have public terms, i.e. not msft, but I'd assume those have similar language.)
Re: they can just do it, well, I think they'd be distrusted the same.
In Symantecgate one of the reasons for distrust was that they signed the FPKI bridge, so I think no CA in the future will sign a sub-CA that will sign FPKI certs.
> Also for example the USA is famous for installing malware on other countries head of state.
Yeah, exactly. I think they have more targeted ways that risk less detection and less collateral damage.
Do you think Google or Apple are going to care? They bowed down to China; I think the state they have their headquarters in has even more leverage. As for Mozilla Firefox on Linux, maybe, but I wouldn't trust this too much either.
> I think they have more targeted ways that risk less detection and less collateral damage.
I think they don't really need to care about this, it was quite clear that no other state is publicly doing anything against this.
This has to be a rage bait comment, but anyway, how do you expect 'injections' to show up on 'http-only' ?
"Don't mind us, we're just sitting in the middle of your traffic here and recording your logins in plaintext"
While the situation with email is worse, that does not mean it should be like that.
Kinda sorta. In transit most email is encrypted; the big mail providers all both speak and expect TLS encryption when moving mail. Almost everybody configures TLS-encrypted IMAP if they use a client, or reads email over HTTPS.
> A public invitation to protest against my authoritarian government should not turn on total paranoia mode
The expectations ordinary people have for how the web works are not met by the basic HTTP protocol. They need HTTPS to deliver those basic assumptions. Who decides the hours of the local bakery? Is it Jeff Bezos? HTTP says that seems fine, but HTTPS says no, the bakery gets to decide, not Jeff.
I sure love when decisions reduce themselves to single points of consideration by virtue of them being discussed in a heated internet forum thread
Not a viable option in a lot of places. Nor does anyone really even want to consider this possibility of their ISP being able to MITM something in the first place.
That's the least of the problems; they (anyone with basic access to your network, actually) could easily overwrite every cookie or session on your machine to use their referral links, i.e. Honey & PayPal's fraud [0], without you having any idea. Now maybe you don't care, but it's stealing other people's potential earnings.
[0] https://www.theverge.com/24343913/paypal-honey-megalag-coupo...
But the certificate is signed with Let's Encrypt's key and bound to your own, and neither private key ever leaves its respective server.
Being generous, I would say they are referring to the client having an invalid SSL cert approved on their local machine, in which case it's a client problem.
To ignore encryption altogether is a silly idea. Maybe it shouldn't be so centralised in one company, though.
EDIT: I understand how it works. This wasn’t my point.
The point (I think) that TLA is trying to make is that encryption isn’t enough. It wouldn’t be a good situation where someone looks at their house burning and says “well at least nobody could ever read my https traffic.”
The browser not trusting the CA that signed the certificate prevents this. As the commenter said above, they would first need to install a certificate into your list of trusted certs for this to work. Your IT department can do that because they have root on your machine, vpn-du-jour.com can not, and neither can anybody else without root.
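On a typical Linux box, at least, that trust store is literally just root-writable files (paths vary by distro; the commands below are illustrative, not something to run casually):

```shell
# Debian/Ubuntu layout: trusted roots live here, writable only by root
ls -ld /etc/ssl/certs /usr/local/share/ca-certificates 2>/dev/null || true

# With root, adding a CA that every app then trusts is two commands:
#   cp some-ca.crt /usr/local/share/ca-certificates/
#   update-ca-certificates
```

macOS and Windows gate the equivalent store behind admin-only APIs, which any admin-privileged installer also passes.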
Also, I believe that when I download “Shoot Your Friends Online” and install that, it also asks for root privileges (in order to make sure that no cheating software runs on my computer that would allow me to “shoot more of my friends quicker.”)
I also think that when I install “Freecell Advanced,” it also comes with “Freecell Advanced Updater” that needs root privileges (in order to “update Freecell Advanced.”)
Do I understand correctly that there is nothing stopping all three of these — running with root privileges — from installing certificates?
How many web site owners really do that? I mean, even Cloudflare hadn't been running a tight ship in this regard[0] until recently.
[0]: https://blog.cloudflare.com/unauthorized-issuance-of-certifi...
Manual long-term keys are frowned upon due to potential key leaks, such as Heartbleed, or admin misuse, such as copies of keys ending up on lots of devices over the life of that 10-year key.
Automated and short lived keys are the solutions to these problems and they're pretty hard to argue against, especially as the key never leaves the server, so the security concerns are invalid.
That's not to say you can't levy valid criticism. I'm not sure if the author is entirely serious either though.
p.s. Certbot and Cert-manager are probably fine, but they're also fairly interesting attack vectors
This is completely backwards: TOFU schemes aren't acceptable for the public web because the average user (1) isn't equipped to compare certificate fingerprints for their bank, and (2) shouldn't be exposed to any MITM risk because they forget to. The entire point of a public key infrastructure like the Web PKI is to ensure that technical and non-technical people alike get transport security.
(The author appears to unwittingly concede this point with the SSH comparison -- asking my grandparents to learn SSH's host pinning behavior to manage their bank accounts would be elder abuse. It works great for nerds, and terribly for everyone else.)
No, but I was extending a charitable amount of credulousness :-)
Why is it reasonable to trust the key on first use? What if the first use itself has a man-in-the-middle that presents you the middle-man's key? Why should I trust it on first use? How do I tell if the key belongs to the real website or to a middle-man website?
A new cloud VM running in another city? I would trust it by default, but you don't have a lot of choice in many corporate environments.
Funnily enough, there is a solution to this: SSH has a certificate authority system that will let your SSH clients trust the identity of a server if the hostkey is signed and matches the domain the SSH CA provided.
Like with HTTPS, this sort of works if you're deploying stuff internally. No need to check fingerprints or anything, as long as whatever automation configured your new VM signs the generated host key. Essentially, you get DV certificates for SSH except you can't easily automate them with Let's Encrypt/ACME because SSH doesn't have tooling like that.
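A sketch of that flow with plain OpenSSH tooling (all hostnames hypothetical); the signing step your provisioning automation would do looks like:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# CA keypair - in practice generated once and kept offline
ssh-keygen -q -t ed25519 -N "" -f host_ca -C "internal host CA"

# Host key for a hypothetical new server (normally created by the VM image)
ssh-keygen -q -t ed25519 -N "" -f ssh_host_ed25519_key

# Sign the host key, binding it to the server's hostname (the "principal")
ssh-keygen -q -s host_ca -I server01 -h -n server01.internal.example \
    ssh_host_ed25519_key.pub

# Inspect the resulting host certificate
ssh-keygen -L -f ssh_host_ed25519_key-cert.pub
```

Clients then trust every host the CA signed with a single known_hosts line of the form `@cert-authority *.internal.example <contents of host_ca.pub>`, with no per-host fingerprint checking.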
For prod.
ssh -o UserKnownHostsFile=/dev/null
This! Forget about the average user. As a technical user, too, I don't know how I would compare fingerprints every single time without making a mistake. I could install software, or write my own, to do this on desktop, but what would I do on cell phones?
And TOFU requires "trust" on first use. How do I make sure that I should be trusting the website's public key on first use? It doesn't seem any easier to solve than PKI.
Usually such questions get replied to with a recommendation of implementing DNSSEC. Which is also obviously PKI and in many ways worse than WebPKI.
It's a bit like suggesting that AES-GCM has risks so we ought to just switch to one-time-pads.
Or you can always get the fingerprint out of band. If it's some friend granting you SSH access to their server, or a vendor, or whatever, you can ask them to write the fingerprint on a piece of paper and give it to you, with you checking that the paper really comes from them and then checking the fingerprint against it.
Nobody does; so there's very little to lose by also encrypting.
But it's really not, as countless comments here in this thread have correctly pointed out.
Why would you run certbot as root? You don't do that with any other server.
The official docs still recommend doing so: >Certbot is most useful when run with root privileges, because it is then able to automatically configure TLS/SSL for Apache and nginx.
Also his point that it "supplants better solutions" is inarguably true. The 2010s had lots of conversations about certificate transparency and CA changes that just don't happen today because the existence of Let's Encrypt made it so easy to put a cert-signed website online.
[1] of US firefox users: https://letsencrypt.org/stats/
- It introduces an exploitable attack vector
- He sees it as a Trojan Horse, and fears for what will happen in the future
There are a few static sites I run where there is no exchange of information. I'm locked into ensuring certificates exist for these sites, even though there's nothing to protect (unless you count the ensuring the content is really from me as protecting something).
Also by the author: https://michael.orlitzky.com/articles/in_defense_of_self-sig...
This is just not true!!!! CAs don't pay google to be in their root store.
> But if someone is able to perform a man-in-the-middle attack against your website, then he can intercept the certificate verification, too
The reasoning goes that most (potential) MITM attacks are between you and your ISP. Let's Encrypt can connect to the backbone basically directly, so most MITM attacks won't reach them. Also, starting on September 15, 2025 (Let's Encrypt has been doing this for a while already, though), all domain validation requests have to be made from multiple perspectives, making MITM attacks harder.
> otherwise they drop certificates from Chrome and this has happened.
As far as I know, all the CAs Google dropped were dropped because the CA misbehaved and misissued certs or was obviously failing at their job. Also, all CAs Google has removed from their root store have also been removed by Mozilla (or weren't removed because Mozilla never included them).
The second part is the important one in this context, because there are ways to trick your DNS resolution or IP routing. The DNS resolution part is mitigated with DoH (which also uses HTTPS with certificates), but that doesn't cover everything.
It might not be so fundamental for someone just browsing sites, but for the ones you send data to (not just credit card info) you may run into some risks.
Otherwise the evil MITM can decrypt the traffic, modify/inspect it, and re-encrypt it with their own self-signed certificate, and you're none the wiser.
> My medical opinion: if it hurts, maybe you should stop doing it.
Funny enough, that's the exact opposite of the common wisdom for deployment:
> If it’s painful, do it often.
The idea is that if you were to wait months between deployments and do enormous deployments, there is a very good chance that you will have problems every time. First, if it's infrequent, you can tolerate things like downtime windows for deployment, which is not ideal. Second, it batches tons of changes at once, which increases the chances you'll need to roll it all back. Third, it makes it harder to even figure out what went wrong, since the problem-causing change could've gone in months ago.
By having ACME renewal happen very often, it should become apparent very quickly when it's not working, much closer to when you made the change that broke it. I believe this is an improvement full-stop. If you want it to work even better, add alerting when the certificate gets too old and monitoring/observability on the renewal processes. That gives you multiple layers of assurance that you probably wanted to have anyways.
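The "alerting when the certificate gets too old" part is cheap to build; here's a sketch using a throwaway self-signed cert in place of the real served one:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# Stand-in for the real certificate: self-signed, valid for 30 days
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
    -days 30 -subj "/CN=renewal.test" 2>/dev/null

# -checkend N exits 0 if the cert will still be valid N seconds from now
if openssl x509 -in cert.pem -noout -checkend $((14 * 24 * 3600)); then
    echo "renewal OK: more than 14 days of validity left"
else
    echo "ALERT: certificate expires within 14 days"
fi
```

Against a live site you'd feed `openssl x509` the output of `openssl s_client -connect yoursite:443` instead of a local file, and have cron mail you on the alert branch.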
Finally, it seems like the importance of encrypting all Internet traffic is just missing from the calculus presented here; that's just silly. I'm not going to go into it. It isn't imperative that literally every website is always encrypted all of the time, but for a multitude of reasons it is ideal if 99% of them are 99% of the time. Let's Encrypt might allow for a MitM if you can pass HTTP-01 or DNS-01 momentarily, but you know what's even easier? Just being literally anywhere in the path of someone's HTTP connection and being able to perform a MitM without having compromised anything about the CA system or the website itself. Even if we allow for some sites to sit back on HTTP, it matters that 99% of the Internet is on HTTPS because it makes MitM attacks like this highly unattractive. This is good when you're on untrusted or potentially adversarial networks... Which is increasingly many of them.
The other thing missing here is just how clever the CA system has gotten. Mozilla and Google have together made this system work surprisingly well despite its flaws. The CT system makes issuing bad certificates very unattractive, as Google and Mozilla can fiercely enforce the rules, and CT makes it nearly impossible to hide when you go against them. With CT, CAA records, and other tools available, you can at least know with damn near certainty if someone did exploit the CA system or your infrastructure and pull certificates for your properties. With these improvements, relying on the CA system doesn't feel nearly as ugly.
And also, you don't necessarily need to use LE. I think LE is the most competent of the ACME providers, but many paid services provide ACME support, and ZeroSSL provides another free ACME service.
Shorter lived certificates also have other benefits that are not mentioned. For example, if certificates can last 5 entire years, a revoked certificate also has to be able to last that long. This makes CRLs pretty much untenable, and forces something like OCSP, which is bad for privacy. Shorter certificate lifespans were a big part of how Firefox was able to leave OCSP behind in favor of a more advanced version of the CRL scheme, a solid win for both privacy and TLS latency.
All in all, the juice is clearly worth the squeeze.
A MITM attack against your renewal does not expose your private key. I don’t think that causes the harm the article suggests.
In any case if someone can become the thing you're trying to validate, be it access to an IP address or some DNS zone, you're kinda out-of-luck anyways. Though WebPKI has CT, which will give you some insight into it, unlike everything else out there.
Issued to: michael.orlitzky.com
Issued by:
- Common Name (CN) E7
- Organisation (O) Let's Encrypt
By the way, what is the alternative to Let's Encrypt nowadays for a humble blog creator?

It's one of the key points the author takes issue with. That PKI is not MITM-resistant enough, in the ways they dream of. That they need to monitor the CT logs, and that that doesn't amount to much.
You don't need TLS for your blog, though. Browsers will still connect to port 80 if you don't enable HTTPS.
HTTPS does three interrelated things:
Encryption - the data cannot be read by an intermediary, which protects your readers' privacy. You don't want people to know what pages you read on BigBank.com or EmbarrassingFetish.com.
Tamper Proofing - the data cannot be changed by an intermediary, which protects your readers (and your server) from someone messing with the data, say substituting one bank account number for another when setting up a payment, etc.
Site Authentication - ensures that the browser is connected to the server it says it is, which also prevents proxying. Without this an intermediary can impersonate any site.
Before the big push for encrypting everything it was not uncommon to hear of ISPs inspecting all traffic to sell to advertisers, or even injecting ads directly into pages. HTTPS makes this much more difficult.
I try to avoid them because they allow sketchy ISPs to inject ads and other weirdness into my browser, but normal browsers will still accept HTTP by default.
If you don't want people to know you're visiting EmbarrassingFetish.com, EmbarrassingFetish.com also needs to implement ECH (eSNI's replacement) and your browser must have it enabled, otherwise anyone on the line can still sniff out what domain you're connecting to.
I don't think site authentication is practical, though. For some use cases it works (i.e. validating the origin before firing off a request to a U2F/FIDO2 authenticator), but for normal users, mybank.com and securemybank.com may as well be equivalent (and some shitty important services actually use fake sounding domains like that, like PayPal for instance). Unless you remember the country and state and town your bank is registered in, even EV certificates can't help you because there can be multiple companies with the name Apple Inc. that all deserve a certificate for their website.
More seriously, you are not wrong. Site authentication is still a problem, and actually the weakest part of HTTPS, but it is also more of a people problem than a technical one. Nothing stops somebody from registering MyB4nk.com, but at least HTTPS stops crooks from spoofing MyBank.com exactly.
The best attack surfaces always do. If I'm a smart attacker, why would I impair your experience (at least, until I get what I want)? It's better to give you a false sense of security. There are, of course, dumber attacks that will show obvious signs. While many people do fall prey to such attacks from lapses in, or impairment to, their judgment, the smarter attacks hide themselves better.
The classical model of web security based around "important" sites and "sensitive" actions has been insufficient for decades. It was certainly wrong by the time the first coffee shop/airport/hotel wifi was created; by the time the first colocation provider/public cloud was created; by the time every visitor/student/employee of any library/university/company was given open Internet access; etc.
To connect to a website on the Internet, you must traverse a series of networks that neither you nor the website control. If the traffic is not tamper-proof, no matter how "unimportant" it may seem, it presents the opportunity for manipulation. All it takes is one of the nodes in the path to be compromised.
Scripts can be injected--even where none already exist; images can be modified--you see a harmless cat picture, the JPEG library gets a zero-day exploit; links can be added and manipulated--taking you to other, worse sites with more to gain by fooling you.
None of this is targeted at you or the website per se. It's targeted at the network traffic. You're just the victim.
It also ignores one really important fact: these pipes are not perfect; they do introduce errors into the stream. To ensure integrity we would still need to checksum everything, and in a way that no eager router can "fix".
We want our bank statements to be bit-perfect, our family pictures not to be corrupted, so on and on.
So even if someone handwaves away all the reasons why we need encryption everywhere (which is insane), we would still need something very similar to TLS and CAs being used. Previous TLS versions have even had "eNULL" ciphersuites.
Precisely, without some magic handwaving there aren't any reasons.
eNULL was (or would be) also kinda useful if one wanted to debug something without turning off TLS completely. But that's not worth the complexity of keeping it around.
DNS requests leak this information.
> Tamper Proofing
> Site Authentication
There are _many_ sites where this is not important. I want HTTPS for my bank, but I couldn't care less if someone wants to spend the time and effort to intercept and change pages from a blog I read.
I do not understand why so many people think having, say, zero-day exploits served to them is not a problem.
The blog is not the target; the unsecured connection is.
Approximately nobody is taking the time to hand craft a specific modification of some random blog. They develop and use tools that manipulate any packet streams which allow tampering, without the slightest concern for how (un-)important the source of those packets is.
* Encryption is the first thing HTTPS does and, I'd argue, the one that actually matters the most. It prevents your ISP or other middle parties from snooping on or modifying what packets end up being shown to the end user. This is something that fundamentally doesn't require a CA to work. A self-signed certificate is just as secure as one issued by a certificate authority on the matter of encryption; you just run an openssl command and you have a certificate, no CA needed (and although a CA could still be useful for, e.g., trusting updated certificates in the same chain, there's little reason from a security perspective to demand this be done through a third party).
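A minimal sketch of that openssl one-liner (the file names and the CN are placeholders for illustration):

```shell
# Generate a self-signed certificate with no CA involved -- illustrative only.
# key.pem, cert.pem, and blog.example are placeholder names.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout key.pem -out cert.pem -days 365 \
  -subj "/CN=blog.example"

# Confirm what was produced: subject == issuer, i.e. self-signed.
openssl x509 -in cert.pem -noout -subject -issuer -dates
```

Browsers will of course warn on it, but the resulting TLS session is encrypted with exactly the same ciphers a CA-issued certificate would negotiate.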
* The second one is identification. Basically, the certificate is meant to give the idea that the site you're visiting is trusted and verified to belong to somebody. This is what CAs provided... except in practice, CA identification guarantees basically don't exist anymore. Finding the entity a certificate is issued to is hard in modern browsers: ever since a security researcher proved that it's relatively trivial to do a name collision attack, browser developers (aka Chrome and Mozilla) hide it behind click-through windows and no longer show it by default. Since browsers mandate HTTPS for as many APIs as they can get away with, everyone including garden-variety scammers just gets an HTTPS certificate, which utterly defeats the entire purpose. CAs are essentially sitting in the middle, and unless a third party suddenly demands you get an OV/EV certificate, the argument not to just use the CA that gives literally anyone who asks a certificate after the barest minimum effort to prove they own a domain is pretty questionable. Your bank might use an OV/EV certificate, but your average person looking at the bank website will not see any visual difference between that and a scam site. Both got perfectly legitimate certificates; one just got theirs from LetsEncrypt instead, where they had to give no details for the certificate. Only nerds look at the difference when visiting sites, and more people than nerds use banks.
Since identification is utterly dead, the entire CA structure feels like it gives little security to a modern browser as opposed to just going with a TOFU scheme like we do for SSH. Functionally, a CA run by a sysadmin has the exact same guarantee for encryption purposes as a CA run by LetsEncrypt on the open internet, except LE gets to be in browser and OS root programs. They might as well have the same security standards once you bring in CAA records.
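An SSH-style TOFU scheme can be sketched with a pinned fingerprint file -- a sketch only, not a hardened implementation; cert.pem and pins.txt are placeholder names:

```shell
# Trust-on-first-use sketch: pin the cert's SHA-256 fingerprint the first
# time we see it; on later runs, compare against the stored pin.
fp="$(openssl x509 -in cert.pem -noout -fingerprint -sha256)"
if grep -qxF "$fp" pins.txt 2>/dev/null; then
  echo "fingerprint matches stored pin"
else
  echo "new or changed certificate: trusting on first use"
  echo "$fp" >> pins.txt
fi
```

This is essentially what SSH's known_hosts does: no third party, just a warning when the key changes.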
Final note: there's something backwards about how a plain HTTP connection just gets a small label in the browser to complain about it, while an HTTPS certificate that's a single minute out of date leads to giant full-screen red pages that you have to click through. For consistency, the HTTP page should be getting the same scare pages from an encryption perspective, but it doesn't.
I agree with much of this article. IMHO certificates signed by a certificate authority via certbot are only marginally more secure than a self-signed certificate. Basically you prove your domain is yours with a certificate ... by proving your domain is yours to Let's Encrypt via a DNS check. That sounds a bit recursive. At this point there is not a lot being checked or verified by signing authorities.
IMHO the current focus on shortening certificate validity periods just highlights how inadequate certificates are and helps exactly no one stay safe. This is 100% certified a website. With a domain. Owned by somebody random on the internet. That's all the certificate guarantees.
Any scammer learned years ago how to get certificates for their scam domains. Short of blocking those domains faster than they pop up (good luck), there's no way to derive any more meaning from those certificates than "this is a website".
It would help for there to be more authorities, and also longer expiry periods. I don't need this busywork in my life, whether it's worrying about automating and monitoring, or just paying off some gatekeeper for a meaningless check plus bureaucracy. Longer expiry enables stricter, more expensive checks to happen, and browsers should be checking specific certificates against blacklists. Rotating certificates frequently makes both of those things less practical and devalues the whole notion of a certificate. Any scammer will just use the same services to get the same kind of certificates.
Also, the reason we can't rely on the DNS yet is that there is still a lot of legacy software relying on insecure ways to talk to the DNS. The DNS predating the whole notion of certificates means the biggest risk is a large amount of legacy, insecure DNS infrastructure that is easy to spoof and can't be trusted. Anyone see a problem with that, given how certificates are issued? Otherwise, we could just stick our public keys in there and self-sign our own certificates. But secure DNS is a prerequisite.
* domain has DNSSEC
* domain has CAA records only allowing the DNS challenge and disallowing the insecure HTTP challenge
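As a sketch, the zone entries for that policy might look like this (example.com is a placeholder; the validationmethods parameter comes from RFC 8657 and is honored by Let's Encrypt):

```
; in a DNSSEC-signed zone
example.com.  3600  IN  CAA  0  issue "letsencrypt.org; validationmethods=dns-01"
```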
but if we rely on DNSSEC we can just use DANE/TLSA and don't need the mess of CA/PKI
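For reference, a DANE-EE TLSA record (usage 3, selector 1 = SubjectPublicKeyInfo, matching type 1 = SHA-256) can be derived from a certificate like this -- a sketch; cert.pem is a placeholder:

```shell
# Hash the certificate's public key (SPKI) for a "3 1 1" TLSA record.
openssl x509 -in cert.pem -noout -pubkey \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256 -hex
# Published (illustratively) as:
#   _443._tcp.example.com. IN TLSA 3 1 1 <hex digest from above>
```

With DNSSEC-validated TLSA records, the client checks the served certificate against this digest instead of a CA signature.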
DNSSEC is PKI. We don't want to rely on it because it's significantly worse than WebPKI.
So I’ve always been fond of it and never really thought twice about it. While it’s rare for companies to support a shared resource together, this was a situation where it made sense.
But this is a good reminder to be wary of even the most benevolent looking tools and processes.
Gimme self-signed certificates please. With the ability to verify that the certificate was signed by whoever controls the domain I'm accessing. Abolish all certificate authorities. That's all I ask.
For one thing the verification doesn't just make a single http request, it makes several from many different nodes. There is a risk that your hosting provider MitMs the verification, but you need some level of trust in your hosting provider anyway, and in some cases that is actually a feature, as it allows your hosting provider to manage the certificates for you.
And that is one way that traditional CAs verified domain ownership for DV certs.
Is it perfect? Absolutely not. Is it better than nothing? Absolutely.
I do wish that DANE, or something similar had caught on.
Also, if you trust Let's Encrypt to be your CA it seems very strange to consider certificates provided by them as "untrusted". Also, certbot, or many of the other options don't necessarily need to be run as root. And many webservers support getting acme certs themselves. Also, there is nothing stopping you from verifying the certs are valid before using them.
Also, with a short expiration time, automation is basically required, which means that you set up the automation, and some monitoring that the renewal happens correctly, and then just let it go. And your renewal process is continually tested. With manual renewal, you have to remember to renew it, and remember how to renew it, a long time after the last time you did it. It is much more likely that you forget.
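That monitoring can be as small as a cron job around openssl's -checkend flag -- a sketch; cert.pem and the 30-day window are placeholders:

```shell
# -checkend exits 0 if the cert will still be valid after the given number
# of seconds, nonzero if it expires within that window.
if openssl x509 -in cert.pem -noout -checkend "$((30*24*3600))"; then
  echo "certificate OK for at least 30 more days"
else
  echo "certificate expires within 30 days: check renewal automation" >&2
fi
```

Wire the failure branch into whatever alerting you already use, and a silently broken renewal job gets caught well before the expiry cliff.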
I somewhat agree with the premise: it's not great that the web is controlled by Google, beyond just TLS certs. Something that has changed since this was written is precisely that you have alternatives like ZeroSSL.
Saying that letsencrypt doesn't bring any security is plain wrong though. The OWASP top ten doesn't list certificate theft or chain-of-trust MITM attacks, but it does have a category for cryptographic failures. My hotel has full control of the wifi, but it hardly has an opportunity to MITM my chain of trust. Same goes for my ISP. When you have a cert corresponding to your DNS record, it at least shows that you have some control over the infra behind that record.
dijit•3h ago
It speaks to the problem of digital decay. We can still pull up a plain HTTP site from 1995, but a TLS site from five years ago is now often broken or flagged as "insecure" due to aggressive deprecation cycles. The internet is becoming less resilient.
And this has real, painful operational consequences. For sysadmins, this is making iDRAC/iLO annoying again.
(for those who don't know what iDRAC/iLO are: they're the out-of-band management controllers that let you access a server's console (KVM) even when the OS is toast. The shift from requiring crappy, insecure Java Web Start (JWS) to HTML5 was a massive win for security and usability - old-school sysadmins might remember keeping some crappy insecure browser around (maybe on a bastion host) to interact with these things because they wouldn't load on modern browsers after 6mo)
Now, the SSL/TLS push is undoing that. Since the firmware on these embedded controllers can't keep pace with Chrome's release schedule, the controllers' older, functional certificates are rejected. The practical outcome is that we are forced to maintain an old, insecure browser installation just to access critical server hardware again.
We traded one form of operational insecurity (Java's runtime) for another (maintaining a stale browser) all because a universal security policy fails to account for specialised, slow-to-update infrastructure... I can already hear the thundering herd approaching me: "BUT YOU NEED FIRMWARE UPDATES" or "YOU NEED TO DEPRECATE YOUR FIRMWARES IF NOT SUPPORTED".. completely tone-deaf to the environments, objectives and realities where these things operate.
notatoad•3h ago
this is just a flat-out lie. yes, modern browsers will still load websites over http. come on.
dijit•3h ago
Direct sites will load with a "Not Secure" warning; embedded resources on the page might not load without flipping chrome://settings/content/insecureContent.
And of course: you won't manage to be visible to Google itself, as you'll be down-ranked for not having TLS.
If you happen to have a .dev domain: you're on the HSTS preload list, so your site literally won't load over plain HTTP.
dragonwriter•2h ago
You’ll be visible to Google (otherwise there would be nothing to downrank), you will just be less visible on Google.
lanyard-textile•2h ago
And you, the owner, will likely be blamed by the user.
lanyard-textile•1h ago
If you can call them a first world ISP ;)
https://arstechnica.com/tech-policy/2014/09/why-comcasts-jav...