I have worked with PhDs who can’t even manage ssh keys.
Not a good point.
I've experienced PhDs who can't (even, some may add) write a shell script. Maybe they just hate the syntax, or specifically the $ symbol, or both. But they did a fine job otherwise and have successfully replaced me, a person who can write a bit of shell script :(
Even 2FA solutions like Google Authenticator are simple enough that you just need to scan a QR code, but very few people use them, and even fewer actually bother to print/save the backup emergency codes for use if they lose access to their auth device.
People can (sometimes) handle passwords, and an emailed reset link if they forget.
TLS PKI is designed to do one thing and one thing only: bind a keypair to a domain name. This is great for authenticating servers and terrible for authenticating clients. The question that a server wants to ask of a client is not "is this server authorized to serve this domain name", it's "is this a key I authorized to access this resource". The latter question is not just "inappropriate for the TLS PKI", it's inappropriate for any public PKI[0]. The only appropriate PKI for client authentication is for each server to run its own private client PKI.
If you insist on using client certs signed by the TLS PKI, then everyone who wants to connect to your API needs to buy a cert from a third party that provides no additional security for you or them, compared to the moral equivalent of ~/.ssh/authorized_keys. If you have a particularly complicated server setup, then rolling your own client root would at least give you centralized issuance and revocation control.
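For concreteness, here is roughly what "rolling your own client root" can look like with plain openssl. This is a sketch, not a hardened setup; every name, path, and subject below is made up:

```shell
# Sketch of a private client PKI; all names are illustrative.
# 1. The server operator creates a root CA -- the "authorized_keys" of this scheme.
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes \
  -keyout ca.key -out ca.crt -days 3650 -subj "/CN=My API Client CA"
# 2. A client generates its own key pair and a signing request.
openssl req -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes \
  -keyout client.key -out client.csr -subj "/CN=client-1"
# 3. The operator signs it -- this is where centralized issuance control comes from.
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out client.crt -days 365
# 4. The server only ever needs ca.crt to validate presented client certs.
openssl verify -CAfile ca.crt client.crt
```

Revocation is the part a private root makes tractable: you control the only CA, so a CRL or a simple allowlist of serial numbers on the server is enough.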
Ok, let's say you set up your API to use client certs. You've just massively complicated the most common use case for your API: pinging it from a web browser running your JavaScript. There is no API in the web standard for a random website to enroll a client cert in your browser, and you can't specify a client cert in HTTP fetch or anything. Client certs are instead managed with the same terrible UX as the root cert store, which is deliberately obfuscated for fairly obvious "keeping the user from hurting themselves" reasons. So now you have to give your users a cert file and guide them through a bunch of nonsense to install the cert.
Furthermore, if any other website does this, now all your users have to pick which identity they want to use whenever they go to a website that wants client certs. They aren't bound to specific domain names like, say, password manager items can be. The only way you could avoid this... is by using a public client cert PKI, which means now all your users need to pay $$$ to a third party just to log in to your website, which is a terrible idea.
Tokens, passwords, and passkeys avoid all of this nonsense and give you exactly what you want: client authentication, nothing more or less. TLS client certs are the sort of thing an engineer cooks up because it sounds nice and symmetrical. Oh, servers have certs, why can't clients have certs? But in practice certs solve the wrong problem and carry too much baggage for clients.
[0] Public public key infrastructure
You also don't need any special certificates. The authentication certificates you hand out to API clients can be self-signed if that's what you want, but you probably want an internal CA of some kind to hand out/revoke certificates the same way you have some kind of API key storage.
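A self-signed variant is even simpler: the server pins the certificate (or its fingerprint) directly, much like an authorized_keys entry. A sketch, with the CN and lifetime chosen arbitrarily:

```shell
# Illustrative self-signed client cert; no CA involved at all.
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes \
  -keyout client.key -out client.crt -days 365 -subj "/CN=api-client-42"
# The server stores/pins this fingerprint instead of trusting any issuer.
openssl x509 -in client.crt -noout -fingerprint -sha256
```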
You're right that the browser UI is pretty terrible. Especially on Firefox, I should add, which decides to maintain its own key store. On all other platforms you just open the certificate and hit next a few times and you should be good.
As for picking the identity to use, there are easy ways in the TLS protocol to signal what certificates qualify, and there's no way you're going to have multiple certificates signed by the same Acme Corp Ltd. with private key 0xffaabb... Websites that aren't set up correctly will prompt users for a certificate to use, but that's the other websites' fault, as well as the browser's to be honest.
Client certificates work fine. They work so well, that they've been reinvented and stuffed into JWT, but now in weird JSON. Various Kubernetes overlay systems use auto-provisioned client certificates to ensure confidentiality and authentication between pods (the cloud world calls it mTLS though).
First of all, it gives the server more certainty that the other side of the TLS connection is the authenticated user, and not a MitM[1].
Secondly, if the message is intercepted, the secret key isn't compromised, and the secret key can be kept more secure than an API key that has to be sent with every request.
It also allows the signing party to be different from the verifying party. To be fair JWTs using asymmetric crypto also have this property.
And client certs, often using a private CA, are frequently used for server-to-server communication.
You are right about browsers not having good support for client certs, but there isn't any fundamental reason that couldn't be better.
As far as public key infrastructure, I agree that for client certs it isn't quite as useful as it is for server certs, but there are situations where it could be useful. Suppose a server for example.org makes a request to example.com. With a client cert signed using PKI, example.com can know that the request actually came from example.org, without having any previous establishment of a trust relationship between the two parties. That is a relatively niche use case, but I could see it being useful in federated protocols.
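To make the example.org case concrete: the hostname binding in a cert is mechanically checkable. A rough sketch using a self-signed cert as its own trust anchor for brevity (names illustrative; a real federated setup would use a shared public root instead):

```shell
# example.org mints a cert whose SAN names its own domain.
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes \
  -keyout org.key -out org.crt -days 365 -subj "/CN=example.org" \
  -addext "subjectAltName=DNS:example.org"
# example.com, trusting the issuing root (here: the cert itself),
# can check both the signature and the claimed hostname.
openssl verify -CAfile org.crt -verify_hostname example.org org.crt
```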
[1]: or at least if it is a MitM, it has access to the private key for the cert.
The certificate (or rather the privkey) is -- like the token -- a secret that only the client and the server should know.
Certificate-based authentication and token authentication protect equally against MITM. And MITM attacks are equally possible whether cert or token auth is being done.
The only place where cert-based client auth shines is that you don't have to move a new token over the wire initially.
But with a client cert, intercepting the request gets you nothing. You don't have the secret key, so there isn't any way for you to complete the TLS handshake with the server.
> The certificate (or rather the privkey) is -- like the token -- a secret that only the client and the server should know.
No, the server doesn't need to know the privkey at all, it only needs to know the public key of the CA that signed the cert.
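This is easy to demonstrate: build a toy CA, sign a client cert, destroy every private key, and verification still succeeds. A sketch with made-up names:

```shell
# Toy CA and client cert (illustrative names).
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes \
  -keyout toyca.key -out toyca.crt -days 30 -subj "/CN=Toy CA"
openssl req -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes \
  -keyout tclient.key -out tclient.csr -subj "/CN=toy-client"
openssl x509 -req -in tclient.csr -CA toyca.crt -CAkey toyca.key \
  -CAcreateserial -out tclient.crt -days 30
# Throw away every private key; the verifying side never needed them.
rm toyca.key tclient.key
openssl verify -CAfile toyca.crt tclient.crt
```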
Public CA TLS PKI is nice because it gives a root-of-trust for machines to start with using a DNS challenge. Then using it directly with mTLS for authentication could get you that token for operations.
I suppose the main thing that is going away with this change is a somewhat easy and secure way to distribute identity/secrets for machine-based authentication. People-based authentication mechanisms are well-built and fairly easy to use.
Renewal is the most important issue. Certificate stores often suck and for tokens there are various existing protocols for re-issuing expiring ones. For certificates, you're stuck doing your own thing.
For local applications, UI/UX of the system supposedly in charge of certs is often absolutely terrible. I've had the displeasure of working with Java key stores and I'd rather rewrite the auth mechanism than deal with that bullshit again.
The use case where they would perform excellently (centrally managed corporate networks) is also the place where you will find terrible middleboxes that will kill any protocol they don't understand.
TLS client certificates could've been what passkeys are today, and they could've been what OAuth2 tokens have become, but the terrible development experience killed them off before they could bloom. I mostly blame web browsers, who didn't bother improving their UI for certificates after the day they first implemented them. Hacking OAuth2 on top of HTTPS calls is pretty easy compared to figuring out how to refresh a user certificate.
You can't just add a new CA, you have to rebuild the entire keystore. And then you need to do it again every time you update the JVM, or risk missing updates to the default keystore.
For some reason the keystore is required to be protected by a password, even if it doesn't contain any secret keys.
The documentation for the tools that manage keystores is incomplete and sometimes out of date.
The whole thing is a relic from another time and could really use some modernization.
Unfortunately, most example code for SSL socket factories just disable TLS validation entirely. So yes, your custom CA now works reliably, but unfortunately so do self-signed certificates if you're not careful.
Luckily, modern Java doesn't need keystores anymore. Unfortunately, half the world seems stuck on Java 8.
I see this happen in large enterprises far more often than it should, the kind of business where disabling SSL is a baaaad idea given the kinds of attacks they're apt to get.
It's pretty easy to set up in Apache and Nginx at least.
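In nginx, for instance, it comes down to a couple of directives. A config fragment sketch; the paths are assumptions, and ssl_client_certificate names the trusted client-CA bundle:

```nginx
server {
    listen 443 ssl;
    server_name api.example.com;

    ssl_certificate         /etc/nginx/tls/server.crt;
    ssl_certificate_key     /etc/nginx/tls/server.key;

    # CA(s) whose signed client certs are accepted.
    ssl_client_certificate  /etc/nginx/tls/client-ca.crt;
    ssl_verify_client       on;   # or "optional" to decide per-location
}
```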
For forward proxies, the only option is to either accept the connection unaltered or to drop the connection entirely.
I suppose a VPN is really the better answer here, but that's a pain if I want to give anyone else access and is less granular.
(I quite like the combo of app-level OAuth plus mTLS service mesh for backend comms).
> I tend to be pretty chill about little islands of mTLS, like "all the Consul clients need a cert". That, to me, solves a practical problem. I am way less chill about attempts to create coherent PKI namespaces for all the components of an app, tied together with mTLS. People should use Macaroons!

along with some earlier bits about mTLS not affording much security if you have secure requests.
I've been thinking about machine identity+authorization and thus far have been thinking about it in a way where a machine would have some firm identity (like an X.509 cert) and use it to authenticate (like using mTLS) to get an authorization token.
Maybe I am putting words in your mouth, but do you sort of suggest skipping the identity part and just leaning into a full capability token? In this case macaroon, that acts as identity and then acts as authorization when attenuated and used?
They needed a cool name. I present: Passkeys
What I really wanted to do, and have proof of concept'd with only slightly more time, is per pod certificates, identifying a service account. It looks like this is progressing in mainline - https://github.com/kubernetes/enhancements/issues/4317 - albeit very slowly.
After this change I'd need a separate certificate for the service to use for mTLS.
Makes sense, even if inconvenient. Client auth is something LE can't provide as much validation of, compared to server certs.
Based on this: https://googlechrome.github.io/chromerootprogram/#322-pki-hi...
> To align all PKI hierarchies included in the Chrome Root Store on the principle of serving only TLS server authentication use cases, the Chrome Root Program will "phase-out" multi-purpose roots from the Chrome Root Store.
Rationale lacking.
And this is a gratuitous, user-hostile change with dubious justification.
Each service that accepts client certs can decide whether or not to accept a given root certificate. And Chrome is not even on the server side, so I don't understand why they have a say in what should be acceptable by a server (except for the monopoly they have on the client side, which lets them refuse certs with this flag).
So now you will have to pay legacy cert authorities again to be able to use TLS client certs. Put that in perspective with the recent changes, pushed by the same actors, that cut certificate validity down to short periods, less than a year, so you constantly have to go back to your official root cert authority.
All of that without a valid widespread problem that they would be trying to fix.
In addition, Let's Encrypt could continue to support a dedicated profile for that, but they won't; they're discontinuing something they already created to fix the issue. So TLS client certs will become a "pro", limited feature...
What’s the assertion being made with that client cert?
For a simple example, suppose you want to use a client certificate to control who can push webhooks to your server's API.
The reason I have difficulty with public-CA-provided client certs is that client certs aren't primarily being used as authentication of a specific user (and maybe authorization), while server certs are practically only being used to authenticate a connection to an endpoint.
The issue here being that delegating client cert auth to a third party (Public CAs) means that you’re pushing the problem of monitoring the CA ecosystem to that client. A client that is limited and restricted by the rules of public CAs.
But I guess this isn't that different from OAuth with a third-party IdP.
egberts1•8mo ago
There goes my private WireGuard client-side TLS certificate for tunneled SSH.
Just need to bashify all the certificate creation ... again.
egberts1•8mo ago
* Creating Root CA node
* Creating Intermediate CA node
* Creating 2nd Intermediate CA node
* Renew Root CA node
* Renew Intermediate CA node
* Encrypt using ChaCha20-Poly1305
* Encrypt using Elliptic Curve
Problem is OpenSSL 3.1 recently changed its CLI syntax, so that needs fixing.
Used JetBrains with the Bash plugin for development.
https://github.com/egberts/tls-ca-manage
egberts1•8mo ago
It also ensures that the source IP is not leaked.
egberts1•8mo ago
It's all about hiding what protocol is being used.
JCattheATM•8mo ago
Or maybe I can just sense bs after a career in the field :-)
> It's all about hiding what protocol is being used.
And what, exactly, do you think the advantage is from hiding SSH traffic this way?