The point here is that this article hands developers plenty of rope to hang themselves with the JOSE standards (JWT/JWK/JWS), and that is a sure way to implement them incorrectly and end up with serious security issues.
PASETO is a much better alternative to work with: https://paseto.io with none of the downsides of the JOSE standard.
PASETO is the “mostly fixed” version of JWTs, but if you’re looking for something with more features, biscuits are quite interesting.
Just do it right (at this point the pitfalls here are widely documented), comply with the widely used and commonly supported standards, and follow the principle of least surprise. That matters in a world where things need to be cross-integrated with each other, and where JWTs, JOSE, and associated standards like OpenID Connect are used by world+dog in a way that is perfectly secure and 100% free of these issues.
Honestly, it's not that hard.
The paradox with Paseto is that if you are smart enough to know what problem it fixes, you shouldn't be having that problem in the first place: you're smart enough to know that accepting "none" as an algorithm is a spectacularly bad idea, and you shouldn't need Paseto to fix it even if you somehow did. And of course you shouldn't be touching the security layer of your product at all if any of this confuses you.
But this scheme is flexible. You could also have the client send "requested" claims for the server to consider adding if allowed when getting a JWT.
You could also reverse-proxy client requests through your server, adding any claims the server allows.
In that case, the client can possess the JWK keypair and do its own signing.
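A sketch of that last variant, where the client holds its own JWK keypair and self-signs, using the jose npm library mentioned elsewhere in the thread (the claim names and audience are purely illustrative, and the "requested" claims are only suggestions for the server to grant or ignore):

    // Client side: generate a keypair, export the public JWK for
    // registration, and self-sign a JWT carrying "requested" claims.
    import { generateKeyPair, exportJWK, SignJWT } from "jose";

    const { publicKey, privateKey } = await generateKeyPair("ES256");
    const publicJwk = await exportJWK(publicKey); // register this with the API

    const token = await new SignJWT({
      requested: { tier: "free", scopes: ["read"] }, // server may grant or ignore
    })
      .setProtectedHeader({ alg: "ES256" })
      .setIssuedAt()
      .setExpirationTime("5m")
      .setAudience("https://api.example.com") // illustrative audience
      .sign(privateKey);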
    boolean isCornography = url.contains("corno") || url.contains("body") || url.contains("self");
    boolean isVirus = url.contains("virus") || competitorUrlRegex.matcher(url).matches();
None of this "BS" actually goes away with self-signed JWTs, right? Just replace mentions of "API Key" with public/private key and it's otherwise a similar process I think.
1. With self-signed JWTs, you could start consuming APIs with free tiers immediately, without first visiting a site and signing up. (I could see this pattern getting traction as it helps remove friction, especially if you want to be able to ask an LLM to use some API).
2. Compare this scheme to something like the Firebase SDK, where there's a separate server-side "admin" SDK. With self-signed JWTs, you just move privileged op invocations to claims (verification sketched after this list) – consuming the API is identical whether from the client or server.
3. The authority model is flexible. As long as the logical owner of the resource being accessed is the one signing JWTs, you're good. A database service I'm working on embeds playgrounds into the docs site that use client-generated JWKs to access client-owned DB instances.
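Since the verifying side isn't spelled out above, point 2 might look something like this on the server, again with jose; lookupRegisteredJwk is a hypothetical storage call:

    // Server side: look up the public JWK registered for this account
    // and verify the self-signed JWT against it.
    import { importJWK, jwtVerify, type JWK } from "jose";

    declare function lookupRegisteredJwk(accountId: string): Promise<JWK>; // hypothetical

    async function verifySelfSigned(token: string, accountId: string) {
      const jwk = await lookupRegisteredJwk(accountId);
      const key = await importJWK(jwk, "ES256");
      const { payload } = await jwtVerify(token, key, {
        audience: "https://api.example.com", // illustrative audience
      });
      // Grant only those requested claims this account is allowed.
      return payload;
    }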
I have yet to see a website that provides an API and doesn’t have a ToS that you have to agree to. Unless you control both parties, or you expose your service only to pre-vetted customers, there is no legal department that is going to allow this.
(Maybe my confusion here is that these JWTs are being described as self-signed, as if there’s a JWK PKI cabal out there, like the bad old days of the Web PKI. There isn’t one that I know of!)
and it’s easy to do keypair generation in the browser using SubtleCrypto. That API can even export keys in JWK format (exportKey("jwk")), so there’s no need for the jose module here. And the browser can “download” (really just save) the keypair using the Blob API. Keys need not leave the browser.
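A minimal sketch of that in-browser flow, all standard Web APIs (the curve and filename are just illustrative choices):

    // Generate a keypair in the browser; extractable so it can be saved.
    const keyPair = await crypto.subtle.generateKey(
      { name: "ECDSA", namedCurve: "P-256" },
      true,
      ["sign", "verify"],
    );

    // Export both halves in JWK format.
    const publicJwk = await crypto.subtle.exportKey("jwk", keyPair.publicKey);
    const privateJwk = await crypto.subtle.exportKey("jwk", keyPair.privateKey);

    // "Download" (really just save) the private key via the Blob API.
    const blob = new Blob([JSON.stringify(privateJwk)], { type: "application/json" });
    const a = document.createElement("a");
    a.href = URL.createObjectURL(blob);
    a.download = "private-key.jwk.json"; // illustrative filename
    a.click();
    // publicJwk is what gets registered with the API portal.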
An API developer portal could just offer a choice:

- generate a new keypair, or
- upload your existing public key.

Either way the public key gets registered for your account. The end. Easy.
When you "register" the public key with whatever the relying party is, you're also likely going to bind it to some form of identity, so you can't leak this private key to others, either. (And I'm curious, of course, how the relying party comes to trust the public key. That call would seem to require its own form of auth, though we can punt that same as it would be punted for an API key you might download.)
Could you describe how that would work? If two people have the same info, how on earth do you tell which is which?
The post is talking about simplifying things by eliminating all the back and forth. It’s not pretending to invent a secret-less auth system.
I'm not. "It’s truly wild to me what some of y’all will tolerate." What, exactly, are we tolerating that is solved by asymmetric key pairs?
> The post is talking about simplifying things by eliminating all the back and forth. It’s not pretending to invent a secret-less auth system.
Well, then, I'm lost. What back & forth was eliminated?
In one system, we download an API key. In this system, we upload a public key. In both, we have a transfer; the direction doesn't really matter. Someone has to generate some secret somewhere, and bind it to the identity, which is what I was saying above, and is apparently the wildness that I'm tolerating.
This is the pitch. But it seems like you fixated on the next part of the paragraph, where it talks about API keys in version control.
I’ll agree with you insofar as this isn’t a massive change - but I like the approach for being an incremental improvement, and for challenging the status quo.
In a way this reminds me a bit of SRP, which was an attempt to handle login without the server ever having your password. Which makes me think this is something to be integrated with password managers.
And for the public email providers, a service like Gravatar could exist to host them for you.
Wouldn't that be nice.
[Edit: there are allegations that the following is inaccurate. It probably checks out, though; I meant the browser constraints, not the domain-bound part, which seems solid.] Pity that Passkeys are so constrained in practice by browsers that using them pretty much requires you to trust the cloud providers absolutely with all your critical keys.
I wish there were something built into browsers that offered a scheme where your pubkey = your identity, but in short there are a lot of issues with that.

There's a proposal for cross-domain usage via Related Origins, but that scheme depends on the authority of the relying party, meaning you can't say "I'd like to be represented by the same keypair across this set of unrelated domains".
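If I remember the Related Origin Requests proposal correctly, the relying party opts in by hosting a well-known JSON file listing the other origins (the origins below are made up):

    {
      "origins": [
        "https://example.co.uk",
        "https://login.example.dev"
      ]
    }

served from https://example.com/.well-known/webauthn — which is exactly the RP-authority dependence described above: the relying party decides the set, not the user.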
> Pity that Passkeys are so constrained in practice by browsers, that using them pretty much requires you trust the cloud providers absolutely with all your critical keys.
Passkeys are not constrained such that you have to trust cloud providers or anyone else with all your critical keys. The key is resident in whatever software or hardware you want to use, and anyone can create passkey software or hardware that will work with Chrome etc. I'm talking about (and I'm pretty sure the OP was referring to) the other side of WebAuthn: where the credentials surfaced to JavaScript via WebAuthn actually come from, and how the browser relays requests to sign a challenge.
Not that an API couldn't support both API keys and JWT-based authentication, but one is a very established and well understood pattern and one is not. Lowest-common-denominator API designs are hard to shake.
50% of the support issues were because people could not properly sign requests, and it forced me to learn how to make HTTP requests in all sorts of crap just to help support them.
Just generate the JWT using, e.g. https://github.com/mike-engel/jwt-cli ? It’s different, and a little harder the first time, but not any kind of ongoing burden.
You can even get Postman to generate them for you: https://learning.postman.com/docs/sending-requests/authoriza..., although I have not bothered with this personally.
That's interesting - why do it this way rather than including a "reusable" signed JWT with the request, like an API token? Why sign the whole request? What does that give you?
Also what made that API so nice? Was this a significant part of it?
Supposedly bearer tokens should be ephemeral, which means either short-lived (say single-digit minutes) or one-time use.
That was the way bearer tokens were meant to be used in the first place.
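For illustration (not from the post), minting an ephemeral token with jose might look like this:

    // Short-lived via `exp`; one-time-use via a unique `jti` the
    // server can remember and reject on replay.
    import { SignJWT, generateKeyPair } from "jose";

    const { privateKey } = await generateKeyPair("ES256");
    const ephemeral = await new SignJWT({ scope: "read" }) // illustrative claim
      .setProtectedHeader({ alg: "ES256" })
      .setIssuedAt()
      .setExpirationTime("5m")       // single-digit minutes
      .setJti(crypto.randomUUID())   // unique ID for first-use-wins checks
      .sign(privateKey);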
> What does that give you?
Security.
The desirable properties for tokens are that they have some means of verifying their integrity, that they are sent by the authorized party, and that they are consumed by the authorized recipient.
A "reusable" bearer JWT with a particular audience satisfies all three - as long as the channel and the software are properly protected from inspection/exfiltration. Native clients are often considered properly protected (even when they open themselves up to supply chain attacks by throwing unaudited third party libraries in); browser clients are a little less trustworthy due to web extensions and poor adoption of technologies like CSP.
A proof of possession JWT (or auxiliary mechanisms like DPoP) will also satisfy all three properties - as long as your client won't let its private key be exfiltrated.
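As a rough sketch of the DPoP flavor of this (the htm/htu claims and the dpop+jwt header type follow RFC 9449; everything else is illustrative):

    // Each request carries a fresh proof bound to the HTTP method and URL,
    // signed with the client's private key, so a stolen bearer token alone
    // isn't enough.
    import { SignJWT, exportJWK } from "jose";

    async function makeProof(method: string, url: string, keys: CryptoKeyPair) {
      return new SignJWT({ htm: method, htu: url })
        .setProtectedHeader({
          alg: "ES256",
          typ: "dpop+jwt",
          jwk: await exportJWK(keys.publicKey), // embed the public key
        })
        .setIssuedAt()
        .setJti(crypto.randomUUID())
        .sign(keys.privateKey);
    }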
It is when you can't have all three properties that you start looking at other risk mitigations, such as making a credential one time use (e.g. first-use-wins) when you can't trust it won't be known to attackers once sent, or limiting validity times under the assumption that the process of getting a new token is more secure.
Generally a short lifetime is part of one-time-use/first-use-wins, because that policy requires the target resource to be stateful. Persisting every token ever received would be costly and have a latency impact. Policy compliance is an issue as well - it is far easier to just allow those tokens to be used multiple times, and non-compliance will only be discovered through negative testing.
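To make the statefulness concrete, a toy first-use-wins check is little more than this (an in-memory map, which is exactly the persistence/latency cost described above; a real deployment would need shared state):

    // Remember seen `jti`s until their tokens expire; reject replays.
    const seenJtis = new Map<string, number>(); // jti -> exp (epoch seconds)

    function firstUseWins(jti: string, exp: number): boolean {
      const now = Date.now() / 1000;
      for (const [id, expiry] of seenJtis) {
        if (expiry < now) seenJtis.delete(id); // evict expired entries
      }
      if (seenJtis.has(jti)) return false; // replay, reject
      seenJtis.set(jti, exp);
      return true;
    }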
I haven't seen recommendations for single-digit-minute times for re-issuance of a multi-use bearer token though (such as for ongoing API access). Once you consider going below 10 minutes of validity, you really want to reevaluate whatever infrastructure requirements previously ruled out proof-of-possession (or whether your perceived level of risk aversion is accurately represented in your budget).
There's some degree of confusion in your comment. JWK is a standard for representing cryptographic keys; it's an acronym for JSON Web Key (a JWK Set, or JWKS, is a collection of them).
> JOSE is a pretty good library (...)
JOSE is a set of standards that form a framework to securely transfer claims.
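For reference, a public key as a JWK is just a small JSON object; the coordinates below are placeholders:

    {
      "kty": "EC",
      "crv": "P-256",
      "x": "<base64url x-coordinate>",
      "y": "<base64url y-coordinate>",
      "kid": "my-key-1"
    }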
It feels like that model handles key management, delegation, and revocation in a well-established way.
What am I missing here that makes this a better fit?
From a cursory read, the answer is "it doesn't".
The blogger puts up a strawman argument to complain about secret management and downloading SDKs, but ends up presenting as a tradeoff the need to manage public and private keys, client-side key generation, and services having to bolt ad-hoc signature verification onto each request.
This is already a very poor tradeoff, but to this we need to factor in the fact that this is a highly non-standard, ad-hoc auth mechanism.
I recall that OAuth1 had a token generation flow that was similar in the way clients could generate requests on the fly with nonces and client keys. It sucked.
https://docs.github.com/en/apps/creating-github-apps/authent...
The JWT website is also super useful https://www.jwt.io/