JWTs, on the other hand, can be used across domains: a JWT issued by your IdP on one domain can be trusted on another. The cryptographic signature verifies the integrity of the data.
Sessions are usually tied to a single backend/application server. It's hard to reuse session data across different apps.
JWTs on the other hand allow sharing session data across different app servers/microservices.
My Apache webby thingies quite happily dole out encrypted cookies:
https://httpd.apache.org/docs/2.4/mod/mod_session_crypto.htm...
Your notes on cross site issues are also described there.
JWTs are mutually shared secrets you can pass around, with knobs on - you can store loads of data in them. Cookies are more one-shot, one piece of data.
While it's true that you could avoid signing cookies, this isn't the default for any server library I'm aware of. If your library doesn't require a secret to use for signing, you should report it.
I'm also unaware of JWT libraries that default to "none" for the algorithm (some go against the spec and avoid it entirely), though it's possible to use JWTs insecurely.
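For what it's worth, the verifier-side defense is simple: pin the algorithm before trusting anything in the token. A minimal stdlib sketch of an HS256 verifier that rejects `"alg": "none"` (helper names are my own, not from any particular library):

```python
import base64
import hashlib
import hmac
import json

def b64url_encode(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(part: str) -> bytes:
    # Restore the padding that JWTs strip before decoding
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def sign_hs256(claims: dict, secret: bytes) -> str:
    header_b64 = b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload_b64 = b64url_encode(json.dumps(claims).encode())
    sig = hmac.new(secret, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256).digest()
    return f"{header_b64}.{payload_b64}.{b64url_encode(sig)}"

def verify_hs256(token: str, secret: bytes) -> dict:
    """Verify an HS256 JWT, rejecting every other algorithm (including 'none')."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    if header.get("alg") != "HS256":  # pin the algorithm: never trust the token's choice
        raise ValueError(f"unexpected alg: {header.get('alg')!r}")
    expected = hmac.new(secret, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):  # constant-time compare
        raise ValueError("bad signature")
    return json.loads(b64url_decode(payload_b64))
```

The key point is that the accepted algorithm is a parameter of the verifier, not something read from attacker-controlled input.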
I have a web app that I'm doing sysops for which ended up with both. The web devs insisted on JWT and cough "forgot" about the auth bearer bit in the header because their API didn't use it. I ended up amending and recompiling an Apache module for that but to be fair, they will support it in the next version so I can revert my changes. A few db lookups in the Apache proxy JWT module I'm using and you have your claims.
On the front of that lot you have Apache session cookies generated when auth succeeds against a PrivacyIDEA instance - ie MFA.
I suppose we have cookies for authN and JWT for authZ. Cookies could do both; apart from sessions I'm not too familiar, but it looks like claims would require multiple cookies where a JWT does it all in one.
This way I don’t have to worry about sharing the secret. It never leaves the other service.
Mmmm. No. You're supposed to use a public key to verify the tokens, not a private key. What library are you using that tolerates this sort of misuse?
We experimented once with trying to put permissions on a JWT (more complex than your popular scopes) but that makes them grow quickly. And we experimented with putting role information on JWTs but that results in re-centralization of logic.
Maybe conveying complex authorization info via a single object that gets passed around repeatedly is fundamentally a flawed idea, but if I had an identity standards wishlist that would be near the top.
Attempting to generalize it ends up in pain, suffering, and AWS IAM.
> Biscuit is an authorization token with decentralized verification, offline attenuation and strong security policy enforcement based on a logic language
We solved it by simply using bitmasks.
Say, you want to encode an access rule "allows reading from Calendar objects". The typical CRUD actions can be encoded with 4 bits. For example, all bits are zero => no access. The first bit is 1 => can create. The second bit is 1 => can read. Etc.
Then, say, if your system has 32 different types of objects, you can say that, "position 13 encodes for calendars". So you get 32*4 = 128 bits, i.e. just 16 bytes to encode information about CRUD rules for 32 different types of objects.
Sure it sounds complicated but if you move it to a library, you stop thinking about it.
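A minimal sketch of the scheme as a library, with illustrative constants (the CALENDAR position matches the example above; everything else is made up):

```python
# Four CRUD bits per object type, packed into one integer mask.
CREATE, READ, UPDATE, DELETE = 0b0001, 0b0010, 0b0100, 0b1000
CALENDAR = 13  # "position 13 encodes for calendars"

def grant(mask: int, obj_type: int, actions: int) -> int:
    """Set the given action bits for one object type inside the packed mask."""
    return mask | (actions << (obj_type * 4))

def allowed(mask: int, obj_type: int, action: int) -> bool:
    """Test a single action bit for an object type."""
    return bool((mask >> (obj_type * 4)) & action)

def to_wire(mask: int, num_types: int = 32) -> bytes:
    """32 object types x 4 bits = 128 bits = 16 bytes on the wire."""
    return mask.to_bytes(num_types * 4 // 8, "big")
```

Callers never touch the bit twiddling: `allowed(mask, CALENDAR, READ)` reads like the access rule it encodes.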
Putting all the repos you can access into a token is a request we get sometimes. It would be... Difficult.
My experience differs:
My private key is only 256 bits (32 bytes, which base64 encodes up to 44 characters, if you use padding). My typical passwords are 40-64 characters (unless stupid requirements force me to go shorter).
[1]: https://en.wikipedia.org/wiki/EdDSA#Ed25519
[2]: https://www.iana.org/assignments/jose/jose.xhtml#web-signatu...
JWTs are standardized (RFC 7519) and used outside the JS ecosystem. Not a vanity project
Though often overused and poorly understood where simpler and more secure methods would suffice.
I've seen some news site trackers send a JWT in a url/header to some 3rd party tracker. The content is no surprise: my full name and email address, which violates their own privacy policy.
Otherwise it's very open and handy, from inspecting a jwt token I can learn a lot about the architectural design of many sites.
Not sure if I’m remembering correctly but isn’t it recommended to not store any identifiable information in a JWT precisely because of this?
Unfortunately, it seems like 99% of the industry decides which token to use based on Medium articles, LLM responses or how many unmaintained packages that implement this thing they can find on NPM.
JWT is mostly used as an access token, but for the vast majority of use cases it's a bad fit. If you've got low traffic and no strict multi-region deployment requirements, random IDs are the best approach for you. They are extremely lean and easy to revoke. It's pretty secure: the only common vulnerabilities I can think of with this approach are session fixation[1] and timing attacks[2]. Both attacks are preventable if you take just a few simple precautions:
1. Always generate 32-byte session IDs using a cryptographically secure random number generator on authentication. (Never re-use existing session IDs for new logins)
2. Either use a cryptographic hash (e.g. SHA-256 or Blake2b) of the session ID as the database field used when querying sessions, or make sure that the session ID field is indexed with a hash-based index (B-trees are susceptible to timing attacks).
In cases where you really cannot use session IDs, your service is usually big enough and important enough to use custom Protobuf tokens or an even more special-purpose format like Macaroons. These formats can be far more compact and give you full control over designing for your needs. For instance, if you want flexible claims (with most of them standardized across your services), together with encryption, you can use a combination of Protobuf and a libsodium secret box envelope.
[1] https://owasp.org/www-community/attacks/Session_fixation
You can of course bind the session ID to the IP address, but that's extra effort you need to put in. In JWT land you can just put the IP address inside the payload and send requests with a non-matching IP to re-auth, regenerating the JWT for the new IP in case the customer is roaming networks.
Using a hash index instead of a btree isn't a 100% guaranteed solution because there may be craftable collisions (because e.g. postgres's index hash is not cryptographic) which cause fallback to linear comparison across the values inside the hash bucket:
https://dba.stackexchange.com/questions/285739/prevent-timin...
So hashing the ID before the DB lookup is better.
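Putting the two precautions together, a minimal sketch (function names are hypothetical; the digest, not the raw ID, is what the database stores and queries):

```python
import hashlib
import secrets

def new_session_id() -> str:
    # 32 bytes from a CSPRNG, per precaution 1 above; never reuse across logins
    return secrets.token_urlsafe(32)

def session_lookup_key(session_id: str) -> str:
    # Hash before the DB query: the indexed column holds SHA-256 digests, so
    # B-tree prefix comparisons operate on digests and can't leak the raw ID
    # via timing, regardless of the index type.
    return hashlib.sha256(session_id.encode()).hexdigest()
```

The client only ever sees the raw ID; a database dump only ever exposes digests, which is a nice side benefit.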
Most recently, I wanted to implement 2FA w/ TOTP. I figure I'll use 1 cookie for the session, and another cookie as a TOTP bypass. If the user doesn't have a 2FA bypass cookie, then they have to complete the 2FA challenge. Great, so user submits username & password like normal, if they pass but don't have the bypass cookie the server sends back a JWT with 10 minute expiry. They have to send back the JWT along with OTP to complete the login.
I figure this is OK, but not optimal. Worst case, hacker does not submit any username/password but attempts to forge the JWT along with OTP. User ID is in clear text in the JWT, but the key only exists on the server so it's very difficult to crack. Nevertheless, clients have unlimited attempts because JWT is stateless and they can keep extending the expiry or set it to far future as desired. Still, 256 bits, not likely they'll ever succeed, but I should probably be alerted to what's going on.
Alternative? Create a 2FA challenge key that's unique after every successful username/password combo. User submits challenge key along with OTP. Same 256 bit security, but unique for each login attempt instead of using global HMAC key. Also, now it's very easy to limit attempts to ~3 and I have a record of any such hacking attempt. Seems strictly better. Storage is not really a concern because worst case I can still prune all keys older than 10 minutes. Downside I guess is I still have to hit my DB, but it's a very efficient query and I can always move to a key-value store if it becomes a bottleneck.
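A rough sketch of that alternative, with an in-memory dict standing in for the database and all names hypothetical:

```python
import secrets
import time
from typing import Dict, Optional, Tuple

# Hypothetical in-memory store; in practice a DB table or KV store.
_challenges: Dict[str, Tuple[int, float, int]] = {}  # key -> (user_id, issued_at, failures)

CHALLENGE_TTL = 600  # seconds; matches the 10-minute expiry above
MAX_ATTEMPTS = 3

def issue_challenge(user_id: int) -> str:
    """Called after a successful username/password check: mint a unique 256-bit key."""
    key = secrets.token_urlsafe(32)
    _challenges[key] = (user_id, time.time(), 0)
    return key

def redeem_challenge(key: str, otp_ok: bool) -> Optional[int]:
    """Return the user_id on success, None on failure; keys are single-use."""
    entry = _challenges.get(key)
    if entry is None:
        return None
    user_id, issued_at, failures = entry
    if time.time() - issued_at > CHALLENGE_TTL or failures >= MAX_ATTEMPTS:
        _challenges.pop(key, None)  # expired or locked out; pruning doubles as cleanup
        return None
    if not otp_ok:
        _challenges[key] = (user_id, issued_at, failures + 1)  # record the failed attempt
        return None
    del _challenges[key]  # success: key can never be replayed
    return user_id
```

The per-attempt failure counter is what the stateless JWT design can't give you: after three bad OTPs the key is dead, no matter how patient the attacker is.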
I don't know, what's the use-case? Server-server interaction? Then they still need to share a key to validate the JWT. And probably all but the user-facing server don't need to be exposed to the public internet anyway, so why the hoopla of adding JWT? I haven't looked into it much because I don't believe in this microservice architecture either, but if I were to go down that road I'd probably try gRPC/protobufs and still not bother with JWT.
You can do it all via individual browser cookies but it will be complicated. However you can dump session cookies to a database and then you can do claims locally on the server and use that cookie to tie it all together.
So I think you can do it either way.
JWTs are mutually authenticated (shared secret) but cookies are not.
JWT might do other things for you, like standardizing how to deal with key rotation (using the "kid" header and JWKS discovery URLs), or tying a bearer token to a PoP structure (DPoP), but that's all about standardization. And as a standard JWT is too flexible and ambiguous. There are better proposed standards out there, and for most of the things JWT is used for (non-interoperable access tokens) it's overkill.
I agree that JWTs don't really do anything more than a cookie couldn't already do, but I think the use case is for apps, not web browsers. In particular apps that do raw HTTP API calls and do not implement a cookie jar. And then because most companies do "app first development", we end up having to support JWT in the web browser too, manually putting it into localstorage or the application state, instead of just leveraging the cookie jar that was already there.
We just recently had to implement an SSO solution using JWT because the platform only gave out JWTs, so we ended up putting the JWT inside an encrypted HttpOnly cookie. Seemed a bit like a hat-on-a-hat, but eh.
> We just recently had to implement an SSO solution using JWT because the platform only gave out JWTs, so we ended up putting the JWT inside an encrypted HttpOnly cookie. Seemed a bit like a hat-on-a-hat, but eh.
Why would you think that? Cookies are a perfectly normal place to store JWTs for web applications. If your frontend is server-side-generated, the browser needs to authenticate the very first request it sends to the server and can't rely on anything apart from cookies anyway.
Now that JWT exists, there is a standard way to do it so you don’t have to write the same boring code a bunch of times in different languages. You just have one string you pass in one field and if you tell someone else that it’s a JWT, they know how to parse it. You don’t have to document your own special way anymore.
At the end of the day, it’s just a standard for that specific problem that didn’t have a standard solution before. If passing data like that is not a problem for your use case, then you don’t need the tool.
To use your Protobuf example, there was a time before Protobuf or tools like it existed. I can tell you that writing the exact same protocol code by hand in Java, PHP, and Python is absolute tedious work. But if it never came up that you had to write your own protocol, you neither know the pain of doing it manually nor the pleasure of using Protobuf, and that’s fine.
The use-case I always remember people presenting for JWTs was mostly part of the "serverless" fad/hype.
The theory was presented like this: If you use a JWT your application logic can be stateless when it comes to authentication; So you don't need to load user info from a session database/kv store, it's right there in the request...
The only way that makes any sense to me, is if your application has zero storage of its own: it's all remote APIs/services, including your authentication source. I'm sure there are some applications like that, but I find it hard to believe that's how/why it's used most of the time.
Never underestimate this industry's ability to get obsessed with the new shiny.
I had an eye opening experience many years ago with a junior dev (I was significantly more experienced than he was then, but wouldn't have called myself "senior" at the time).
He had written an internal tool for the agency we both worked for/through. I don't recall the exact specifics, but I remember the accountant was involved somewhat, and it was a fairly basic CRUD-y PHP/MySQL app. Nothing to write home about, but it worked fine.
At some point he had an issue getting his php/mysql environment configured (I guess on a new laptop?) - this was before the time of Docker; Vagrant was likely already a thing but he wasn't using it.
From what he explained afterwards I believe it was just the extremely common issue that connecting to "localhost" causes the mysql client to attempt a socket connection, and the default socket location provided in php isn't always correct.
As I said, I heard about this after he'd decided that connection issue (with an app that had already been working well enough to show off to powers-that-be and get approval to spend more paid time on it) was enough to warrant re-writing the entire thing to use MongoDB.
As I said: never underestimate this industry's ability to get obsessed with the new shiny.
Cracking 256 bits by brute force is unrealistically unlikely, as you said, and there are many systems that could be compromised by that much compute; an isolated JWT signature seems like just one very specific example.
A nice benefit of JWT for me is that it can be asymm signed and verified (ID tokens)
JWTs across servers are typically used with signatures, not in HMAC mode (so no globally shared HMAC keys). Then the issuer simply exposes a JWKS endpoint for downstream consumers (so no additional maintenance to distribute public keys).
It seems to be a NIH-ed serialization format with hard-coded cipher suites. It doesn't seem to support use cases like delegation and claims.
And sorry, but the article https://paragonie.com/blog/2017/03/jwt-json-web-tokens-is-ba... is just weak. The only real vulnerability is the key type confusion (HS256 vs. RS256) enabled by libraries in weakly-typed languages, and easily fixed. Other:
> RSA with PKCS #1v1.5 padding is vulnerable to a type of chosen-ciphertext attack, called a padding oracle.
Not applicable.
> If you attempt to avoid invalid curve attacks by using one of the elliptic curves for security, you're no longer JWT standards-compliant.
This is just nonsense. JWT allows Ed25519: https://www.rfc-editor.org/rfc/rfc8037 Moreover, I'm not aware of real attacks against NIST curves.
Here's a list of "alg: none" JWT vulns. Every one of these would've been avoided had the standard been something like PASETO which didn't allow that. https://github.com/zofrex/howmanydayssinceajwtalgnonevuln/bl...
You say "I am not going to watch the talk" and then you continue to argue in bad faith. Please walk away if you're not going to engage honestly.
You parade the alg=none vulnerability that has been fixed long ago as the reason to reinvent the world. It's simply not.
PASETO has exactly the same vulnerabilities. You can specify a different version, and a buggy implementation can misinterpret it. With PASETO, the algorithm selection is fully under the control of the attacker.
[1] https://www.microsoft.com/en-us/security/blog/2023/07/14/ana...
`alg=none` and the HS256/RS256 key confusion were really the only ones that are JWT-specific. Invalid curve attacks are algorithm-specific, and JWT allows Ed25519 signatures.
And so far, I don't think NIST curves have been cracked? iOS secure enclave only supports them, for example.
The closest OAuth gets to mandating JWT is with client authentication and proof-of-possession. The OAuth Best Current Practices RFC (9700) recommends using asymmetric JWT for client authentication in case you cannot use Mutual TLS (which is usually the case). This recommendation will probably be rolled into the new OAuth 2.1 standard (it is included in the draft). OAuth 2.1 also mentions the JWT-based DPoP as one of the two recommended methods for implementing sender-constrained access tokens (the other one is Mutual TLS again).
Verify: https://cozejson.com
OAuth doesn't, OIDC does for the ID token[0]. The initial OAuth RFCs were released 3 years before JWT was defined. But many extensions of OAuth do require or support JWTs.
Either way, I'm just not sure the demand is there.
My employer has had an open issue for PASETO[1] for years but hasn't seen much community support. Some other interesting comments here[2]. Looks like most of the implementations[3] are libraries rather than standalone auth servers.
0: https://openid.net/specs/openid-connect-core-1_0.html#IDToke...
1: https://github.com/fusionAuth/fusionauth-issues/issues/773
2: https://www.reddit.com/r/KeyCloak/comments/1e2h5w7/is_paseto...
It shouldn't be about demand. It's about solving the danger of these poorly designed APIs to improve overall web security.
For the simplest use case of client auth state: you want to be able to revoke auth straight away if an account is compromised. This means you have to check the auth database on every request anyway, and you could probably have got whatever else was in the claim from there just as quickly.
Same with roles; if you downgrade an admin user to a lower 'class' of user then you don't want it to take minutes to take effect.
So then all you are left with is a unified client id format, which is somewhat useful, but not really the 'promise' of JWTs (I feel?).
FWIW, I built a system previously that got around this "having to check the DB on every access to check for revocations" issue that worked quite well. Two important things to realize:
1. Revocations (or what is usually basically "explicit logout") are actually quite rare in a lot of user application patterns. E.g. for many web apps users very rarely explicitly log out. It's even rarer for mobile apps.
2. You only need to keep around a list of revocations for as long as your token expiry is. For example, if your token expiration is 30 mins, and you expire a user's tokens at noon, by 12:30 PM you can drop that revocation statement, because any tokens affected by that revocation would have expired anyway.
Thus, if you have a relatively short token expiration (say, a half hour), the size of your token expiration list can almost always fit in memory. So what I built:
1. The interface to see if a token has been revoked is basically "getEarliestTokenIssuedAt(userId: string): Date" - essentially, what is the earliest possible issuance timestamp for a token for a particular user to be considered valid. So, revoking a user's previously issued tokens means just setting this date to Now(); any token issued before that will be considered invalid.
2. I had a table in postgres that just stored the user ID and earliest valid token date. However, I used postgres' NOTIFY functionality to send a broadcast to all my servers whenever a row was added to this table.
3. My servers then just had what was a local copy of this table, but stored in memory. Again, remember that I could just drop entries that were older than the longest token expiration date, so this could fit in memory.
On the off-chance that somehow the current revocation list couldn't fit in memory, I built something into the system that allowed it to essentially say "memory is full", which would cause it to fall back to calling Postgres; but again, that situation would naturally clear up after a few minutes once revocations went back down and the token expiration window passed.
This sounds more complicated than it actually was. It has the benefits of:
1. Almost no statefulness, which was great for scalability.
2. Verifying a token could still almost always be done in memory. Over a couple years of running the system I never actually hit a state where the in-memory revocation list got too big.
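A condensed sketch of the verifier-side cache described above (class and method names are my own; the LISTEN/NOTIFY plumbing is assumed to call on_revocation on every server):

```python
import time
from typing import Dict, Optional

class RevocationCache:
    """In-memory mirror of the revocation table: user_id -> earliest valid iat."""

    def __init__(self, token_ttl: float):
        self.token_ttl = token_ttl  # e.g. 1800 seconds for 30-minute tokens
        self._earliest_valid: Dict[str, float] = {}

    def on_revocation(self, user_id: str, revoked_at: float) -> None:
        # Revoking means "any token issued before this instant is invalid"
        self._earliest_valid[user_id] = revoked_at

    def is_token_valid(self, user_id: str, issued_at: float) -> bool:
        cutoff = self._earliest_valid.get(user_id)
        return cutoff is None or issued_at >= cutoff

    def prune(self, now: Optional[float] = None) -> None:
        # Entries older than the token TTL are safe to drop: every token they
        # would reject has already expired on its own.
        now = time.time() if now is None else now
        self._earliest_valid = {
            u: t for u, t in self._earliest_valid.items() if now - t < self.token_ttl
        }
```

Token verification stays a pure in-memory lookup; the database is only touched when a revocation actually happens.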
And this sort of thing is basically what redis is for, right? Spin up a docker container, use it as a simple key-value store (really just a key store). When someone manually invalidates a token, push it in, with the expiry date it has anyway.
Is this as secure as doing a blacklist for non-expired tokens? No, it isn't. It is a sane tradeoff between decent security and implementation complexity.
> A "logout" action from the user should just delete the JWT from the device he is using.
I wouldn't say should. It may. If you're fine with the inability to terminate sessions on other devices. Also, as vintermann suggested, you can use a faster, domain-specific database if you're concerned about this becoming an issue. And sometimes edge cases like this aren't worth considering until you hit them.
> 1. Almost no statefulness, which was great for scalability.
This is called "eventual consistency", it's probably fine in practice but you still do have a lot of state. Personally, if I have any in-application state at all, I would use a sticky cookie on the LB to send each client to the same instance.
But still, I'm not sure that I've seen an auth/roles database that couldn't fit (at least) the important stuff in RAM, fwiw. Even 1TB of RAM is relatively affordable (if you are not on the hyperscalers) and you could fit billions of users in that, which at least in theory means you can just check everything and not have another store to worry about.
You can keep revocations in a very fast lookup system (eg broadcasts + in-memory store), combined with reasonably short token renewals, like 5-60 minutes.
Massively cuts down the number of token validity checks, and makes the system tolerant to downtimes of the auth system. That's less relevant for basic apps where the auth data is in the same DB as all the other data, but that is rarely the case in larger systems.
The classic solution to avoid this (in the common case where you can fit the entire revocation list in memory) is to have a push-based or pub/sub-based mechanism for propagating revocations to token verifiers.
If you read the draft, the TTL is clearly specified as optional.
> (...) and does not get re-loaded until that TTL expires.
That is false. The draft clearly states that the optional TTL is intended to "specify the maximum amount of time, in seconds, that the Status List Token can be cached by a consumer before a fresh copy SHOULD be retrieved."
> You can have a lower TTL for statuslist, but that comes at the cost of higher frequency of high-latency network calls due to cache misses.
The concept of a TTL specifies the staleness limit, and anyone can refresh the cache at a fraction of the TTL. In fact, some cache revalidation strategies trigger refreshes at random moments well within the TTL.
There is also a practical limit to how frequently you refresh a token revocation list. Some organizations have a 5-10min tolerance period for basic, general-purpose access tokens, and fall back to shorter-lived and even one-time access tokens for privileged operations. So if you have privileged operations being allowed when using long-lived tokens, your problem is not the revocation list.
Again, it doesn't matter if TTL and caching is optional, what matters is that this specification has NOTHING to do with a pub/sub-based or push-based mechanism as described by GGGP. This draft specifies a list that can be cached and/or refreshed periodically or on demand. This means that there will always be some specified refresh frequency and you cannot have near-real-time refreshes.
> There is also a practical limit to how frequently you refresh a token revocation list. Some organizations have a 5-10min tolerance period for basic, general-purpose access tokens, and fall back to shorter-lived and even one-time access tokens for privileged operations. So if you have privileged operations being allowed when using long-lived tokens, your problem is not the revocation list.
That's totally cool. Some organizations are obviously happy with delayed revocations for non-sensitive operations, which they could easily achieve with stateful refresh tokens, without the added complexity of revocation lists. Stateful and revokable refresh tokens are already supported by many OAuth 2.0 implementations such as Keycloak and Auth0[1]. All you have to do is set the access token's TTL to 5-10 minutes and you'll get the same effect as you've described above. The performance characteristics may be worse, but many apps that are happy with delayed revocation are happy with this simple solution.
Unfortunately, there are many products where immediate revocation is required. For instance, administrative dashboards and consoles where most operations are sensitive. You can force token validity check through an API call for all operations, but that makes stateless access tokens useless.
What the original post above proposed is a common pattern[2] that lets you have the performance characteristics (zero extra latency) of stateless tokens together with the security characteristics of a stateful access token (revocation is registered in near-real-time, usually less than 10 seconds). This approach is supported by WSO2[3], for instance. The statuslist spec does nothing to standardize this approach.
[1] https://auth0.com/docs/secure/tokens/refresh-tokens/revoke-r...
[2] See "Decentralized approach" in https://dzone.com/articles/jwt-token-revocation
[3] https://mg.docs.wso2.com/en/latest/concepts/revoked-tokens/#...
It only depends on your own requirements. You can easily implement pull-based or push-based approaches if they suit your needs. I know some companies enforce a 10min tolerance on revoked access tokens, and yet some resource servers poll them at a much higher frequency.
> Again, it doesn't matter if TTL and caching is optional (...)
I agree, it doesn't. TTL is not relevant at all. If you go for a pull-based approach, you pick the refresh strategy that suits your needs. TTL means nothing if it's longer than your refresh periods.
> This draft specifies a list that can be cached and/or refreshed periodically or on demand. This means that there will always be some specified refresh frequency and you cannot have near-real-time refreshes.
Yes. You know what makes sense for you. It's not for the standard to specify the max frequency. I mean, do you think the spec should specify max expiry periods for tokens?
Try to think about the problem. What would you do if the standard somehow specified a TTL and it was greater than your personal needs?
Some auth servers implement it. Keycloak does[0]. Auth0 doesn't as far as I can tell[1]. FusionAuth (my employer) has had it listed as a possible feature for years[2] but it never has had the community feedback to bubble it up to the top of our todo list.
0: https://www.keycloak.org/securing-apps/oidc-layers#_token_re...
1: https://auth0.com/docs/secure/tokens/revoke-tokens
2: https://github.com/fusionAuth/fusionauth-issues/issues/201
It's not rare, it happens constantly in enterprise software, project management software, anything where you have collaboration.
What is so frustrating about tech like JWTs is that it fits the fairly rare, high-profile websites like Reddit, Netflix, etc., but doesn't fit ANYTHING else.
Everyone else wants immediate revocation of rights, not waiting for a token to expire.
And yet we all have to suffer this subpar tech because someone wrote a blog post about it and a bunch of moronic software "architects" made it the only option. If you don't JWT somehow you're doing it wrong, even though it should in fact be an extremely niche way of doing Auth at scale.
Simple cookie based tokens were and still are a much better choice for many applications.
The number of revoked tokens compared to all active tokens should still be tiny in those systems, wouldn’t you agree?
> Everyone else wants immediate revocation of rights, not waiting for a token to expire.
With a revocation list you can still have that. Once you propagated your revocation to all relying parties the token effectively expires early.
In fact, it can be used to create simple tokens—even if you store them in a database in a traditional authentication sense.
But it is also helpful to be able to use OIDC, for example, with continuous delivery workflows to authenticate code for deployment. These use JWT and it works quite well I think.
Note: technically JWT is only one of the specs so it’s not exactly correct how I’m referring to it, but I think of them collectively as JWT. :)
I'm trying to be sensible here not dream up straw man scenarios of which there are many.
Then you have anything that handles financial data. If you're a bank and you get a call that a fraudster is taking over an account, you want to be able to revoke access straight away. Waiting another 5 minutes could mean many thousands more in losses (simplified example, but you hopefully get my drift), which arguably the bank may be held liable for by the regulator.
Also many other "UX" problems, you also don't want roles to be out of sync for 5 minutes. Imagine you are collaborating on a web app and you need to give a colleague write access to the system for an urgent deadline. She's sitting next to you and you have to wait 5 minutes (or do a forced login/logout) before you get access, even after refreshing the page.
Finally, it's really far from ideal to be using 5 min refreshes. Idle users with a tab open will be constantly pinging the backend for refreshes. Imagine some sort of IoT use case where you have thousands of devices on very bandwidth-limited wide area networks.
Furthermore - it's a total mess on mobile apps. Imagine you have an app (say a food delivery app) that is powered by push notifications for delivery status. If you've got a 5 min token and you push down an update via push notifications telling it to get new data from a HTTP endpoint to update a widget, your token will almost certainly be expired by the time the delivery is on the way. You then need to do a background token refresh which may or may not be possible on the OS in question.
This is only conceivably true if your ability to design services only goes as far as reusing Reddit-like use cases for everything and anything.
But everyone else is not encumbered by that limitation.
> Everyone else wants immediate revocation of rights, not waiting for a token to expire.
Where exactly does a JWT prevent you from rejecting revoked tokens? I mean, JWTs support short-lived tokens, jti denylists, single-use tokens with nonces, etc. Why are you blaming JWTs for problems you're creating for yourself?
You mean the system that handles revocations? If so, your own system will be vulnerable if you continue to do business as usual while it's down.
For example, a magic link sent via email can have a substantial validity duration.
My point is…JWT can be used in a number of contexts.
Unlike e.g. challenge-response or signature authentication.
I fail to see the relevance of your scenarios regarding JWTs. I mean, I get your frustration. However, none of it is related to JWTs. Take a moment to read what you wrote: if your account is compromised, the attacker started abusing credentials the moment he got them. The moment the attacker got hold of valid credentials is not the moment you discovered the attack, let alone the moment you forced the compromised account through a global sign-off. This means that your scenario does not prevent abuse. You are revoking a token that was already being abused.
Also, as someone who implemented JWT-based access controls in resource servers, checking revocation lists is a basic scenario. It's very often implemented as a very basic and very fast endpoint that provides a list of JWT IDs. The resource server polls this endpoint to check for changes and checks the list on every call as part of the JWT validation. The time window between revoking a token and rejecting it in a request is dictated by how frequently you poll the endpoint. Do you think, say, 1 second is too long?
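A sketch of that resource-server side, with the endpoint replaced by an injected fetch callable since the URL and payload format are deployment-specific:

```python
from typing import Callable, Iterable, Set

class JtiDenylist:
    """Polled revocation list. `fetch` stands in for the revocation endpoint
    (hypothetical); a real deployment would call refresh() on a timer at the
    desired interval, e.g. once per second."""

    def __init__(self, fetch: Callable[[], Iterable[str]]):
        self._fetch = fetch
        self._revoked: Set[str] = set()

    def refresh(self) -> None:
        # Called periodically; replaces the in-memory set in one assignment.
        self._revoked = set(self._fetch())

    def is_revoked(self, jti: str) -> bool:
        # Checked on every request as part of normal JWT validation; O(1).
        return jti in self._revoked
```

The per-request cost is one set lookup, so the local-verification benefit of JWTs survives; only the revocation latency is bounded by the polling interval.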
> Same with roles; if you downgrade an admin user to a lower 'class' of user then you don't want it to take minutes to take effect.
It's the exact same scenario: you force the client to refresh its access tokens, and you revoke the tokens that were already issued. Again, is 1 second too long?
Also, nothing forces you to include roles in a JWT. OAuth2 doesn't. Nothing prevents your resource server from just using the jti to fetch roles from another service. Nevertheless, are you sure that service would be updated as fast as, or faster than, a token revocation?
> So then all you are left with is a unified client id format, which is somewhat useful, but not really the 'promise' of JWTs (I feel?).
OAuth2 is just that. What's wrong with OAuth?
Also, it seems you are completely missing the point of JWTs. Their whole shtick is that they allow resource servers to verify access tokens locally without being forced to consume external services. Token revocation and global sign-offs are often reported as gotchas, but given how infrequently these scenarios take place and how trivial they are to implement (periodically polling an endpoint), they hardly change that.
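The "verify locally" point can be illustrated with a minimal HS256 round trip using only the Python standard library (a sketch to show the mechanism, not a substitute for a vetted JOSE library; the secret and claims are made up):

```python
import base64, hashlib, hmac, json, time

def b64url_encode(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign_hs256(claims: dict, secret: bytes) -> str:
    header = b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url_encode(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = hmac.new(secret, signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url_encode(sig)}"

def verify_hs256(token: str, secret: bytes) -> dict:
    # Purely local check: no call to the issuer is needed.
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    claims = json.loads(b64url_decode(payload_b64))
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims

secret = b"demo-shared-secret"  # hypothetical
token = sign_hs256({"sub": "alice", "exp": int(time.time()) + 300}, secret)
print(verify_hs256(token, secret)["sub"])  # alice
```

Everything the resource server needs (the shared secret and the token itself) is local, which is why revocation requires the extra denylist step discussed elsewhere in the thread.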
RFC 8705 section 3[0] binds tokens by adding a signature of the client certificate presented to the server doing authentication. Then any server receiving that token can check that the client certificate presented is the same (strictly speaking, that it hashes to the same value). This works great if you have client certs everywhere and can handle provisioning and revoking them.
RFC 9449[1] is a more recent one that uses cryptographic primitives in the client to create proof of private key possessions. From the spec:
> The main data structure introduced by this specification is a DPoP proof JWT that is sent as a header in an HTTP request, as described in detail below. A client uses a DPoP proof JWT to prove the possession of a private key corresponding to a certain public key.
These standards are robust ways to ensure a client presenting a token is the client who obtained it.
Note that both depend on other secrets (client cert, private key) being kept secure.
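As a rough illustration of the RFC 9449 shape, the unsigned parts of a DPoP proof look like the following (field names come from the spec; the ES256 signature over these parts, which requires the client's private key and a JOSE library, is elided, and the key coordinates are placeholders):

```python
import json, secrets, time

# Header and claims of a DPoP proof JWT per RFC 9449 (unsigned sketch).
# The real proof is signed with the client's private key, whose public
# half appears in the "jwk" header field.
header = {
    "typ": "dpop+jwt",
    "alg": "ES256",
    "jwk": {"kty": "EC", "crv": "P-256", "x": "<public-x>", "y": "<public-y>"},
}
claims = {
    "jti": secrets.token_urlsafe(16),          # unique per proof, prevents replay
    "htm": "POST",                             # HTTP method of the request
    "htu": "https://api.example.com/orders",   # hypothetical target URI
    "iat": int(time.time()),                   # issued-at time
}
print(json.dumps(claims, indent=2))
```

The server checks that `htm`/`htu` match the actual request and that the proof's signature verifies against the key the access token was bound to, which is what makes a stolen bearer token useless on its own.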
It really depends on the system. In my experience, there are tons of apps that want to be able to revoke access but weigh that against transparent re-authentication. OIDC handles both nicely with:
* short access/id token lifetimes (seconds to minutes)
* regular transparent refreshes of those tokens (using a refresh token that is good for days to months)
This flexibility lets developers use the same technology for banks (with a shorter lifetime for both access/id tokens and refresh tokens) and consumer applications (with a short lifetime for access/id tokens and a longer lifetime for refresh tokens).
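The bank-vs-consumer flexibility amounts to picking different lifetimes for the same mechanism. A minimal sketch (the specific TTL numbers are invented for illustration):

```python
import time

# Hypothetical lifetime profiles: tighter for a bank, looser for a
# consumer app, same refresh machinery underneath.
PROFILES = {
    "bank":     {"access_ttl": 120, "refresh_ttl": 15 * 60},
    "consumer": {"access_ttl": 300, "refresh_ttl": 30 * 24 * 3600},
}

def needs_refresh(issued_at: float, profile: str, now=None) -> bool:
    """True once the access token has outlived its profile's TTL."""
    now = time.time() if now is None else now
    return now - issued_at >= PROFILES[profile]["access_ttl"]

t0 = 1_000_000.0
print(needs_refresh(t0, "bank", now=t0 + 60))       # False
print(needs_refresh(t0, "bank", now=t0 + 180))      # True: refresh transparently
print(needs_refresh(t0, "consumer", now=t0 + 180))  # False
```

The client refreshes transparently while the refresh token is valid; revoking the refresh token caps how long access can continue.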
And from the backend perspective, most frameworks have session tracking built-in with cookies so it's super easy to dismiss one or all clients.
With JWT, however, that rarely exists, and you need to re-implement the whole session shebang in order to keep track of the clients.
I don't know what the better solution looks like, but dealing with OAuth and JWT setups is kind of horrible, regardless of the technology stack being used.
I think it is a really good trade-off, as in the case of a security breach you have an easy way to mitigate leaked tokens. The downside is that your users will have to log in again on all devices. If you do not want to burden your users with that, you should ask yourself how often you actually have security breaches and leaked tokens; if it's often, you might have other issues going on.
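The mitigation being described is essentially signing-key rotation: a short sketch with the stdlib `hmac` module (key names are made up) shows why rotating the key invalidates every outstanding token at once.

```python
import hashlib, hmac

def sign(data: bytes, key: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

old_key, new_key = b"key-v1", b"key-v2"
token_sig = sign(b"header.payload", old_key)

# After a breach, rotate the signing key: every token signed under the
# old key now fails verification, forcing all devices to re-authenticate.
print(hmac.compare_digest(token_sig, sign(b"header.payload", old_key)))  # True
print(hmac.compare_digest(token_sig, sign(b"header.payload", new_key)))  # False
```

This is the blunt instrument; per-token revocation lists are the finer-grained alternative when you don't want to log everyone out.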
Then, if that has some sort of limitation for your app's specific use case, you can look into migrating to JWT.
But the standards are standards for a reason.
https://www.youtube.com/watch?v=IgKRGS6cQWw
Here's the video description:
> JWT is an IETF standard security token format that, due to perceived simplicity and widespread library availability, has been extremely popular in recent years. Despite that popularity (or maybe, in part, because of it), JWT has been heavily derided by reputable people in information security ("horrible standard", "RFC was made by monkeys", "Internet’s worst cryptography standard", "JWT is a disaster ... amazing how bad it is", "simplistic, complicated, and unsafe all at the same time", and "almost impossible to build a secure JWT library" ...give just a taste of the sentiment).
> The criticism has been substantiated and amplified by a steady stream of public vulnerabilities in libraries and deployments. Indeed there have been serious and legitimate security problems with JWT and many of them can be attributed directly to fundamental flaws in the specification itself that allowed, or even encouraged, such implementation mistakes. But is JWT irredeemably flawed? This session will endeavor to take a hard look at that very question (complete with the presenter's own sense of inadequacy and fear of culpability in JWT's flaws) with a review/overview of JWT fundamentals and a pragmatic look at each of the most common and/or biting criticisms and associated real-world vulnerabilities.
He discusses the architectural advantages of JWT, but also what JWTs lack:
"JWTs are a passport without a picture. A very dangerous thing".
His solution: OAuth2 + JWT + Signatures
https://www.ietf.org/archive/id/draft-sheffer-oauth-rfc8725b...
JSON Web Token Best Current Practices
90s_dev•1mo ago
It's certainly a sign of something's utility and versatility, for sure. Congrats.