Oh. Em. Gee.
Is this a common take on Okta? The article and comments suggest...maybe? That is frightening considering how many customers depend on Okta and Auth0.
auth0, as a product, distinguished itself with a modern, streamlined architecture and a commendable focus on developer experience. As an organisation, auth0 further cemented its reputation through the publication of a consistently high-calibre technical blog. Its content delves deeply into advanced subjects such as fine-grained API access control via OIDC scopes and the RBAC, ABAC and LBAC models – a level of discourse rare amongst vendors in this space.
It was, therefore, something of a jolt – though in retrospect, not entirely unexpected – when Okta acquired auth0 in 2021. Whether this move was intended to subsume a superior product under the mediocrity of its own offering or to force a consolidation of the two remains speculative. As for the fate of the auth0 product itself, I must admit I am not in possession of definitive information – though history offers little comfort when innovation is placed under the heel of corporate, IPO-driven strategy.
Surprisingly, I have found that many people struggle to wrap their heads around the relatively simple concepts of RBAC, ABAC and, more recently, LBAC. auth0 did a great job of unpacking these less-than-trivial concepts into language that made them accessible to a wider audience, which, in my book, is a great feat and accomplishment.
Okta has committed to – and has a consistent track record of – delivering at least one full-scale security breach and steady user-experience degradation to its customers every year, completely free of charge.
I suppose it has been a couple years since the last... [0]
[0] https://techcrunch.com/2023/11/29/okta-admits-hackers-access...
1. OAuth2 and OIDC are inherently intricate and alarmingly brittle – the specifications, whilst theoretically robust, leave sufficient ambiguity to spawn implementation chaos.
2. The proliferation of standards results in the absence of any true standard – token formats and claim structures vary so wildly that the notion of consistency becomes a farce – a case study in design by committee with no enforcement mechanism.
3. ID tokens and claims lack uniformity across providers – interoperability, far from being an achievable objective, has become an exercise in futility. Every integration must contend with the peculiarities – or outright misbehaviours – of each vendor’s interpretation of the protocol. What ought to be a cohesive interface degenerates into a swamp of bespoke accommodations.
4. There is no consensus on data placement – some providers, either out of ignorance or expedience, attempt to embed excessive user and group metadata within query string parameters – a mechanism limited to roughly 2k characters. The technically rational alternative – the UserInfo endpoint – is inconsistently implemented or left out entirely, rendering the most obvious solution functionally unreliable.
Each of these deficiencies necessitates a separate layer of abstraction – a bespoke «adapter» for every Identity Provider, capable of interpreting token formats, claim nomenclature, pagination models, directory synchronisation behaviour, and the inevitable, undocumented bugs. Such adapters must then be ceaselessly maintained, as vendors alter behaviour, break compatibility, or introduce yet another poorly thought-out feature under the guise of progress.
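To make the fourth point concrete, here is a minimal sketch of the fallback logic such an adapter ends up carrying – «provider.userinfoEndpoint» and «provider.adapter» are illustrative names of my own invention, not any vendor's API:

    // Sketch: ID token claims may arrive thin (or truncated by
    // query-string limits), so fall back to the UserInfo endpoint
    // -- when it exists and actually works.
    async function resolveClaims(provider, idTokenClaims, accessToken) {
      let claims = idTokenClaims;
      if (!claims.email && provider.userinfoEndpoint) {
        const res = await fetch(provider.userinfoEndpoint, {
          headers: { Authorization: 'Bearer ' + accessToken },
        });
        if (res.ok) {
          claims = { ...claims, ...(await res.json()) };
        }
      }
      // Per-vendor quirks are delegated to the adapter proper.
      return provider.adapter(claims);
    }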
All of this – the mess, the madness, and the maintenance burden – is exhaustively documented[0]. A resource, I might add, that reads less like a standard and more like a survival manual.
[0] https://www.pomerium.com/blog/5-lessons-learned-connecting-e...
Beyond the aforementioned concerns, one encounters yet another quagmire – the semantics of OIDC claims, the obligations ostensibly imposed by the standard, and the rather imaginative ways in which various implementations choose to interpret or neglect those obligations.
Please allow me to illustrate with a common and persistently exasperating example: user group handling, particularly as implemented by Okta and Cognito. The OIDC spec, in its infinite wisdom, declines to define a dedicated claim for group membership. Instead, it offers a mere suggestion – that implementers utilise unique namespaces. A recommendation, not a mandate – and predictably, it has been treated as such.
In perfect accordance with the standard’s ambiguity, Okta provides no native «groups» claim. The burden, as always, is placed squarely upon the customer to define a custom claim with an arbitrary name and appropriate mapping. User group memberships (roles) are typically sourced from an identity management system – not infrequently, and regrettably, from an ageing Active Directory instance or, more recently, a shiny new Entra ID instance.
Cognito, by contrast, does define a claim – «cognito:groups» – to represent group membership as understood by Cognito. It is rigid, internally coherent, and entirely incompatible with anything beyond its own boundaries.
Now, consider a federated identity scenario – Okta as the upstream identity provider, federated into Cognito. Here, Cognito permits rudimentary claim mapping – simple key-value rewrites. However, such mappings do not extend to the «cognito:groups» structure, nor do they support anything approaching a nuanced translation. The result is a predictable and preventable failure of interoperability.
Thus, despite both platforms ostensibly conforming to the same OIDC standard, they fail to interoperate in one of the most critical domains for medium to large-scale enterprises: user group (role) resolution. The standard has become a canvas – and each vendor paints what they will. The outcome, invariably, is less a federation and more a fragmentation – dressed in the language of protocol compliance.
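To illustrate, this is the sort of shim the situation forces upon every integrator – «groups» here assumes the customer-defined Okta custom claim mentioned above, whilst «cognito:groups» is Cognito's own:

    // Sketch: resolve group membership across providers that each
    // spell the "same" OIDC concept differently.
    function resolveGroups(claims) {
      return (
        claims['cognito:groups'] // Cognito-native
        ?? claims.groups         // Okta custom claim, name chosen by the customer
        ?? []                    // the spec mandates nothing, so assume nothing
      );
    }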
> I've implemented both OAuth2 and OpenID Connect multiple times
Whilst I do not doubt that you have made multiple earnest attempts to implement the specification, I must express serious reservations as to whether the providers in question have ever delivered comprehensive, interoperable support for the standard in its entirety. It is far more plausible that they focused on a constrained subset of client requirements, tailoring their implementation to satisfy those expectations alone at the IdP level and nothing else. Or, they may have delivered only the bare minimum functionality required to align themselves, nominally, with OAuth2 and OIDC.
Please allow me to make it abundantly clear: this is neither an insult aimed at you nor an indictment of your professional capabilities. Rather, it is a sober acknowledgement of the reality – that the standard itself is both convoluted and maddeningly imprecise, making it extraordinarily difficult for even seasoned engineers to produce a high-quality, truly interoperable implementation.
> I'm sure you're right that vendors take liberties -- that is almost always the case, and delinquency of e.g. Okta is what started this thread.
This, quite precisely, underscores the fundamental purpose of a standard – to establish a clear, concise, and unambiguous definition of that which is being standardised. When a standard permits five divergent interpretations, one does not possess a standard at all – one has five competing standards masquerading under a single name.
Regrettably, this is the exact predicament we face with OAuth2 and OIDC. What should be a singular foundation for interoperability has devolved into a fragmented set of behaviours, each shaped more by vendor discretion than by protocol fidelity. In effect, we are navigating a battlefield of pluralities under the illusion of unity – and paying dearly for the inconsistency.
Needless to say, OAuth2 and OIDC are still the best that we have had, especially compared to their predecessors, and by a large margin.
Happy to chat (email in profile), or you can visit our comparison page[0] or detailed technical migration guide[1].
0: https://fusionauth.io/compare/fusionauth-vs-auth0
1: https://fusionauth.io/docs/lifecycle/migrate-users/provider-...
(Disclaimer: I work for Zitadel).
They have an enterprise version now (mostly for support and bleeding-edge features that later make it into the open-source product).
It's pretty easy to self-host. I have been doing it for a small site for years, and I couldn't even get any other open-source solution to work. The others are mostly huge, with fewer features.
We have lambdas (basically JavaScript code that can make API calls[0] and be managed and tested[1]) that execute at fixed points in the auth lifecycle:
- before a login is allowed
- before a token is created
- after a user returns from a federated login (SAML, OIDC, etc)
- before a user registers
And more[2].
And we're currently working on one for "before an MFA challenge is issued"[3].
There are some limitations[4]. We don't allow, for instance, loading of arbitrary JavaScript libraries.
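For a flavour, a "before a token is created" (JWT populate) lambda looks roughly like this – simplified, and the claim names are your choice:

    // JWT populate lambda: runs before the token is signed.
    function populate(jwt, user, registration) {
      // Promote application-level roles into a claim of your choosing.
      jwt.roles = registration.roles || [];
      // Arbitrary user data can be copied into the token here too.
      jwt.favoriteColor = user.data && user.data.favoriteColor;
    }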
Not sure if that meets all your needs, but thought it was worth mentioning.
0: https://fusionauth.io/docs/extend/code/lambdas/lambda-remote...
1: https://fusionauth.io/docs/extend/code/lambdas/testing
2: full list here: https://fusionauth.io/docs/extend/code/lambdas/
3: https://github.com/FusionAuth/fusionauth-issues/issues/2309
4: https://fusionauth.io/docs/extend/code/lambdas/#limitations
When I brought it up, they said they didn't have anyone smart enough to host an identity solution.
They didn't have anyone smart enough to use Okta either. I had caught multiple dealbreakers-for-me, such as dubious/conflicting config settings resulting in exposures and actual outages caused by forced upgrades, not to mention their lackluster responses to bona fide incidents over the years.
I use Authentik for SSO in my homelab, fwiw.
I'll never understand this thinking.
Definitely makes things safer than users not knowing about them.
It's hardly surprising that the market prefers to offload that responsibility to players it thinks it can trust, who operate at a scale where concerns about high traffic go away.
I'll concede there is some complexity in integrating with everything and putting up with the associated confusion. And granted, the stakes are a little raised due to the nature of identity and access and, as you point out, what could go wrong. Implementation is annoying, both writing the identity solution and then deploying and operating it. But the deployment & operation part is still there if you go with Okta or OneLogin or Cognito or whomever.
The implementation is a capital-type cost that is substantially solved already by the various F/OSS solutions people are mentioning – it's just a docker pull and some config work to get a POC going.
There are much harder problems in tech IMO, anything ill-defined for starters.
The C-level folks seem to think they are buying some kind of indemnity with these "enterprise" grade solutions, but there is no such thing. They'll even turn it around and take Okta's limitations as existential--"if even Okta doesn't get it right, there is no way we could pull it off". Out of touch, or less politely, delusional.
Something you need to understand about executives is that they're not really individual God-like figures ruling the world; at the end of the day they answer to their CEO and their boards, and want to look good to executive recruiters who might consider them for a C-level role at a larger company for higher pay; and a good many of them lead not-so-affordable lifestyles to keep up appearances among the aforementioned folk and might be worse off in their personal finances than you.
All of which is just to say – "nobody got fired for buying IBM." It might be tragic, but going with peer consensus is what helps them stay with their in-crowd. The risks of departing from the herd (holding up deals on compliance concerns, possibly higher downtime for whatever reason, difficulty of hiring – the candidates who come cheaper are the ones who already know an Industry Standard Solution) are too high compared to the potential benefits (lower total cost of ownership, increased agility, better security/engineering quality, higher availability – assuming, for the sake of argument, that is actually the case), particularly when increased agility and better quality are difficult to quantify, higher availability is hard to prove (Okta and peers don't exactly publish their real availability figures), and the difference in TCO is not enough to move the needle.
It's very rare to find executives who care more about their company's engineering than their peer group - folks who care that much rarely become executives in the first place.
That said, a lot of these things are very well documented... there are plenty of self-hostable systems and options – open-source, paid, and combinations of the two.
I've worked on auth systems used in banking and govt applications, as well as integration with a number of platforms including Okta/Auth0. And while I can understand some of the appeal, it's just such a critical point of potential failure that I don't have that much trust in me.
I wish I could have open-sourced the auth platform I wrote a few years ago, as it is pretty simple in terms of both what it can do and how to set it up, configure it, and integrate it into applications. Most such systems are just excessively complex for very little reason, with no reasonably easy path.
Why on earth did I spend time creating a reproducible example?
That's what the critique is about: lack of communication and lack of acknowledgement. Ghosting people who took the time to file an issue/bug report, complete with a PoC and a test case, is just rude behavior.
(3 years later...)
Also some projects like the Linux kernel are just mirrors and would be better off with that functionality disabled.
They definitely don't want them if their process requires signed commits and their solution is 1) open another PR with the authors info then sign it for them, and 2) add AI into the mix because git is too hard I guess?
No matter how you slice it, it doesn't seem like there are Okta employees who want to be taking changes from third parties.
Mistakes happen; I guess this hurts his 'commits in a public repo' CV score.
What is your understanding of the license and rights the author was providing them? Understanding this, I can figure out where you are confused.
It would indeed be copyright violation to improperly attribute code changes. In this case I would absolutely say a force push is warranted, especially since most projects are leaning (potentially improperly) on Git metadata in order to fulfill legal obligations. (This project is MIT-licensed, but this is particularly true of Apache-licensed projects, which have some obligations that are surprising to people today.) A force push is not the end of the world. You can still generally disallow it, but an egregious copyright mistake in recent history is a pretty good justification. That or, literally, revert and re-add the commit with correct attribution. If you really feel this is asking too much, can you please explain why you think it's such a big problem? If it's such a pain, a good rule of thumb would be to not fuck this up regularly enough that it is a major concern when you have to break the glass.
Using Auth0 in apps, I find their documentation bafflingly difficult to read. It's not like being thrown in the deep end and expected to swim; it's like being injected at the bottom of the deep end. God help the poor non-native English speakers on my team who have to slog through it.
Maybe if enterprise sales decisions weren't made based on checklists and which account exec took them out on the best golf trip, we'd have better products.
This one is amusing, and as another comment mentioned below, large companies are awful at accepting patches on github. Most use one-way sync tools to push from their internal repositories to github.
https://dirkjanm.io/obtaining-global-admin-in-every-entra-id...
Auth0 really is super easy and comfortable to integrate and I don't want to run my own Keycloak or whatever.
Aren't they cheeky!
Thanks, I will try.
OIDC is not scary, and advanced central authorization features (beyond group memberships) are a big ole YAGNI / complexity trap.
Yes, you need someone to wear the IAM admin hat. But once you get it configured and running it requires 0.1 FTE or less (likely identical to whatever your Okta admin would be). Not worth 6+ figures a year and exposure to Okta breach risk.
Yes, creating a SAML integration is easy, but that's only one piece of the puzzle.
This isn't email.
Especially when the AI is being represented as a person, this to me is dishonest. Not to mention annoying – almost more so than the number of different apps that think they are important enough to send me push notifications to fill out a survey (don't even get me started).
There's no value in naming the employee. Whatever that employee did, if the company needed to figure out who it was, they can from the commit hashes, etc. But there's no value in the public knowing the employee's name.
Remember that if someone Googles this person for a newer job, it might show up. This is the sort of stuff that can disproportionately harm that person's ability to get a job in the future, even if they made a small mistake (they even apologized for it and were open about what caused it).
So no, it's completely unnecessary and irrelevant to the post.
Isn't that beneficial in this case?
Not to sound too harsh, but this is a person who rudely let AI perform a task badly which should have been handled by just… merging/rebasing the PR after confirming it does what it should do, then couldn't be bothered to reply and instead let the robot handle it, and then refused to fix the mess they made (making the apology void).
That's three strikes.
I'm sure lots won't, but if that is you as an employer you're worth nothing.
As a certified former newborn, I should say that finding the tit as a newborn is way harder, and yet here we all are.
"Struggling manfully," my arse, I don't know if the bar can go any lower...
That's the whole point; I sincerely hope it does. Why would anyone want to hire someone that delegates their core job to a slop generator?
So you'd rather the company get incomplete information about a candidate with hopes the candidate gets hired from a place of ignorance? If it's something the company would avoid hiring him for, then I don't find a problem with giving them the agency to make that decision for themselves.
On the one hand, you're right, it is distasteful, I completely agree. On the other hand, GitHub, Google, and the public internet aren't somebody's CV, where they can pick and choose which of their actions are publicised, tailored towards only their successes.
What does respect mean and how was it violated by this post?
I think you are far outside the mainstream of journalism norms and ethics and as such should bear the burden of explaining yourself further.
I think you're the one being disrespectful.
"You're absolutely right!" is the Claude cliche (not a ChatGPT one) - "You are absolutely correct." is not that.
> Yeah, i had to manually stop it and delete the ai-generated comment.
He's not fictitious I think.
It's really evident in situations like this, where you are looking for something specific. It seems like they all pushed too hard on the AI, and the results are tuned for averaged search queries. Using quotes and -term has become less helpful.
Conspiratorially, I wonder if this is intentional, to drive more traffic to AI. I find myself using Google Deep Search more, which is honestly a better UX – if only it would stop writing damn reports and just give me a brief with links. Alas, it ignores any instructions to change its output format.
These GitHub links are not open source projects; they are merely publicly readable software projects. You do not control any of it, and you have to deal with internal company politics like "# PRs opened" and "# bugs solved" for the developers' next performance review.
Dammit, things like this trigger a very strong rejection of actively adopting AI into my workflows. Not the AI tooling itself, but the absolutely irresponsible ways of using it. This is insane.
The bigger problem is trust. If an identity provider can’t reliably support mainstream frameworks, it undermines confidence in their entire platform. Developers end up spending more time debugging the SDK than building features.
This is why many of us lean toward smaller, well-maintained libraries (Auth.js, Supabase Auth, etc.). They don't try to abstract away everything, but they do the fundamentals well – and that's what matters most in security.
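For what it's worth, the "fundamentals" surface area really is small – here's a minimal Auth.js-style setup, sketched from memory of the v5 docs (treat it as a sketch and check the current API):

    // auth.js -- minimal Auth.js (NextAuth v5-style) configuration.
    import NextAuth from "next-auth";
    import GitHub from "next-auth/providers/github";

    export const { handlers, signIn, signOut, auth } = NextAuth({
      providers: [GitHub], // client ID/secret picked up from env vars
    });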
My conclusion has been: for social and email login, you don't need things like Auth0. Just write it yourself.
You need: session management, account management (you'd already have this), and some simple social login pathways (PKCE etc). If you're an experienced engineer and take the time to do it properly, it's totally fine to "roll your own auth". Things like Auth0 and Firebase Auth are built for nobody and make life more difficult.
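For instance, the PKCE part is genuinely small. A sketch of the two values involved (RFC 7636, S256 method) using Node's built-in crypto:

    // PKCE (RFC 7636): the verifier stays in the user's session;
    // the S256 challenge goes out with the authorization request.
    const crypto = require('crypto');

    const base64url = (buf) => buf.toString('base64')
      .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');

    const codeVerifier = base64url(crypto.randomBytes(32));
    const codeChallenge = base64url(
      crypto.createHash('sha256').update(codeVerifier).digest()
    );
    // Later, send codeVerifier with the token exchange so the
    // provider can check it against the challenge it saw earlier.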
Any SaaS service that saves you like <40 hours of implementation work is not worth buying into. Just put in the hours and you're set for life. It'll probably take you that many hours to wrangle with integrating it anyway (and when things get serious, you'll need to figure it out down to the bone anyway; auth is not something you can just plop in like a blackbox and forget about it). And if in the process of rolling it yourself you realize "oh shit the service is actually lifting a lot for me", then the time you spent on learning that lesson was also worth it and made you a better engineer.
Basically, don't cargo-cult things just because everyone says you should. You should feel the "aha" for why you need to introduce a 3rd party thing.