I’m so old and dumb that I don’t even understand why an app for internal Microsoft use is even accessible from outside its network.
So everything "internal" is now also external and required to have its own layer of permissions and the like, making it much harder for an attacker (as in the article) to use one exploit to access another service.
VPN puts a user on the network and allows a bad actor to move laterally through the network.
In fact, I'd say it is a good defence-in-depth approach, which comes at the cost of increased complexity.
If there are tens of different services, is it more likely that one of them has a vulnerability than that both the VPN and a service do? And a vulnerability in the VPN alone does not matter if your internal network is built as if it were facing the public world. You might be able to patch it before a vulnerability in the other services is found.
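A back-of-the-envelope sketch of that layering argument, with purely illustrative numbers (the 5% figures are hypothetical, not real vulnerability rates), assuming independence between components:

```python
def p_any_service_exposed(p_service: float, n_services: int) -> float:
    """Probability that at least one of n directly exposed services
    has an exploitable vulnerability."""
    return 1 - (1 - p_service) ** n_services

def p_vpn_and_service(p_vpn: float, p_service: float, n_services: int) -> float:
    """Probability that the VPN is exploitable AND at least one service
    behind it is, assuming the events are independent."""
    return p_vpn * p_any_service_exposed(p_service, n_services)

# Illustrative: 20 services, each 5% likely to be exploitable in some window.
p_service, p_vpn, n = 0.05, 0.05, 20

direct = p_any_service_exposed(p_service, n)      # ~0.64
layered = p_vpn_and_service(p_vpn, p_service, n)  # ~0.03

assert layered < direct
```

Even with a mediocre VPN, requiring both layers to fail at once cuts the (toy-model) exposure by an order of magnitude, which is the commenter's point about patching windows.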
But I am saying that a VPN isn’t zero trust, by the agreed upon industry definition. There’s no way to make a VPN zero trust, and zero trust was created specifically to replace legacy VPNs.
Even here: Hacker News “should” support 2 factor authentication, being an online forum literally owned by a VC firm with tons of cash, but they don’t.
For accounts that actually mean something (Microsoft, Azure, banking, etc), yes, the more factors the better. For a lot of other apps, the extra security is occupying precious roadmap space[1]
1: I'm intentionally side-stepping the "but AI does everything autonomously" debate for the purpose of this discussion
Me: I didn't give the store website permission to save my credit card. If someone logs in, they'll know I ordered pants there.
HN allows for creating a user. HN requires every post and comment to be created by a user. HN displays the user for each post and comment. HN allows for browsing users' post and comment history. HN allows for flagging posts and comments, but only by users. HN allows for voting on posts and comments, but only by users. HN also has some baseline guardrails for fresh accounts. Very clearly, the concept of user accounts is central to the overall architecture of the site.
And you ask if it is in HN's interest to ensure people's user accounts remain in their control? As far as I can tell, literally every mutative action you can take on HN is bound to a user, with that covering all content submission actions. They even turn on captchas from time to time for combating bots. [0] How could it not be in their interest to ensure people can properly secure their user accounts?
And if I further extend this thinking, why even perform proper password practices at all (hashing and salting)? Heck, why even check passwords, or even have user accounts at all?
So in my thinking, this is not a reasonable question to ponder. What is, is that maybe the added friction of more elaborate security practices would deter users, or at least that's what [0] suggests to me. But then the importance of user account security or the benefit of 2FA really isn't even a question, it's accepted to be more secure, it's more a choice of giving up on it in favor of some perceived other rationale.
Let's look at some common attacks:
- Single user has their password compromised (e.g. by a keylogger). Here the impact to HN is minimal, the user may lose their account if they can't get through some kind of reset process to get access to it. MFA may protect against this, depending on the MFA type and the attacker.
- Attacker compromises HN service to get the password database. MFA's not really helping HN here at all and assuming that they're using good password storage processes the attacker probably isn't retrieving the passwords anyway.
- Attacker uses a supply chain attack to get MITM access to user data via code execution on HNs server(s). Here MFA isn't helping at all.
It's important to recognize that secure is not a binary state, it's a set of mitigations that can be applied to various risks. Not every site will want to use all of them.
Implementing mechanisms has a direct cost (development and maintenance of the mechanism) and also an indirect cost (friction for users), each service will decide whether a specific mitigation is worth it for them to implement on that basis.
Certainly, not doing anything will always be the more frugal option, and people are not trading on here, so financial losses of people are not a concern. The platform isn't monetized either. Considering finances is important, but reversing the arrow and using it as a definitive reason to not do something is not necessarily a good idea.
Regarding the threat scenarios, MFA would indeed help the most against credential reuse based attacks, or in cases of improper credential storage and leakage, but it would also help prevent account takeovers in cases of device compromise. Consider token theft leading to compromised HN user account and email for example - MFA involving an independent other factor would allow for recovery and prevent a complete hijack.
MFA can also add a support cost, where a user loses their MFA token. If you allow e-mail only reset, you lose some security benefits, if you use backup tokens, you run the risk that people don't store those securely/can't remember where they put them after a longer period.
As there's no major direct impact to HN that MFA would mitigate, the other question is, is there a reputational impact to consider?
I'd say the answer to that is no, in that all the users here seem fine with using the site in its current form :)
Other forum sites (e.g. reddit) do offer MFA, but I've never seen someone comment that they use reddit and not HN due to the relative availability of that feature, providing at least some indication that it's not a huge factor in people's decision to use a specific site.
"Well, let's build a list of attacks that I can think of off-the-cuff. And then let's iterate through that list of attacks: For each attack, let's build a list of 'useful' things that attackers could possibly want.
Since I'm the smartest and most creative person on the planet, and can also tell the future, my lists of ideas here will actually be complete. There's no way that any hacker could possibly be smart enough or weird enough to think of something different! And again, since I'm the smartest and most creative --and also, magically able to tell the future-- and since I can't think of anything that would be 'worth the cost', then this must be a complete proof as to why your security measure should be skipped!"
HN does not enforce anonymity, so the identity of some users (many startup owners btw) is tied to their real identities.
A compromised password could allow a bad actor to impersonate those users. That could be used to scam others or to kickstart some social engineering that could be used to compromise other systems.
The question was though, what are the consequences for HN, rather than individual users, as it's HN that would take the cost of implementation.
Now if a lot of prominent HN users start getting their passwords compromised and that leads to a hit on HNs reputation, you could easily see that tipping the balance in favour of implementing MFA, but (AFAIK at least) that hasn't happened.
Now ofc you might expect orgs to be pro-active about these things, but having seen companies that had actual financial data and transactions on the line drag their feet on MFA implementations in the past, I kind of don't expect that :)
Individual breaches don't really scale (e.g. device compromise, phishing, credential reuse, etc.), but at scale everything scales. At scale then, you get problems like hijacked accounts being used for spam and scams (e.g. you can spam in comment sections, or replace a user's contact info with something malicious), and sentiment manipulation (including vote manipulation, flagging manipulation, propaganda, etc.).
HN, compared to something like Reddit, is a fairly small scale operation. Its users are also more on the technically involved side. It makes sense then that due to the lesser velocity and unconventional userbase, they might still have this under control via other means, or can dynamically adjust to the challenge. But on its own, this is not a technical trait. There's no hard and fast rule to tell when they cross the boundary and get into the territory where adding manpower is less good than to just spend the days or weeks to implement better account controls.
I guess if I really needed to put this into some framework, I'd weigh the amount of time spent on chasing the aforementioned abuse vectors compared to the estimated time required to implement MFA. The forum has been operating for more than 18 years. I think they can find an argument there for spending even a whole 2 week sprint on implementing MFA, though obviously, I have no way of knowing.
And this is really turning the bean counting up to the maximum. I'm really surprised that one has to argue tooth and nail about the rationality of implementing basic account controls, like MFA, in the big 2025. Along with session management (the ability to review all past and current sessions, to retrieve an immutable activity log for them, and a way to clear all other active sessions), it should be the bare minimum these days. But then, even deleting users is not possible on here. And yes, I did read the FAQ entry about this [0]; it misses the point hard - deleting a user doesn't necessarily have to mean the deletion of their submissions, and no, not deleting submissions doesn't render the action useless, because as described, user hijacking can and I'm sure does happen. A disabled user account "wouldn't be possible" to hijack, however. I guess one could reasonably take issue with calling this user deletion though.
I don't, but the lack of changes in the basic functionality of the site over the years I've used it makes me feel that they may not have any/many full-time devs working on it...
But no, I do not have any information on their staffing situation. I presume you don't either though, do you?
GitHub Actions are a prime example. Azure's network, their compute, but I can cryptographically prove it's my repo (and my commit) OIDC-ing into my AWS account. But configuring a Warp client on those machines is some damn nonsense
If you're going to say "self hosted runners exist," yes, so does self-hosted GitHub and yet people get out of the self-hosted game because it eats into other valuable time that could be spent on product features
The way I see it, a VPN is just a network extender. Nothing to do with the design of on-premise software. By using a VPN as an additional layer, most vulnerability scanners can’t scan your services anymore. It reduces the likelihood that you are impacted immediately by some publicly known CVEs. That is the only purpose of the VPN here.
A VPN may also have vulnerabilities, but to have an impact, vulnerabilities in both the VPN and the service are required at the same time. The more different services/protocols you have behind the VPN, the more useful it is. It might not make sense if you only need SSH, for example: then you have a 1:1 protocol ratio, and SSH may be the more secure protocol.
So outside attackers have already been foiled, and insider threats have a million attack options anyway, what's one more? Go work on features that increase revenue instead.
In principle the idea of "zero trust" was to write your internal-facing webapps to the same high standards as your externally-facing code. You don't need the VPN, because you've fixed the many XSS bugs.
In practice zero trust at most companies means buying something extremely similar to a VPN.
But why stop there? If these apps are not required to be accessible from the public world, then by setting up a VPN an attacker needs to exploit both the VPN and the service to have an impact. Denial of a specific service is harder, and exploiting known CVEs is harder.
The other thing is most companies are not Google. If you're a global company with hundreds of thousands of people who need internal access, moats may be non-ideal. For a business located in one place, local-only on-premise systems which block access to any country which they don't actively do business with is leaps and bounds better.
It is easy for Google/Microsoft and any other FAANG like company to preach about Zero Trust when they have unlimited (for whatever value of unlimited you want to consider) resources. And even then they get it wrong sometimes.
The simpler alternative is to publish all your internal apps through a load balancer / API gateway with a static IP address, put it behind a VPN and call it a day.
Or just use Cognito. It can wrap up all the ugly Microsoft authentication into its basic OAuth, and API Gateway can use and verify Cognito tokens for you transparently. It's as close to the Zero Trust model as we could get in a Small Developer Shop.
Anyone who is on this forum is capable of building their own stuff, and running their own server, but that is not most people.
It seems that the fundamental issue surfaced in the blog post is that developers who work on authorization in resource servers are failing to check basic claims in tokens, such as the issuer, the audience, and the subject.
If your developers are behind this gross oversight, do you honestly expect an intranet to make a difference?
Listen, the underlying issue is not cloud vs self-hosted. The underlying issue is that security is hard, and in general there is no feedback loop except security incidents. Placing your apps in an intranet, or behind a VPN, does nothing to mitigate this issue.
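For concreteness, the baseline checks the post says were being skipped look roughly like this. The claim names (`iss`, `aud`, `sub`) come from the JWT/OIDC specs; the expected values below are hypothetical placeholders for whatever your resource server actually trusts, and signature/expiry verification is assumed to have happened already via a proper JWT library:

```python
# Hypothetical expected values for this resource server.
EXPECTED_ISSUER = "https://login.example.com/tenant-a/v2.0"
EXPECTED_AUDIENCE = "api://my-resource-server"

def validate_claims(claims: dict) -> bool:
    """Reject tokens whose issuer or audience doesn't match what this
    resource server expects, or that carry no subject at all."""
    if claims.get("iss") != EXPECTED_ISSUER:
        return False
    if claims.get("aud") != EXPECTED_AUDIENCE:
        return False
    if not claims.get("sub"):
        return False
    return True

# A token minted for a different audience must be rejected:
assert not validate_claims(
    {"iss": EXPECTED_ISSUER, "aud": "api://other", "sub": "u1"})
assert validate_claims(
    {"iss": EXPECTED_ISSUER, "aud": EXPECTED_AUDIENCE, "sub": "u1"})
```

Nothing here is exotic; it is exactly the kind of check that an intranet boundary tempts developers into skipping.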
This is true, tbh. Computer architecture is already hard enough, and cybersecurity is like a whole different field, especially if the system/program is complex.
For me, the core of the discovered issue was that applications intended purely for use by internal MS staff were discoverable and attackable by anyone on the Internet, and some of those applications had a mis-configuration that allowed them to be attacked.
If all those applications had been behind a decently configured VPN service which required MFA, any attacker who wanted to exploit them would first need access to that VPN, which is another hurdle to cross and would reduce the chance of exploitation.
With a target like MS (and indeed most targets of any value) you shouldn't rely solely on the security provided by a VPN, but it can provide another layer of defence.
For me the question should be, "is the additional security provided by the VPN layer justified against the costs of managing it, and potentially the additional attack surface introduced with the VPN".
That said yep corps over-complicate things and given the number of 0-days in enterprise VPN providers, it could easily be argued that they add more risk than they mitigate.
That's not to say a good VPN setup (or even allow-listing source IP address ranges) doesn't reduce exposure of otherwise Internet visible systems, reducing the likelihood of a mis-configuration or vulnerability being exploited...
I think the real problem is that these applications (Entra ID) are multi-tenant, rather than a dedicated single-tenant instance.
Here, we have critical identity information that is being stored and shared in the same database with other tenants (malicious attackers). This makes multi-tenancy violations common. Even if Entra ID had a robust mechanism to perform tenancy checks, i.e. that an object belongs to some tenant, there would still be vulnerabilities. For example, as you saw in the blog post, multi-tenant requests (requests that span >= 2 tenants) are fundamentally difficult to authorize. A single mistake can lead to complete compromise.
Compare this to a single-tenant app. First, the attacker would need to be authenticated as a user within your tenant. This makes pre-auth attacks more difficult.
(laid off) Microsoft PM here that worked on the patch described as a result of the research from Wiz.
One correction I’d like to suggest to the article: the guidance given is to check either the “iss” or “tid” claim when authorizing multi-tenant apps.
The actual recommended guidance we provided is slightly more involved. There is a chance that when only validating the tenant, any service principal could be granted authorized access.
You should always validate the subject in addition to validating the tenant for the token being authorized. One method for this would be to validate the token using a combined key (for example, tid+oid) or perform checks on both the tenant and subject before authorizing access. More info can be found here:
https://learn.microsoft.com/en-us/entra/identity-platform/cl...
Tenant, User, Group, Resource - validate it all before allowing it through.
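A minimal sketch of the guidance above: authorize on the combination of tenant ("tid") and object/subject ("oid"), never on the tenant alone. The allowlist and GUID-like strings below are hypothetical stand-ins for a real authorization store:

```python
# Hypothetical (tid, oid) pairs this service trusts.
AUTHORIZED_PRINCIPALS = {
    ("tenant-a-guid", "sp-1-guid"),
    ("tenant-a-guid", "user-7-guid"),
}

def is_authorized(claims: dict) -> bool:
    """Check the combined key. A token from the right tenant but an
    unknown principal is still rejected, closing the 'any service
    principal in the tenant gets in' hole described above."""
    return (claims.get("tid"), claims.get("oid")) in AUTHORIZED_PRINCIPALS

# Right tenant, wrong principal: denied.
assert not is_authorized({"tid": "tenant-a-guid", "oid": "sp-evil-guid"})
# Known tenant+principal pair: allowed.
assert is_authorized({"tid": "tenant-a-guid", "oid": "sp-1-guid"})
```

The point of the combined key is that neither claim is sufficient on its own: `tid` without `oid` admits every principal in the tenant, and `oid` without `tid` can collide across tenants.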
make sure anything done in a session can be undone as part of sanitizing the user
In my own experience:
- Leaked service principal credentials granting access to their tenant? $0 bounty.
- Leaked employee credentials granting access to generate privileged tokens? $0 bounty.
- Access to private source code? $0 bounty.
Etc.
Should be a major public reckoning over this. But there can't be, they hold the cards, the only real view of this you'd have is day-to-day on Blind and some occasional posts that stir honest discussion here.
I guess we just get to grin and bear it while they give gold statues and millions to the right politicians.
Have they already gotten so drunk on "zero trust" that they don't think it should matter if attackers see their source code? Then again, they are open-sourcing a ton of stuff these days...
Their SECURITY.md mentions bug bounties, yet if your submission has anything to do with GitHub it's immediately disqualified. They refuse to remove that (in my opinion) misleading language.
This is what I like about actual safety culture, like you would find in aviation, _all causes_ are to be investigated, all the way back to the shape, size and position of the switches on the flight deck.
It's difficult to take Microsoft's stance seriously. It makes the prices for their "service" seem completely unjustifiable.
This means that a lot of genuine bug bounty hunters just won't look at MS stuff, and MS avoids getting things fixed; instead, other attackers will be the ones finding things, and they likely won't report them to MS...
Obviously nobody with power cares about security in Microsoft's Azure branch. Why does anyone continue trusting them? (I mean, I know that Azure is not something you buy by choice; you do it because you got a good deal on it or were a Microsoft shop before, but still.)
I mean, it's an additional layer.
Defense-in-depth is about having multiple.
I think the opposite problem can be the case: people think that something inside a VPN is now secure and we don't have to worry too much about it.
Add the fact that MSAL doesn’t work for stuff like browser extensions, so people have to implement their own security solutions to interact with Entra ID and it’s not surprising there are so many issues.
There's one very annoying "feature" where I would like to say this app is available for the following tenants. No: only "my tenant" or "all tenants in Azure".
One workaround I use is to set up apps with "only this tenant" and invite users from other tenants into my tenant. The other approach is to say "all tenants" and then use a group to enforce who can actually use the app.
I don't know if there are reasons behind this limitation, or if it's just an oversight, or no client big enough has asked for this feature.
Generally, you should say "only this tenant" unless you're a SaaS provider. And if you're a SaaS provider, you should really already understand the need to keep your various customers data separate.
[1] https://learn.microsoft.com/en-us/entra/external-id/cross-te...
For various reasons, we are not allowed to store personal information like that.
I need to be able to accept users from tenant A and from tenant B. I need to know to which tenant they belong, but NOT any other information such as name or email address.
This is currently not possible at all in Entra ID. The only option is allowing all tenants and manually roll auth to whitelist certain ones to actually continue calling APIs.
It’s completely moronic of Microsoft
To make things even worse, users of DIFFERENT tenants get stored TOGETHER in your external ID tenant.
In various situations it’s illegal or against contracts to have data of different companies in the same database.
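A minimal sketch of the manual workaround described above: accept "all tenants" at the identity-provider level, then enforce your own tenant allowlist before doing anything else, without ever reading personal claims like name or email. The tenant identifiers are hypothetical placeholders:

```python
# Hypothetical tenants we have contracts with.
ALLOWED_TENANTS = {"tenant-a-guid", "tenant-b-guid"}

def tenant_of(claims: dict):
    """Return the caller's tenant ID if it is on the allowlist, else None.
    Deliberately ignores all other claims, so no personal information
    (name, email, etc.) is ever stored or even inspected."""
    tid = claims.get("tid")
    return tid if tid in ALLOWED_TENANTS else None

# Allowed tenant: we learn only the tenant, nothing else.
assert tenant_of({"tid": "tenant-a-guid", "name": "ignored"}) == "tenant-a-guid"
# Unknown tenant: rejected before any API calls happen.
assert tenant_of({"tid": "tenant-evil-guid"}) is None
```

It works, but as the commenter says, it's rolling by hand something the IdP arguably should offer as configuration.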
Azure has another option called a B2C tenant (they're now renaming it to something like Entra External ID) which is designed to work as a user database for things like customers/clients. The idea is to use this service as an alternative to developing your own classic MySQL + $whatever framework for authentication.
If you invite an external user that already exists in another Microsoft Azure tenant, you only know their user principal and first/last name. Nothing else. All other info does not get populated into your tenant even if it exists in the source tenant.
* No idea why the rename happened. Does some manager in Microsoft have the plaque: "Renomino, ergo sum."?
Edit: I would add - simple allow/deny authz is only relevant for the very simplest of apps (where all users have the same permissions). For any complex application, users will have different levels of access, which usually requires the application to do AuthZ.
[1] https://learn.microsoft.com/en-us/entra/identity/enterprise-...
Any application when AuthZ isn't simply yes/no, which rather quickly is just about all of them (even simple blogs have an admin tier), except for a very heavily microservice based architecture - where one would still want to have a much more convenient interface than Entra to see/manage the access permissions centrally... Entra AuthZ is at best a temporary development aid, but it's so easy to roll AuthZ yourself one might as well do it.
I notice that it requires the tool to be pulled from NuGet. While it looks like you could enter any package and NuGet source, I would be very surprised if there wasn’t a locked down whitelist of allowed sources (limited to internal Microsoft NuGet feeds).
Locking down NuGet packages was one of the primary things we (the Windows Engineering System team) were heavily focusing on when I left years ago. We were explicitly prevented from using public NuGet packages at all. We had to repackage them and upload them to the internal source to be used.
Their solution to this will be to add even more documentation, as if anyone had the stomach to read through the spaghetti that exist today.
Entra treats such requests as an OpenID Connect OAuth hybrid. The ID token is as specified under OpenID Connect, but the access token is as expected from OAuth. In practice, these are the tokens most people want. The UserInfo endpoint is stupid - you can get all that information in the ID token without an extra round trip.
(I work on Entra) Can you point me to the documentation for this? This statement is not correct. The WithExtraScopesToConsent method (https://learn.microsoft.com/en-us/dotnet/api/microsoft.ident...) exists for this purpose. An Entra client can call the interactive endpoint (/authorize) with scope=openid $clientid/.default $client2/.default $client3/.default - multiple resource servers - as long as it specifies exactly one of those resource servers on the non-interactive endpoint (/token), i.e. scope=openid $clientid/.default. In the language of Microsoft.Identity.Client (MSAL), that's .WithScopes("$clientid/.default").WithExtraScopesToConsent("$client2/.default $client3/.default"). This pattern is useful when your app needs to access multiple resources and you want the user to resolve all relevant permission or MFA prompts up front.
It is true that an access token can only target a single resource server - but it should be possible to go through the first leg of the authorization code flow for many resources, and then the second leg of the authorization code flow for a single resource, followed by refresh token flows for the remaining resources.
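Sketching that two-leg pattern concretely: list every resource's `.default` scope on the interactive /authorize request, but exactly one resource per /token request. All client and resource identifiers below are hypothetical placeholders, and this builds only the query strings, not a full OAuth client:

```python
from urllib.parse import parse_qs, urlencode

def authorize_params(client_id: str, resources: list) -> str:
    """Interactive leg: request consent for ALL resources up front."""
    scopes = ["openid"] + [f"{r}/.default" for r in resources]
    return urlencode({"client_id": client_id,
                      "scope": " ".join(scopes),
                      "response_type": "code"})

def token_params(client_id: str, code: str, resource: str) -> str:
    """Non-interactive leg: redeem the code for exactly ONE resource;
    the remaining resources are fetched via refresh-token flows."""
    return urlencode({"client_id": client_id,
                      "code": code,
                      "grant_type": "authorization_code",
                      "scope": f"openid {resource}/.default"})

q = parse_qs(authorize_params("client-1", ["res-a", "res-b", "res-c"]))
assert q["scope"] == ["openid res-a/.default res-b/.default res-c/.default"]
```

The asymmetry mirrors the comment above: consent and MFA prompts are batched on the interactive leg, while each access token still targets a single resource server.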
This was not new news. AFAIK from the article, they just patched the Microsoft tools, but they won't be pushing it tenant-wide for all orgs.
Is this about dotnet/runtime?
Absolutely. It'd be hilarious if it weren't sad.
Okta (the other elephant in the room) has its own issues but at least it has decent documentation and even though it’s more expensive I think it’s worth paying that price just to keep security in a separate domain than co-mingle it with other Azure services.
I recently built an SSO login using Entra ID (which was thankfully single-tenant) and I basically had to keep randomly stabbing in the dark until I got it to work with the correct scopes and extra fields returned with the access token.
Trying to search for any kind of Getting started guide just took me to child pages several levels deep full of incomprehensible Microsoft jargon and hyperlinks to helpful-sounding but ultimately similarly useless articles.
x-hacker: Want root? Visit join.a8c.com and mention this header
x-nananana: Batcache-Hit
Popular CDNs require SNI but do not offer a solution for plaintext domain names on the wire. (ECH exists but is not enabled everywhere SNI is required.)
Meanwhile, WordPress hosts multiple HTTPS sites on the same IP and does not require SNI.
(No plaintext domain names on the wire.)
This totally has no foreseeable potential consequences. It would be a real shame if some foreign hostile government with nuclear weapons managed to connect MS Account, LinkedIn Profile, and OpenAI accounts together by shared emails and phone numbers. Is it really worth starting a war for the crime of depantsing the nation?
This is "It'll be safe if we leave it on the intranet," and then someone says "Zero trust!" and then all of a sudden things that had authentication on the inside are also going through a new and different layer of authentication. A stack of totally reasonable expectations piles tolerance on tolerance, and just like the Sig Sauer P320, it has a habit of shooting you in the foot when you least expect it.