It also blew my mind that Google Cloud VPCs and autoscaling groups are global, so that you don’t have to jump through hoops and use the Global Accelerator service to architect a global application.
After learning just those two things I’m downright shocked that Google is still in 3rd place in this market. AWS could really use a refactor at this point.
I read a lot of horror stories about people getting into trouble with GCP and not being able to talk to a human, whereas with AWS you would get access to an actual person.
Things might have changed since, but I suspect a lot of people still have this in the back of their minds.
Google’s poor support reputation is deserved, but I’m not sure I’d want to architect extra stuff around that issue. After I found out those facts about GCP, I was pretty sure its superior architecture could have given me back six months of my professional life.
"Yes accounts is a mess but they're what we have".
How is GCP much better? FWIW I use/evangelize GCP every day. Their IAM setup is just very naive and seems like it has had things bolted on as an afterthought. AWS is much better designed and future-proof.
Now it's weird in a dozen different ways, and it endlessly spews ridiculous results at you. It's like a gorgeous mansion from the 1900s that has received no upkeep. It's junk now.
For example, if I want to find new books by an author I've bought from before, I have to go to: returns & orders, digital orders, find book and click, then author's name, all books, language->english, format->kindle, sort by->publication date.
There's no way to set defaults. No way to abridge the process. Mysteriously, you cannot click on the author's name in "returns & orders". It's simply quite lame.
Every aspect of Amazon is like this now. There are weird workflows throughout the site. It's living on inertia.
AWS has an organically evolved bad product, designed by a long line of six-page memos, but with human support in case things get too confusing or the customer just needs emotional support.
Listing metadata is hardly a security issue. The entire reason these List* APIs are distinct from Get* APIs is that they don’t give you access to the object itself, just metadata. And if you’re storing secret information in your bucket names, you have bigger problems.
Metadata about your account, regardless of whether you call it “production” or not, is not guaranteed to be treated with the same level of sensitivity as other data. Your threat model should assume that things like bucket names, role names, and other metadata are already known by attackers (and in fact, most are, since many roles managed by AWS have default names common across accounts).
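To make the distinction concrete, a rough boto3 sketch (the bucket name and key are made up for illustration): the List* side hands back names and metadata, while the Get* side returns the data itself and is gated by separate permissions.

```python
import boto3

s3 = boto3.client("s3")
iam = boto3.client("iam")

# List* APIs: names and metadata only
# (gated by s3:ListAllMyBuckets / iam:ListRoles)
bucket_names = [b["Name"] for b in s3.list_buckets()["Buckets"]]
role_names = [r["RoleName"] for r in iam.list_roles()["Roles"]]

# Get* APIs: the actual data, gated separately (s3:GetObject here);
# "prod-billing-exports" and the key are hypothetical names
obj = s3.get_object(Bucket="prod-billing-exports", Key="2024/invoices.csv")
```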
Just wanted to point out that it is not just names of objects in sensitive accounts exposed here - as I wrote, the spoke roles also have iam:ListRoles and iam:ListPolicies, which is IMO much more sensitive than just object names. These contain a whole lot of information about who is allowed to do what, and can point at serious misconfigurations that can then be exploited onwards (e.g. misconfigured role trust policies, or knowing about over-privileged roles to target).
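For a sense of what that exposes, a minimal boto3 sketch of what a principal holding just iam:ListRoles can already learn (the over-trust check here is deliberately crude):

```python
import boto3

iam = boto3.client("iam")

# list_roles returns each role's trust policy inline, not just its name,
# so iam:ListRoles alone reveals who is allowed to assume what
for page in iam.get_paginator("list_roles").paginate():
    for role in page["Roles"]:
        trust = role["AssumeRolePolicyDocument"]
        for stmt in trust.get("Statement", []):
            principal = stmt.get("Principal", {})
            # crude check: a wildcard principal is an onward attack path
            if principal == "*" or (
                isinstance(principal, dict) and principal.get("AWS") == "*"
            ):
                print("over-trusting role:", role["RoleName"])
```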
Things like GetKeyPolicy do, but as I mentioned in my comments already, the contents of policies are not sensitive information, and your security model should assume they are already known by would-be attackers.
“My trust policy has a vulnerability in it but I’m safe because the attacker can’t read my policy to find out” is security by obscurity. And chances are, they do know about it, because you need to account for default policies or internal actors who have access to your code base anyway (and you are using IaC, right?)
You’re right to raise awareness about this because it is good to know about, but your blog hyperbolizes the severity of this. This world of “every blog post is a MAJOR security vulnerability” is causing the industry to think of security researchers as the boy who cried wolf.
I think what you mean to say is, "Amazon has decided not to treat the contents of security policies as sensitive information, and told its customers to act accordingly", which is a totally orthogonal claim.
It's extremely unlikely that every decision Amazon makes is the best one for security. This is an example of where it likely is not.
Just because Amazon tells people not to put sensitive information in a security policy doesn't mean a security policy can't or won't contain sensitive information. It more likely means Amazon failed to implement security policies properly (since they CAN contain sensitive information) and offers this guidance as an excuse/workaround. The proper move would be to implement security policies such that access is as limited as expected, because, again, security policies can contain sensitive information.
An analogy would be a car manufacturer that tells owners to not put anything in the car they don't want exploded. "But they said don't do it!" -- Obviously this is still unreasonable: A reasonable person would expect their car to not explode things inside it, just like a reasonable person would expect their cloud provider to treat customer security policies as sensitive data. Don't believe me here? Call up a sample of randomly-selected companies and ask for a copy of their security policies.
This is key to understand here: What Amazon says is best security given their existing decisions is not the best security for a cloud provider to provide customers. We're discussing the latter: Not security given a tool, but security of the tool itself, and the decisions that went into designing the tool. It's certainly not the case that the tool is perfect and can't be improved, and it's not a given that the tool is even good.
The goal in preventing enumeration isn't to hide defects in the security policy. The goal is to make it more difficult for attackers to determine what and how they need to attack to move closer to their target. Less information about what privileges a given user/role has = more noise from the attacker and more dwell time, all other things being equal. Both increase the likelihood of detection prior to full compromise.
https://docs.aws.amazon.com/IAM/latest/APIReference/API_List...
I don’t think this is a major or severe issue, but it certainly would provide information for pivots, e.g., which ARNs to request and from where to request them.
For example, some US government agencies consider computer names sensitive, because the computer name can identify who works in what government role, which is very sensitive information. Yet, depending on context, the computer name can be considered "metadata."
There’s no inherent reason for treating metadata as less sensitive and there would be fewer problems if it were treated with the same sensitivity as normal data.
Said another way, some users expect metadata to be treated sensitively, and Amazon’s subversion of this expectation is an Amazon problem, not a user problem, since the expectation is rather reasonable.
longer if using the console
The majority of S3 buckets, especially valuable ones, were created back when that was the default, so the metadata sensitivity of bucket names remains an issue (and that isn’t the only metadata issue).
You could figure out how a company names their S3 buckets. It's subtle, but you could create a bunch of typo'd variants of those buckets and sit around waiting for S3 server access logs/CloudTrail to tell you when someone hits one of the objects.
When that happens, you could get the accessing AWS account number (which isn't inherently private, but is something you wouldn't want to tell the world about), the IAM user accessing the object, and which object they attempted to access.
Say the IAM user is a role with a terribly insecure assume-role policy... Or one could put an object where the misconfigured service was looking and it'd maybe get processed.
This kind of attack is preventable but I doubt most people are configuring SCPs to the level of detail you'd need to completely prevent this.
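For what it's worth, an SCP built on the aws:ResourceOrgID condition key can close the typo'd-bucket trap by denying S3 calls to any bucket outside your own organization. A sketch (untested; the policy name and org ID are placeholders):

```python
import json

import boto3

orgs = boto3.client("organizations")

# Deny S3 access from org principals to buckets your org doesn't own.
# aws:ResourceOrgID is absent for buckets outside any organization, and
# StringNotEquals evaluates true for a missing key, so typo'd buckets
# sitting in an attacker's account are denied as well.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyS3OutsideOrg",
            "Effect": "Deny",
            "Action": "s3:*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:ResourceOrgID": "o-xxxxxxxxxx"}  # your org ID
            },
        }
    ],
}

orgs.create_policy(
    Name="deny-s3-outside-org",  # hypothetical name
    Description="Block S3 calls to buckets outside the organization",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
```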
It's an Amazon problem to the extent that they lose business over it. But if people choose to use AWS, despite having different requirements for data security than AWS provides, that is a user problem. At some point the onus is on the user to understand what a tool does and doesn't do, and not choose a tool that doesn't meet their requirements.
Make the computer name a random string or random set of words, no relation to the person or department who uses it. Problem solved.
In this case, I was looking for a threat model within which this is a vulnerability, but was unable to find one.
Moreover, the issue wasn’t that AWS recommended or automatically set up the environment insecurely. Their documentation simply left the commonly known best practice of disallowing trusts from lower environments to prod implicit, rather than explicitly recommending that users follow it when using the solution.
I don’t think over-hyping smaller issues, handled appropriately, helps anyone.
AWS has a pretty simple model: when you split things into multiple accounts those accounts are 100% separate from each other (+/- provisioning capabilities from the root account).
The only way cross account stuff happens is if you explicitly configure resources in one account to allow access from another account.
If you want to create different subsets of accounts under your org with rules that say subset A (prod) shouldn’t be accessed by another subset (dev), then the onus for enforcing those rules is on you.
Those are YOUR abstractions, not AWS abstractions. To them, it’s all prod. Your “prod” accounts and your “dev” accounts all have the same prod SLAs and the same prod security requirements.
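To spell out what “explicitly configure” means: something like the following is the only way a dev-account principal gets into prod (a sketch; the account ID and role name are placeholders).

```python
import json

import boto3

iam = boto3.client("iam")

# Cross-account access exists only because this trust policy names the
# other account explicitly; remove the statement and the access is gone.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:root"},  # the dev account
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(
    RoleName="dev-to-prod-readonly",  # hypothetical
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
```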
The article talks about specific text in the AWS instructions:
“Hub stack - Deploy to any member account in your AWS Organization except the Organizations management account."
They label this as a “major security risk” because the instructions didn’t say “make sure that your hub account doesn’t have any security vulnerabilities in it”.
AWS shouldn’t have to tell you that, and calling it a major security risk is dumb.
Finally, the access given is to be able to enumerate the names (and other minor metadata) of various resources and the contents of IAM policies.
None of those things are secret, and they are things that every dev should have access to anyway. If you are using IaC, like Terraform, all this data will be checked into GitHub and accessible to all devs.
Making it available from the dev account is not a big deal. Yes, it’s OK for devs to know the names of IAM roles, the names of encryption key aliases, and the contents of IAM policies. This isn’t even an information disclosure vulnerability.
It’s certainly not a “major risk”, and is definitely not the case of “an AWS cross account security tool introducing a cross account security risk”.
This was, at best, a mistake by an engineer who deployed something to “dev” that maybe should have been in “prod” (or, even better, in a “security tool” environment).
But the actual impact here is tiny.
The set of people with dev access should be limited to your devs, who should have access to source control, which should have all this data in it anyway.
So, presumably dev doesn’t require multiple approvals for a human to assume a role, and probably doesn’t require a bastion (and prod might have those controls), so perhaps someone who compromises a dev machine could get some prod metadata.
However, someone who compromises a dev machine also has access to source control, so they could get all this metadata anyway.
The article is just sensationalism.