It also blew my mind that Google Cloud VPCs and autoscaling groups are global, so that you don’t have to jump through hoops and use the Global Accelerator service to architect a global application.
After learning just those two things I’m downright shocked that Google is still in 3rd place in this market. AWS could really use a refactor at this point.
I read a lot of horror stories about people getting into trouble with GCP and not being able to talk to a human, whereas with AWS you could at least get access to an actual person.
Things might have changed since then, but I suspect a lot of people still have this in the back of their minds.
Google’s poor support reputation is deserved, but I’m not sure I’d want to architect extra stuff around that issue. After I found out those facts about GCP, I was pretty sure its superior architecture could have given me six months of my professional life back.
"Yes accounts is a mess but they're what we have".
How is GCP much better? FWIW I use/evangelize GCP every day. Their IAM setup is just very naive and seems to have had things bolted on as an afterthought. AWS is much better designed and more future-proof.
Now it's weird in a dozen different ways, and it endlessly spews ridiculous results at you. It's like a gorgeous mansion from the 1900s, which received no upkeep. It's junk now.
For example, if I want to find new books by an author I've bought from before, I have to go to: returns & orders, digital orders, find the book and click it, then the author's name, all books, language->english, format->kindle, sort by->publication date.
There's no way to set defaults. No way to abridge the process. Mysteriously, you cannot click on the author's name in "returns & orders". It's simply quite lame.
Every aspect of Amazon is like this now. There are weird workflows throughout the site. It's living on inertia.
Your observations imply a root cause, but public information about Amazon’s corporate structure shows that AWS is almost a separate company from the retail website. The same is true for Google search vs. YouTube, or Apple’s hardware design vs. their iMessage group.
AWS has an organically evolved, bad product designed by a long line of six-page memos, but with human support in case things get too confusing or the customer just needs emotional support.
Listing metadata is hardly a security issue. The entire reason these List* APIs are distinct from Get* APIs is that they don’t give you access to the object itself, just metadata. And if you’re storing secret information in your bucket names, you have bigger problems.
Metadata about your account, regardless of if you call it “production” or not, is not guaranteed to be treated with the same level of sensitivity as other data. Your threat model should assume that things like bucket names, role names, and other metadata are already known by attackers (and in fact, most are, since many role names managed by AWS have default names common across accounts).
Just wanted to point out that it is not just names of objects in sensitive accounts exposed here - as I wrote, the spoke roles also have iam:ListRoles and iam:ListPolicies, which is IMO much more sensitive than just object names. These contain a whole lot of information about who is allowed to do what, and can point at serious misconfigurations that can then be exploited onwards (e.g. misconfigured role trust policies, or knowing about over-privileged roles to target).
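To make concrete why trust policy documents (which iam:ListRoles returns alongside role names) are more sensitive than plain object names, here is a minimal sketch of how an attacker (or defender) could scan them for the classic misconfiguration of an overly broad principal. The role data is hard-coded sample input shaped like ListRoles output, not a real API response; the function name is hypothetical.

```python
def find_risky_trust_policies(roles):
    """Flag roles whose trust policy allows any AWS principal to assume them.

    `roles` mimics the shape of iam:ListRoles output; here it is
    hard-coded sample data, not fetched from AWS.
    """
    risky = []
    for role in roles:
        doc = role["AssumeRolePolicyDocument"]
        for stmt in doc.get("Statement", []):
            if stmt.get("Effect") != "Allow":
                continue
            # Principal.AWS may be a single ARN string or a list of them
            aws_principals = stmt.get("Principal", {}).get("AWS", [])
            if isinstance(aws_principals, str):
                aws_principals = [aws_principals]
            if any(p == "*" for p in aws_principals):
                risky.append(role["RoleName"])
    return risky

sample_roles = [
    {"RoleName": "safe-role",
     "AssumeRolePolicyDocument": {"Statement": [
         {"Effect": "Allow",
          "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
          "Action": "sts:AssumeRole"}]}},
    {"RoleName": "anyone-can-assume",
     "AssumeRolePolicyDocument": {"Statement": [
         {"Effect": "Allow",
          "Principal": {"AWS": "*"},
          "Action": "sts:AssumeRole"}]}},
]

print(find_risky_trust_policies(sample_roles))  # → ['anyone-can-assume']
```

The point is that this kind of triage takes a dozen lines; handing an attacker the full set of trust policies hands them a ranked target list.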
Things like GetKeyPolicy do, but as I mentioned in my comments already, the contents of policies are not sensitive information, and your security model should assume they are already known by would-be attackers.
“My trust policy has a vulnerability in it but I’m safe because the attacker can’t read my policy to find out” is security by obscurity. And chances are, they do know about it, because you need to account for default policies or internal actors who have access to your code base anyway (and you are using IaC, right?)
You’re right to raise awareness about this because it is good to know about, but your blog hyperbolizes the severity of this. This world of “every blog post is a MAJOR security vulnerability” is causing the industry to think of security researchers as the boy who cried wolf.
I think what you mean to say is, "Amazon has decided not to treat the contents of security policies as sensitive information, and told its customers to act accordingly", which is a totally orthogonal claim.
It's extremely unlikely that every decision Amazon makes is the best one for security. This is an example of where it likely is not.
Just because Amazon tells people not to put sensitive information in a security policy, doesn't mean a security policy can't or shouldn't contain sensitive information. It more likely means Amazon failed to properly implement security policies (since they CAN contain sensitive information), and gives their guidance as an excuse/workaround. The proper move would be to properly implement security policies such that the access is as limited as expected, because again, security policies can contain sensitive information.
An analogy would be a car manufacturer that tells owners to not put anything in the car they don't want exploded. "But they said don't do it!" -- Obviously this is still unreasonable: A reasonable person would expect their car to not explode things inside it, just like a reasonable person would expect their cloud provider to treat customer security policies as sensitive data. Don't believe me here? Call up a sample of randomly-selected companies and ask for a copy of their security policies.
This is key to understand here: What Amazon says is best security given their existing decisions is not the best security for a cloud provider to provide customers. We're discussing the latter: Not security given a tool, but security of the tool itself, and the decisions that went into designing the tool. It's certainly not the case that the tool is perfect and can't be improved, and it's not a given that the tool is even good.
The goal in preventing enumeration isn't to hide defects in the security policy. The goal is to make it more difficult for attackers to determine what and how they need to attack to move closer to their target. Less information about what privileges a given user/role have = more noise from the attacker, and more dwell time, all other things being equal. Both of which increase the likelihood of detection prior to full compromise.
https://docs.aws.amazon.com/IAM/latest/APIReference/API_List...
I don’t think this is a major or severe issue, but it certainly would provide information for pivots, e.g., ARNs to request and information about from where.
For example, some US government agencies consider computer names sensitive, because the computer name can identify who works in what government role, which is very sensitive information. Yet, depending on context, the computer name can be considered "metadata."
There’s no inherent reason for treating metadata as less sensitive and there would be fewer problems if it were treated with the same sensitivity as normal data.
Said another way, some users expect metadata to be treated sensitively, and Amazon’s subversion of that expectation is an Amazon problem, not a user problem, since the expectation is rather reasonable.
longer if using the console
The majority of S3 buckets, especially valuable ones, were created back when that was the default, so the metadata sensitivity of bucket names remains (and that isn’t the only metadata issue).
You could figure out how a company names their S3 buckets. It's subtle, but you could create a bunch of typo'd variants of those buckets and sit around waiting for S3 server logs/CloudTrail to tell you when someone hits one of the objects.
When that happens, you could get the accessing AWS account number (which isn't inherently secret, but isn't something you'd want to broadcast to the world), the IAM user accessing the object, and which object access was attempted.
Say the IAM user is a role with a terribly insecure assume-role policy... Or one could put an object where the misconfigured service was looking and it'd maybe get processed.
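The typo-variant step above is trivial to automate. A minimal sketch, using a hypothetical bucket name and just two classes of typos (adjacent-character swaps and single-character deletions):

```python
def typo_variants(bucket_name):
    """Generate simple typo'd variants of a bucket name:
    adjacent-character swaps and single-character deletions."""
    variants = set()
    # adjacent swaps, e.g. "acme-prod-logs" -> "amce-prod-logs"
    for i in range(len(bucket_name) - 1):
        variants.add(bucket_name[:i] + bucket_name[i + 1]
                     + bucket_name[i] + bucket_name[i + 2:])
    # single deletions, e.g. "acme-prod-logs" -> "cme-prod-logs"
    for i in range(len(bucket_name)):
        variants.add(bucket_name[:i] + bucket_name[i + 1:])
    variants.discard(bucket_name)  # drop no-op swaps of equal characters
    return sorted(variants)

print(len(typo_variants("acme-prod-logs")))
```

An attacker would register whichever of these names are still free and then just watch the access logs, which is why bucket-name squatting is cheap to attempt.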
This kind of attack is preventable but I doubt most people are configuring SCPs to the level of detail you'd need to completely prevent this.
ISTR it’s also possible to apply an SCP that limits S3 reads and writes outside your organization. If not via an SCP then via a permission boundary at the least.
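For illustration, an SCP along those lines can use the `aws:ResourceOrgID` condition key to deny S3 reads and writes against buckets outside your organization. This is a sketch, not a drop-in policy: the org ID is a placeholder, and a real deployment would need carve-outs for legitimate cross-org sharing.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyS3OutsideOrg",
      "Effect": "Deny",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:ResourceOrgID": "o-exampleorgid"
        }
      }
    }
  ]
}
```

With this in place, a principal in your org tripping over a typo-squatted bucket gets an access-denied error instead of leaking its identity to the squatter's logs.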
"...Amazon S3 buckets are and always have been private by default. Only the bucket owner can access the bucket or choose to grant access to other users..."
The feature and announcement you linked were about enabling an additional safety feature that blocks buckets from becoming public, even if you intentionally (or accidentally) configure them with public access.
The well-known incidents in the past, of Facebook or the Pentagon having private data in public S3 buckets, I can only attribute to the modern practices of self-paced learning, skipping videos in Udemy courses, or deciding formal training is no longer necessary because "I can Google it"...
It's an Amazon problem to the extent that they lose business over it. But if people choose to use AWS, despite having different requirements for data security than AWS provides, that is a user problem. At some point the onus is on the user to understand what a tool does and doesn't do, and not choose a tool that doesn't meet their requirements.
This is false
- The account manager and the enterprise support TAM can view a list of all resources on the account, including metadata like resource name, instance type, and cost explorer tags. Enterprise support routinely presents a monthly cost review to us, so it is clear that they can always access this information without our explicit consent. They do not have the ability to view detailed internal information, though, such as internal logs.
- When opening a support case, the ticketing system asks for a resource ARN, which may contain the name. It seems that the support team can view some data about that object, including monitoring data and internal logs, but potentially accessing "customer data" (such as SSHing into an RDS instance) requires explicit, one-off consent.
- I never opened any issues about IAM policies, so I don't know whether they can see IAM role policy documents.
- It seems that the account ID and account name are also often used by both AWS' sales side and the reseller's side. I think I read somewhere that it is possible to retrieve the AWS account ID if you know an S3 bucket name or similar, and when exchanging data with an external partner via AWS (e.g. S3, VPC peering) you're required to give your account ID to the partner.
Make the computer name a random string or random set of words, no relation to the person or department who uses it. Problem solved.
Now you have to have another system that maps the random names back to human-usable ones. Is that information going to be stored all in one system? Is each team going to be responsible for the translation? How is it going to be protected from information loss?
I work with systems like this so, yea, it can be done. But it cannot be done trivially.
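A minimal sketch of the scheme the comments above describe: random, meaningless hostnames plus a separate, access-controlled registry that maps them back to owners. Everything here (the wordlist, the registry, the owner data) is made up for illustration.

```python
import secrets

WORDLIST = ["maple", "falcon", "granite", "harbor", "comet", "juniper"]

def random_hostname(rng=secrets):
    """Two random words plus a random hex suffix, e.g. 'falcon-comet-3f2a'.

    The name carries no information about its owner or department.
    """
    words = [WORDLIST[rng.randbelow(len(WORDLIST))] for _ in range(2)]
    return "-".join(words) + "-" + rng.token_hex(2)

# The translation table is the sensitive part: it must live in a
# separate, access-controlled system, which is exactly the operational
# burden the parent comment points out.
asset_registry = {}

def register(owner, department):
    name = random_hostname()
    while name in asset_registry:   # avoid rare collisions
        name = random_hostname()
    asset_registry[name] = {"owner": owner, "department": department}
    return name

host = register("j.doe", "treasury")
print(host, "->", asset_registry[host])
```

The hard part isn't the twenty lines above; it's backing up, replicating, and gating access to `asset_registry` without recreating the original leak, which is why "it can be done, but not trivially" rings true.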
- Ex-NSA chief Michael Hayden
Metadata is data. In a large corporation, metadata can also reveal projects under NDA that only a select few employees are supposed to know about.
Yeah, but the design should be made on the assumption that some customers will do stupid things, and should protect them anyway.
Not an identical case, but I once bought a Cisco router for a home lab/learning, and it appeared to be hardware decommissioned by a European bank, not wiped before being handed over to an asset-disposal contractor. It eventually landed on an auction portal with the bank's configuration intact. The bank had been very meticulous about documenting things like the address of the branch where the device was installed, and the ACL names/descriptions included employees' names and room numbers. You could easily extract the names of people granted extended access to internal systems.
So while I agree with you in principle, even financial institutions do stupid things, lack procedures, or fail to follow the procedures they have. A cloud provider's design should assume its customers are not following best practices.
In this case, I looked for a threat model within which this is a vulnerability, but was unable to find one.
Moreover, the issue wasn’t that AWS recommended or automatically set up the environment insecurely. Their documentation simply left implicit the commonly known best practice of disallowing trusts from lower environments into prod, rather than explicitly recommending that users follow it when deploying the solution.
I don’t think over-hyping smaller issues, handled appropriately, helps anyone.
AWS has a pretty simple model: when you split things into multiple accounts those accounts are 100% separate from each other (+/- provisioning capabilities from the root account).
The only way cross account stuff happens is if you explicitly configure resources in one account to allow access from another account.
If you want to create different subsets of accounts under your org with rules that say subset a (prod) shouldn’t be accessed by another subset (dev), then the onus for enforcing those rules are on you.
Those are YOUR abstractions, not AWS abstractions. To them, it’s all prod. Your “prod” accounts and your “dev” accounts all have the same prod SLAs and the same prod security requirements.
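To make "explicitly configure" concrete: cross-account access in AWS usually comes down to a role trust policy like the following sketch, where the account ID and external ID are placeholders. Absent a statement like this, one account simply cannot assume roles in another, which is the isolation the parent describes.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {"sts:ExternalId": "example-external-id"}
      }
    }
  ]
}
```

The dev/prod boundary only exists to the extent that you decline to write statements like this from dev principals into prod roles; AWS itself enforces nothing between your account subsets.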
The article talks about specific text in the AWS instructions:
“Hub stack - Deploy to any member account in your AWS Organization except the Organizations management account.”
They label this as a “major security risk” because the instructions didn’t say “make sure that your hub account doesn’t have any security vulnerabilities in it”.
AWS shouldn’t have to tell you that, and calling it a major security risk is dumb.
Finally, the access given is to be able to enumerate the names (and other minor metadata) of various resources and the contents of IAM policies.
None of those things are secret, and every dev should have access to them anyways. If you are using IAC, like terraform, all this data will be checked into GitHub and accessible by all devs.
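As a hypothetical Terraform fragment illustrating the point (resource names invented here), the role names, key aliases, and policy wiring all sit in source control in plain sight:

```hcl
# Role name is right there in git for every dev to see
resource "aws_iam_role" "app_prod" {
  name               = "app-prod-task-role"
  assume_role_policy = data.aws_iam_policy_document.app_trust.json
}

# So is the KMS key alias
resource "aws_kms_alias" "app_prod" {
  name          = "alias/app-prod-data"
  target_key_id = aws_kms_key.app_prod.key_id
}
```

Anyone with read access to the repo already has everything the List* APIs in question would disclose.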
Making it available from the dev account is not a big deal. Yes, it’s OK for devs to know the names of IAM roles, the names of encryption key aliases, and the contents of IAM policies. This isn’t even an information disclosure vulnerability.
It’s certainly not a “major risk”, and is definitely not a case of “an AWS cross account security tool introducing a cross account security risk”.
This was, at best, a mistake by an engineer that deployed something to “dev” that maybe should have been in “prod” (or even better in a “security tool” environment).
But the actual impact here is tiny.
The set of people with dev access should be limited to your devs, who should have access to source control, which should have all this data in it anyways.
Presumably dev doesn’t require multiple approvals for a human to assume a role, and probably doesn’t require a bastion (prod might have those controls), so perhaps someone who compromises a dev machine could get some prod metadata.
However someone who compromises a dev machine also has access to source control, so they could get all this metadata anyways.
The article is just sensationalism.
Imagine an incredibly secure castle. There are thick, unclimbable walls, moats, trap rooms, everything compartmentalized; an attacker that gains control of one section hasn't achieved much in terms of the whole castle, and the men in each section are carefully vetted and not allowed to have contact or family relationships with men stationed in other sections, so they cannot easily be bribed or forced to open doors. Everything is fine.
But the king is furious: the attackers shouldn't control any part of the castle, as a matter of principle! The architects reassure the king that everything is fine and there is no need to worry. The king is unconvinced, fires them, and searches for architects who will do his bidding. So the newly hired architects scramble and come up with secret hallways and tunnels connecting all parts of the castle, so the defenders can clear the building section by section. The special guards in charge of this get high privileges, so they can even fight attackers who reach the king's bedroom. The guard is also tasked with keeping in touch with the attackers, so they are extra prepared for when the attack comes and understand the enemy's mindset inside out.
The king is pleased; the castle is safe. Then one night, one of those guards turns against the king and sneaks the attackers into the castle. The enemy is suddenly everywhere, and they kill the king. A battle that should have been fought in stages going inwards is now fought from the inside out, and the defenders are trapped in the very places that were meant for the enemies they are fighting. The kingdom has fallen.
The problem with many security solutions (including AV solutions) is that you give the part of your system that comes into contact with the "enemy" the keys to your kingdom, usually with full, unchecked privileges (how else could it read everything going on in the system?). Actual security is the result of strict compartmentalization and careful, continuous vetting of how each section can be abused and leveraged once it has fallen. Just as in mechanical engineering, where each new moving part can add a new failure point, in security each new privileged thing adds a lot of attack surface that wasn't previously there. And if that attack surface gives you the keys to the kingdom, it isn't the security solution; it is the target.
Most "security tools" introduce security risks. An antivirus is usually a backdoor to your computer. So are various "endpoint protection" tools.
The whole security industry is a sham.