And I can see how, in very big accounts, small mistakes in your data sources when you're crunching data, or wrong routing, can put thousands and thousands of dollars on your bill in less than an hour.
--
0: https://blog.cloudflare.com/aws-egregious-egress/

By default a NGW is limited to 5 Gbps: https://docs.aws.amazon.com/vpc/latest/userguide/nat-gateway...
A GB transferred through a NGW is billed at $0.05.
So, at continuous max transfer speed, it would take almost 9 hours to reach $1000
Assuming a multi-AZ setup with three AZs, it's still 3 hours, and only if you've messed up badly enough to max out all three NGWs.
I get your point but the scale is a bit more nuanced than "thousands and thousands of dollars on your bill in less than an hour"
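A quick back-of-the-envelope check of that math, as a sketch using the figures quoted above (5 Gbps per NGW, $0.05/GB of data processing; these are the parent comment's assumptions, not official rates, which vary by region):

```python
# Sanity-check of the estimate above.
# Assumptions (from the parent comment, not official pricing):
#   - one NAT Gateway maxes out at 5 Gbps
#   - NAT Gateway data processing costs $0.05 per GB
GBPS_PER_NGW = 5.0
USD_PER_GB = 0.05
TARGET_USD = 1000.0

gb_per_hour = GBPS_PER_NGW / 8 * 3600           # ~2250 GB/hour at line rate
usd_per_hour = gb_per_hour * USD_PER_GB         # ~$112.50/hour per NGW

print(f"one NGW:    {TARGET_USD / usd_per_hour:.1f} h to reach $1000")        # ~8.9 h
print(f"three NGWs: {TARGET_USD / (3 * usd_per_hour):.1f} h to reach $1000")  # ~3.0 h
```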
The default limitations won't allow this.
Crucial for the approval was that we had cost alerts already enabled before it happened and were able to show that this didn't help at all, because they triggered way too late. We also had to explain in detail what measures we implemented to ensure that such a situation doesn't happen again.
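For context, the "cost alerts" mentioned here are usually AWS Budgets notifications. A minimal boto3 sketch is below (account ID, limit, threshold, and email are placeholders); the commenter's point still stands, since these fire on billing data that lags real usage by hours.

```python
import boto3

# Minimal sketch of a monthly cost alert via AWS Budgets.
# Account ID, budget limit, threshold, and email below are placeholders.
budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="123456789012",
    Budget={
        "BudgetName": "monthly-cost-alert",
        "BudgetLimit": {"Amount": "500", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,          # alert at 80% of the limit
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "ops@example.com"}
            ],
        }
    ],
)
```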
Hard no. Had to pay I think $100 for premium support to find that out.
- raw
- click-ops
Because, when you build your infra from scratch on AWS, you absolutely don't want the service gateways to exist by default. You want full control over everything, and that's how it works now. You don't want AWS to insert routes in your route tables on your behalf. Or worse, to have hidden routes that are used by default.
But I fully understand that some people don't want to be bothered by those technicalities and just want something that works and is optimized along the Well-Architected Framework pillars.
IIRC they already provide some CloudFormation Stacks that can do some of this for you, but it's still too technical and obscure.
Currently they probably rely on their partner network to help onboard new customers, but for small customers it doesn't make sense.
How does this actually work? So you upload your data to AWS S3 and then if you wish to get it back, you pay per GB of what you stored there?
You can see why, from a sales perspective: AWS' customers generally charge their customers for data they download - so they are extracting a % off that. And moreover, it makes migrating away from AWS quite expensive in a lot of circumstances.
Egress bandwidth costs money. Consumer cloud services bake it into a monthly price, and if you’re downloading too much, they throttle you. You can’t download unlimited terabytes from Google Drive. You’ll get a message that reads something like: “Quota exceeded, try again later.” — which also sucks if you happen to need your data from Drive.
AWS is not a consumer service so they make you think about the cost directly.
Sure you can just ram more connections through the lossy links from budget providers or use obscure protocols, but there's a real difference.
Whether it's fairly priced, I suspect not.
Though it's important to note that this specific case involved a misconfiguration that is easy to make and easy to misunderstand: the data was never meant to leave AWS services (and so should have been free to move), but because it went through the NAT Gateway, it did leave the AWS nest and was billed at a per-GB rate roughly an order of magnitude higher than just pulling everything straight out of S3/EC2 (generally speaking, YMMV depending on region, requests, total size, whether it's an expedited archival retrieval, etc.).
So this is an atypical case; it doesn't usually cost $1000 to pull 20 TB out of AWS. Still, this is an easy mistake to make.
We are programmed to receive. You can check out any time you like, but you can never leave
And people wonder why Cloudflare is so popular, when a random DDoS can decide to start inflicting costs like that on you.
I have never understood why the S3 endpoint isn't deployed by default, except to catch people making this exact mistake.
I was lucky to have experienced all of the same mistakes for free (ex-Amazon employee). My manager just got an email saying the costs had gone through the roof and asked me to look into it.
Feel bad for anyone that actually needs to cough up money for these dark patterns.
A paragraph later.
The solution is to create a VPC Gateway Endpoint for S3. This is a special type of VPC endpoint that creates a direct route from your VPC to S3, bypassing the NAT Gateway entirely.
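For concreteness, a gateway endpoint like the one described can be created in a few lines; here is a minimal boto3 sketch, with the VPC ID, route table ID, and region as placeholders. Gateway endpoints for S3 have no hourly or per-GB charge, which is why routing S3 traffic through one instead of the NAT Gateway removes the cost.

```python
import boto3

# Minimal sketch: create an S3 Gateway Endpoint so S3 traffic bypasses the NAT
# Gateway. The VPC ID, route table ID, and region below are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # AWS adds the S3 prefix-list route here
)
```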
They made a mistake and are sharing it for the whole world to see in order to help others avoid making it.
It's brave.
Unlike punching down.
And then writing “I regret it” posts that end up on HN.
Why are people not getting the message to not use AWS?
There are SO MANY other faster, cheaper, less complex, more reliable options, but people continue to use AWS. It makes no sense.
https://www.ionos.com/servers/cloud-vps
- $22/month for 18 months with a 3-year term
- 12 vCores CPU
- 24 GB RAM
- 720 GB NVMe
- Unlimited 1 Gbps traffic
And it’s always the same - clouds refuse to provide anything more than alerts (that are delayed) and your only option is prayer and begging for mercy.
Followed by people claiming with absolute certainty that it's literally, technically impossible to provide hard-capped accounts to tinkerers, despite such accounts already existing (some Azure accounts are hard-capped by amount, but of course that's not loudly advertised).
It's easier to waive cost overages than deal with any of that.
We are primarily using Hetzner for the self-serve version of Geocodio and have been a very happy customer for decades.
Sure, it decreases the time necessary to get something up and running, but the promises of cheaper/easier to manage/more reliable have turned out to be false. Instead of paying x on sysadmin salaries, you pay 5x to mega corps and you lose ownership of all your data and infrastructure.
I think it's bad for the environment, bad for industry practices and bad for wealth accumulation & inequality.
I uploaded a small xls with uid and prodid columns and then kind of forgot about it.
A few months later I got a note from the bank saying my account was overdrawn. The account is only used for freelancing work, which I wasn't doing at the time, so I never checked it.
Looks like AWS was charging me over 1K / month while the algo continuously worked on that bit of data that was uploaded one time. They charged until there was no money left.
That was about 5K in weekend earnings gone. Several months' worth of salary at my main job. That was a lot of money for me.
Few times I've felt so horrible.
Personally, I miss ephemeral storage - having the knowledge that if you start the server from a known good state, going back to that state is just a reboot away. Way back when I was in college, a lot of our big-box servers worked like this.
You can replicate this on AWS with snapshots, or by formatting the EBS volume into two partitions and just clearing the ephemeral part on reboot, but I've found it surprisingly hard to get it working with OverlayFS.
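Not necessarily how the commenter wired it up, but one common OverlayFS pattern for this: keep the known-good state as a read-only lower layer and put the writable upper/work dirs on something that is wiped at boot (tmpfs, or the "ephemeral" partition cleared by a boot script). A rough sketch with placeholder paths, run as root:

```python
import subprocess

# Sketch of an "ephemeral" overlay layout (all paths are placeholders).
# Lower layer = known-good state (kept read-only); upper/work live on tmpfs,
# so a reboot drops all changes. upperdir and workdir must be on the same
# filesystem, and this needs root.
LOWER = "/srv/base"        # known-good state, e.g. the first EBS partition
SCRATCH = "/srv/scratch"   # wiped every boot (tmpfs here)
MERGED = "/srv/merged"     # what applications actually see

subprocess.run(["mount", "-t", "tmpfs", "tmpfs", SCRATCH], check=True)
subprocess.run(["mkdir", "-p", f"{SCRATCH}/upper", f"{SCRATCH}/work", MERGED], check=True)
subprocess.run([
    "mount", "-t", "overlay", "overlay",
    "-o", f"lowerdir={LOWER},upperdir={SCRATCH}/upper,workdir={SCRATCH}/work",
    MERGED,
], check=True)
```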