AWS Restored My Account: The Human Who Made the Difference

https://www.seuros.com/blog/aws-restored-account-plot-twist/
64•mhuot•1h ago
Related: AWS deleted my 10-year account and all data without warning - https://news.ycombinator.com/item?id=44770250 - Aug 2025 (152 comments)

Comments

mhuot•1h ago
Follow-up to the AWS account deletion story from last week. The author's data was recovered after an AWS employee escalated to VP level, and it turned out the instances were just stopped, not terminated.
jacquesm•1h ago
You got very lucky.
sangeeth96•1h ago
Related:

https://news.ycombinator.com/item?id=44770250

Ezhik•1h ago
I hope the author follows up with how they build out a backup system. No snark intended - it really does seem to me like an interesting set of articles, especially for others who might be in a similar situation, keeping everything with one cloud provider.
svt1234•1h ago
I don't use AWS but I have a server on Linode and I realized that I am in the same situation should Linode go away, even though I tend to have a pretty rigorous backup routine elsewhere. It is something you can easily miss.

I am inspired now to dump my databases and rsync the content on a schedule.
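A minimal sketch of that kind of dump-and-rsync routine, assuming PostgreSQL and an off-site host reachable over SSH; the database name, paths, and hostname below are placeholders, not anything from this thread:

    #!/usr/bin/env python3
    # Nightly dump-and-sync sketch: pg_dump a database, then rsync the dump
    # directory to an off-site host. Hostnames, paths, and the database name
    # are placeholders -- adjust to your own setup and run it from cron.
    import datetime
    import pathlib
    import subprocess

    DB_NAME = "appdb"                               # hypothetical database
    DUMP_DIR = pathlib.Path("/var/backups/db")      # local staging directory
    REMOTE = "backup@offsite.example.net:backups/"  # any non-AWS box

    def main() -> None:
        DUMP_DIR.mkdir(parents=True, exist_ok=True)
        stamp = datetime.date.today().isoformat()
        dump_file = DUMP_DIR / f"{DB_NAME}-{stamp}.dump"

        # Custom-format dump is compressed and supports selective restore.
        subprocess.run(
            ["pg_dump", "--format=custom", f"--file={dump_file}", DB_NAME],
            check=True,
        )

        # Push the whole staging directory off-site; --delete keeps the
        # remote copy in step with local retention.
        subprocess.run(
            ["rsync", "-az", "--delete", f"{DUMP_DIR}/", REMOTE],
            check=True,
        )

    if __name__ == "__main__":
        main()

Wire it into cron and test a restore once in a while; an untested backup is closer to a hope than a backup.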

tetha•53m ago
Within Europe, and if you're not at Hetzner already, Hetzner Storage Boxes[1] are very competitive: 4 euros per month for a terabyte. Throw borg[2] with compression and deduplication at it and it'll take you far. Especially if your databases aren't changing much, borg can make that terabyte of storage go a lot further.

1: https://www.hetzner.com/storage/storage-box/

2: https://www.borgbackup.org/
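A rough sketch of that Storage Box + borg routine, under assumptions: the repository URL, source paths, and retention policy are placeholders, the repository already exists (created with `borg init`), BORG_PASSPHRASE comes from the environment, and the port-23 SSH endpoint is the one Hetzner documents for Storage Boxes.

    #!/usr/bin/env python3
    # Borg backup sketch for an SSH-reachable repository (e.g. a Hetzner
    # Storage Box). The repo URL and source paths are placeholders; the repo
    # must already exist (`borg init --encryption=repokey ...`) and
    # BORG_PASSPHRASE is expected to be set by cron or a secrets manager.
    import subprocess

    REPO = "ssh://u123456@u123456.your-storagebox.de:23/./backups"  # placeholder
    SOURCES = ["/var/backups/db", "/etc", "/home"]

    def run(args: list[str]) -> None:
        subprocess.run(args, check=True)

    def main() -> None:
        # Deduplicated, zstd-compressed archive named host-timestamp;
        # {hostname} and {now} are expanded by borg itself.
        run([
            "borg", "create", "--stats", "--compression", "zstd",
            REPO + "::{hostname}-{now}", *SOURCES,
        ])
        # Keep a rolling window of archives so the terabyte goes further.
        run([
            "borg", "prune",
            "--keep-daily", "7", "--keep-weekly", "4", "--keep-monthly", "6",
            REPO,
        ])

    if __name__ == "__main__":
        main()

Deduplication means each daily archive mostly costs only what actually changed, which is what makes the 1 TB box stretch so far.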

webstrand•1h ago
Honestly, it's pretty worrying. I don't understand how other people can talk about "you have to have multiple offsite backups" when each offsite copy runs about $700 per year for a relatively small amount of data. I can only afford one off-site backup, and if my local copy dies it'll take a couple of weeks for me to restore from AWS because of local internet speed limitations.
dylan604•1h ago
Offsite does not have to mean another cloud solution. If you have a copy of data on a thumbdrive or other external media, you can leave it at your parents' house (or another family member's). You can keep a copy of it in a safety deposit box. If it's at a family member's house, it's just an excuse to visit on a semi-regular schedule as you swap it out for an updated version.
pavel_lishin•57m ago
Yep. My backups - personal ones, not business - live in three places:

1. A hard drive in a fire safe.

2. An S3-compatible bucket hosted by Wasabi.

3. My friend's server that lives at his house half a continent away.

It would be nice to have a fourth location: a physical hard drive that lives outside of my house but close enough to drive to for pick-up. But that would mean either paying for a safety deposit box, as you mentioned, or hassling a friend once a week as I come by to pick it up and drop it off.

bn-l•1h ago
This is genuinely newsworthy (not being sarcastic)
pjc50•1h ago
Social media is the true support channel of last resort, especially in a community like HN where your message might just find its way to an insider with enough seniority to go off script and fix things.

Re: "The Support Agents Who Became LLMs"; yes, institutionalized support is terrible almost everywhere. Partly because it costs real money to pay real humans to do it properly, so it ends up as a squeezed cost centre.

IlikeKitties•47m ago
Is it just me, or have others here also just plainly given up on using the support channels of almost any company? I feel like in most cases they are neither able nor willing to help you, and they are rarely able to do more than you could yourself through the web UIs.
igleria•1h ago
I really want to send a virtual hug to Tarus and you. Reminds me of one of the few reasons I'm still in this industry... solving real problems that real people have.
mkl•58m ago
*Tarus. Auto-incorrect?
igleria•44m ago
Ugh, I should be better at keeping names read on other browser tabs in my volatile meat memory.
eqvinox•1h ago
> "people in positions of leadership, such as my boss, are aware of your blog post and I’ve been tasked with finding out what I can"

Translation: "someone noticed it trending on HN, decided it was bad publicity, and that they should do something about it"

Implication: what mattered was the bad publicity, not the poor support infrastructure. The latter won't change, and the next person with similar problems will get the same runaround, and probably lose their data.

/c (cynic, but I suspect realist)

colmmacc•52m ago
Every week at AWS we have an account protection meeting; it's got the teams who handle fraud and abuse detection, compromised accounts (e.g. when a customer has a security issue on their side), non-payment, forgotten creds, as well as our support team. You'll find the most junior members of those teams, all the way up to VPs, in the same meeting diving into the nitty gritty. Every week.

Disabling a legitimate in-use account is one of our absolute nightmares, and I don't care if it was an account paying $3/month; we would be having a review of that with our top-level management (including our CEO, Matt Garman) no matter how we found out about it. For us, there is not some acceptable rate of this as a cost of doing business.

lotsofpulp•46m ago
It might not be acceptable to you, but it is an acceptable cost of doing business for whoever is high enough up the chain to set a budget under which sufficient customer support is not available.

And disabling an in-use account was not the issue here. There being no way to get the account re-enabled is the issue.

colmmacc•40m ago
In this case, I am high enough in that chain.
CamperBob2•45m ago
So, the key is somehow finding a way to bring problems like this to the group's attention, it sounds like. HN lamentations are great when they work, but they don't exactly scale.

At least one layer of human support needs to have the ability -- not just the ability, but the obligation! -- to escalate to your team when a customer service problem occurs that doesn't fit a pattern of known/active scams and they are unable to help the customer themselves. Sounds like that's not currently the case.

colmmacc•27m ago
Without prejudging the COE: it won't surprise anyone to learn that there are bad actors out there who try "every trick in the book" to have accounts that they don't pay for, and lying to customer support is absolutely one of those tricks, as is trying to be creative with changing payment instruments.

In these cases, it's also really important that customer support stick to a script and can't be abused as part of social engineering, hijacking, or fraud-check bypass. "No, we can't reset your account" is a very important protection too. I agree that there is an obligation to escalate, but I suspect the focus of the COE will be on how we could have detected this without human judgement. There's got to be a way.

Timshel•16m ago
I love the irony that an issue caused by failing automation was solved thanks to human escalation, but rather than improving the escalation process, the answer is to add more automation ...
colmmacc•4m ago
We do both; automation never forgets.
Timshel•30m ago
> Disabling a legitimate in-use account is one of our absolute nightmares

It might be your nightmare, but at the same time there is no way for your customers to report it, or for your own support agents to escalate that something might have gone wrong and that someone should look again ...

ke4qqq•43m ago
As the cited ‘boss’ I’ll say the publicity wasn’t the concern. The concern was that someone wanted to use our services and we had made that so frustrating that they were writing blog posts about how it had gone wrong.

The various teams (anti-fraud and support) are investigating how we failed this customer so we can improve and hopefully keep this from happening again. (This is the 'Correction of Error' process that's being worked on. And CoEs aren't a punitive 'blame session'; they're about figuring out how a problem happened and how we can fix or avoid it systemically going forward.)

To be fair, the publicity did mean that multiple people were flagging this and driving escalations around it.

qualeed•1h ago
Awesome that the author got their stuff back. But...

>My data is back. Not because of viral pressure. Not because of bad PR. [...]

>“I am devastated to read on your blog about the deletion of your AWS data. I did want to reach out to let you know that people in positions of leadership, such as my boss, are aware of your blog post and I’ve been tasked with finding out what I can, and to at least prevent this from happening in the future.”

So, yes, because of bad PR. Or, at least, the possibility of the blog blowing up into a bad PR storm. I'm guessing that if there were no blog, the outcome would have been different.

Group_B•51m ago
So, lesson learned: write a blog post if you need support from AWS?
seuros•50m ago
The AWS employee actually contacted me before my blog post even reached three digits in views. So no, it wasn’t PR-driven in the reactive sense.

But here's what I learned from this experience: if you are stuck in a room full of deaf people, stop screaming; just open the door and go find someone who can hear you.

The 20 days of pain I went through weren't because AWS couldn't fix it.

It's because I believed that one of the 9 support agents would eventually break script and act like a human. Or that they were monitored by another team.

Turns out, that never happened.

It took someone from outside the ticketing system to actually listen and say: Wait. This makes no sense.

qualeed•46m ago
>So no, it wasn’t PR-driven in the reactive sense.

At my small business, we proactively monitor blogs and forums for mentions of our company name so that we can head off problems before they become big. I'm extremely confident that is what happened here.

It was PR-driven in the proactive sense. Which is still PR-driven. (which, by the way, I have no problem with! the problem is the shitty support when it isn't PR-driven)

Regardless, I 100% feel your pain with dealing with support agents that won't break script, and I am legitimately happy both that you reached someone high enough up the ladder to act human and that they were able to restore your data.

seuros•31m ago
Thank you for your concern, and I appreciate the nuance in your take.

Yes, it is totally possible that AWS monitors blogs and forums for early damage control, like your company does.

But we shouldn’t paint it like I was bailed out by some algorithmic PR radar and nothing else.

Let’s not fall into the “Fuk the police” style of thinking where every action is assumed to be manipulation. Tarus didn’t reach out like a Scientology agent demanding I take the post down or warning me of consequences.

He came with empathy, internal leverage, and actually made things move.

Before I read Tarus's email, I wrote in Slack to Nate Berkopec (Puma maintainer): `Hi. AWS destroyed me, i'm going to take a big break.`

Then his email reset my cortisol to an acceptable level.

Most importantly, this incident triggered a CoE (Correction of Error) process inside AWS.

That means internal systems and defaults are being reviewed, and that's more than I expected. We're getting a real update that will affect cases like mine in the future.

So yeah, it may have started in the visibility layer, but what matters is that someone human got involved, and actual change is now happening.

qualeed•23m ago
>But we shouldn’t paint it like I was bailed out by some algorithmic PR radar and nothing else.

>[...] assumed to be manipulation

I think you're reading way more negativity into "PR" than I'm intending (which is no negativity).

It's very clear Tarus is a caring person who really did empathize with your situation and did their best to rectify the situation. It's not a bad thing that your issue may (most likely) have been brought to his attention because of "PR radar" or whatever.

The bad part, with Amazon and other similar companies, is how they typically respond when a potential PR hit isn't on the line. Which, as I'm sure you know because you experienced it prior to posting your blog, is often a brick wall.

The overwhelming issue is that you often require some sort of threat of damage to their PR to be assisted. That doesn't make the PR itself a bad thing. And that fact implies nothing about individuals like Tarus who care. Often the lowly tier 1 support agents empathize; they just aren't allowed to do or say anything.

boogieknite•47m ago
much much much smaller scale but i accidentally signed up for some AWS certificate service i didn't understand or use and took a $700 bill on the chin

customer service was great and refunded my money without me blogging about it. we messaged back and forth about what i was trying to do and what i thought i was signing up for. i think it helped to have a long history of tiny aws instances because they mentioned reviewing my customer history

i want to hate amazon but they provided surprisingly pleasant and personable service to a small fry like me. that exchange alone probably cost amazon more money than i've spent in aws. won my probably misguided customer loyalty

qualeed•33m ago
Money is typically much easier to deal with than data or account restoration. With that said, it's nice to hear a good support story every now and then! People don't usually take the time to write a blog post (or comment, etc.) when they have a quick and productive conversation with a support agent, so all we end up hearing are horror stories.
bigstrat2003•46m ago
I think the blog made a difference, yes - but that doesn't mean it was just a PR move by Amazon. It's perfectly possible that the Amazon employee who contacted the author truly does care and wanted to help because of that. It's fair to say that without the blog post this issue wouldn't have been noticed or fixed, but anything past that is really just speculating about people's motives.
qualeed•43m ago
>but that doesn't mean it was just a PR move by Amazon

Being a PR move isn't inherently a bad thing.

The bad thing is the lack of support when PR isn't at risk.

>It's fair to say that without the blog post this issue wouldn't have been noticed or fixed, but anything past that is really just speculating about people's motives.

My only (minor) issue with the blog post is starting by saying "Not because of PR" when the opening email from the human at amazon was "saw your blog". I think it is evident that Tarus Balog did indeed actually care!

electroly•1h ago
Lesson learned: If you have important workloads in AWS, but don't spend enough to get a dedicated account rep, make sure you have some sort of online presence that you can use later to gain access to the real support group.
mox1•51m ago
...or ensure you have backups of data in a non-AWS location?
electroly•47m ago
It's not an "or" situation--these are orthogonal issues. The way support behaved is about AWS. Backups are about you. You should have backups and AWS should not arbitrarily terminate accounts without support recourse. We can discuss them separately if we want. I care about the uptime of my AWS accounts even though I have comprehensive backups.
bink•14m ago
With modern cloud computing, is that enough? Can you easily migrate Aurora DBs, Lambdas, Kinesis configs, IAM policies, S3 bucket configs, and OpenSearch configs to another cloud platform? I suppose if you're comfortable going back to AWS after they randomly delete all your data, then the remote backups will be helpful, but not so much if you plan to migrate to another provider.
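Remote data backups don't answer that on their own; the configuration has to be exported too. As one small hedged sketch of that idea (assuming boto3 with read-only IAM credentials; the output directory is a placeholder), customer-managed IAM policies can be dumped to plain JSON files that live outside the account:

    #!/usr/bin/env python3
    # Sketch: snapshot customer-managed IAM policies to local JSON so that at
    # least part of the account *configuration* survives outside AWS. Assumes
    # boto3 with read-only IAM permissions; the same idea extends to bucket
    # policies, Lambda code, etc.
    import json
    import pathlib
    import urllib.parse

    import boto3

    OUT_DIR = pathlib.Path("iam-policy-backup")  # local output directory

    def main() -> None:
        OUT_DIR.mkdir(exist_ok=True)
        iam = boto3.client("iam")
        pages = iam.get_paginator("list_policies").paginate(Scope="Local")
        for page in pages:
            for policy in page["Policies"]:
                version = iam.get_policy_version(
                    PolicyArn=policy["Arn"],
                    VersionId=policy["DefaultVersionId"],
                )
                doc = version["PolicyVersion"]["Document"]
                if isinstance(doc, str):  # handle URL-encoded JSON, just in case
                    doc = json.loads(urllib.parse.unquote(doc))
                out = OUT_DIR / f"{policy['PolicyName']}.json"
                out.write_text(json.dumps(doc, indent=2))
                print(f"saved {out}")

    if __name__ == "__main__":
        main()

Infrastructure-as-code kept on a separate git host covers this more systematically; the point is only that the configuration has to live somewhere other than the account it describes.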
seuros•44m ago
That might be a 'lesson', but it’s like saying:

"If you want your paperwork processed in Morocco, make sure you know someone at the commune, and ideally have tea with their cousin."

Yes, it works, but it shouldn’t be the system.

What happened with AWS isn't a clever survival tip; it's proof that without an account manager, you are just noise in a ticket queue, unless you bring social proof or online visibility.

This should never have come down to 'who you know' or 'how loud you can get online'.

It's sheer luck that I write in English and have an online presence. What if I had been ranting in French, Arabic, or even Darija on Facebook? Tarus would never have noticed.

electroly•43m ago
We seem to be saying exactly the same thing. I agree, strongly, with everything you just said here. AWS has a support problem if this was necessary, and I'm not personally prepared with an online presence if it happened to me. I'll simply be screwed and will have to recreate a new account from backups. It's something for me to think about. I can't fix AWS--I can only change what I do in response.

I recently opened a DigitalOcean account and it was locked for a few days after I had moved workloads in. They took four days to unlock the account, and for my trouble they continued to charge me for my resources during the time the account was locked when I couldn't log in to delete them. I didn't have any recourse at all. They did issue a credit because I asked nicely, but if they said no, that would have been it.

anonymars•27m ago
The primary lesson learned should be NEVER HAVE ONE POINT OF FAILURE. If all your data is in one account (regardless of where) and nowhere else, that is ONE POINT OF FAILURE.
jacquesm•1h ago
One of the questions we always ask of cloud-hosted companies is what their plan is in case they ever lose their cloud account. Typically this is met with incredulity: nobody has ever lost their AWS account and all its data? Right? Right???

Well, not normally, no. But it does happen. Not often enough to be a meaningful statistical issue, but if it were to happen to you, then a little forethought can turn a complete disaster into a survivable event. If you store all your data 'in the cloud', realize that your account could be compromised, used to store illegal data, targeted by social engineering, or affected in lots of other ways that could lead a cloud services provider to protect their brand rather than your data. If - like the author - you are lucky, you'll only be down for a couple of days. But for most businesses that's the end of the line, especially if you run a multi-tenant SaaS or something like that. So plan for the worst and hope for the best.

ignoramous•52m ago
> Typically this is met with incredulity

Surprising. In my time, things always got pretty serious if your service could not recover from loss due to regrettable events.

TFA alluded to a possible but "undocumented" way to restore terminated infrastructure. I don't think all AWS services nuke everything on deletion, but if it is not in writing ...

Traubenfuchs•49m ago
We have a slack channel with more than 20 external @amazon.com engineers and salespeople that instantly respond to all our queries, sometimes proactively inform us of stuff or offer advice...

Doesn't everyone have this?

bink•11m ago
I think you'll find this depends on how much you spend with Amazon. Most accounts (by number) don't have this.
diegocg•45m ago
I would say that the lesson here is that cross-vendor replication is more important than intra-vendor replication. It is clear that technology can (largely) avoid data loss, but there will always be humans in charge.
anonymars•24m ago
Nitpick: true replication is high-availability, not disaster-recovery (i.e. not a backup)

If the wrong data gets deleted and that deletion gets replicated, you now simply have two copies of the bad state.

seuros•10m ago
Author here: Let’s be clear on backups:

Yes, I had backups everywhere. Across providers, in different countries. But I built a system tied to my AWS account number, my instances, my IDs, my workflows.

When that account went down, all those "other" backups were just dead noise, encrypted forever. Bringing them into the story only invites the 'just use your other backups' fallback and ignores the real fragility of centralized dependencies.

It is like this: the UK still maintains BBC Radio 4's analogue emergency broadcast, a signal so vital that if it's cut, UK nuclear submarines and missile silos automatically trigger retaliation. No questions asked. That's how much stake they place on a reliable signal.

If your primary analogue link fails, the world ends. That's precisely how I felt when AWS pulled my account, because I'd tied my critical system to a single point of failure. If the account had just been made read-only, I would have waited, because I could still have accessed my data and rotated keys.

AWS is the apex cloud provider on the planet. This isn't about redundancy or best practices.

It's about how much trust and infrastructure we willingly lend to one system.

Remember that if the BBC Radio 4 signal ever fails for some reason, the world gets nuked, only cockroaches will survive… and your RDS and EC2 billing fees.

cnst•2m ago
Keep in mind that just 2 days ago, AWS provided the following statement to Tom's Hardware:

https://www.tomshardware.com/software/cloud-storage/aws-accu...

https://archive.is/0b3Hc

> Update: August 5 7:30am (ET): In a statement, an AWS spokesperson told Tom's Hardware "We always strive to work with customers to resolve account issues and provided an advance warning of the potential account suspension. The account was suspended as part of AWS’s standard security protocols for accounts that fail the required verification, and it is incorrect to claim this was because of a system error or accident."

This points to a bigger part of the problem.

When these mistakes do happen, they're invariably treated as standard operating procedures.

They're NEVER treated as errors.

It would appear that the entire support personnel chain and PR literally have no escalation path to treat any of these things as errors.

Instead, they simply double down on the claim that it was NOT an error that the account was terminated on insufficient notice, over bogus claims and broken policies.