GPT-5

http://openai.com/gpt-5
518•rd•1h ago•524 comments

GPT-5 for Developers

https://openai.com/index/introducing-gpt-5-for-developers
55•6thbit•59m ago•19 comments

Building Bluesky comments for my blog

https://natalie.sh/posts/bluesky-comments/
126•g0xA52A2A•2h ago•51 comments

Infinite Pixels

https://meyerweb.com/eric/thoughts/2025/08/07/infinite-pixels/
178•OuterVale•4h ago•39 comments

SUSE Donates USD 11,500 to the Perl and Raku Foundation

https://www.perl.com/article/suse-donates-to-tprf/
71•oalders•3h ago•19 comments

How to sell if your user is not the buyer

https://writings.founderlabs.io/p/how-to-sell-if-your-user-is-not-the
70•mooreds•2h ago•46 comments

Jepsen: Capela dda5892

https://jepsen.io/analyses/capela-dda5892
32•aphyr•3h ago•2 comments

Foundry (YC F24) Is Hiring Staff Level Product Engineers

https://www.ycombinator.com/companies/foundry/jobs/jwdYx6v-founding-product-engineer
1•lakabimanil•1h ago

Laptop Support and Usability (LSU): July 2025 Report from the FreeBSD Foundation

https://github.com/FreeBSDFoundation/proj-laptop/blob/main/monthly-updates/2025-07.md
69•grahamjperrin•4h ago•35 comments

The Lightweight LSAT

https://lightweightlsat.com/
4•gregsadetsky•14m ago•0 comments

Monte Carlo Crash Course: Quasi-Monte Carlo

https://thenumb.at/QMC/
69•zote•3d ago•9 comments

New AI Coding Teammate: Gemini CLI GitHub Actions

https://blog.google/technology/developers/introducing-gemini-cli-github-actions/
181•michael-sumner•8h ago•78 comments

Show HN: Browser AI agent platform designed for reliability

https://github.com/nottelabs/notte
10•ogandreakiro•54m ago•1 comment

Emailing a one-time code is worse than passwords

https://blog.danielh.cc/blog/passwords
735•max__dev•15h ago•601 comments

GPT-5 System Card [pdf]

https://cdn.openai.com/pdf/8124a3ce-ab78-4f06-96eb-49ea29ffb52f/gpt5-system-card-aug7.pdf
60•6thbit•1h ago•11 comments

The Sunlight Budget of Earth

https://www.asimov.press/p/sunlight-budget
13•mailyk•1h ago•1 comment

Benchmark Framework Desktop Mainboard and 4-node cluster

https://github.com/geerlingguy/ollama-benchmark/issues/21
4•geerlingguy•16m ago•0 comments

Italy's Undercover Pizza Detectives

https://www.bbc.com/travel/article/20250801-italys-undercover-pizza-detectives
12•pseudolus•3d ago•4 comments

Lithium compound can reverse Alzheimer’s in mice: study

https://hms.harvard.edu/news/could-lithium-explain-treat-alzheimers-disease
73•highfrequency•3h ago•48 comments

Arm Desktop: x86 Emulation

https://marcin.juszkiewicz.com.pl/2025/07/22/arm-desktop-emulation/
56•PaulHoule•5h ago•28 comments

Windows XP Professional

https://win32.run/
169•pentagrama•4h ago•110 comments

Sweatshop Data Is Over

https://www.mechanize.work/blog/sweatshop-data-is-over/
34•whoami_nr•4h ago•14 comments

PyPI: Preventing ZIP parser confusion attacks on Python package installers

https://blog.pypi.org/posts/2025-08-07-wheel-archive-confusion-attacks/
20•miketheman•1h ago•2 comments

More shell tricks: first class lists and jq

https://alurm.github.io/blog/2025-08-07-first-class-lists-in-shells.html
19•alurm•3h ago•7 comments

The Whispering Earring (Scott Alexander)

https://croissanthology.com/earring
91•ZeljkoS•7h ago•51 comments

Claude Code IDE integration for Emacs

https://github.com/manzaltu/claude-code-ide.el
723•kgwgk•1d ago•240 comments

Koalas vs. Crows: An Evolutionary Theory of Software

https://ajmoon.com/posts/koalas-vs-crows-an-evolutionary-theory-of-software
10•alex-moon•3d ago•0 comments

Hopfield Networks Is All You Need (2020)

https://arxiv.org/abs/2008.02217
23•liamdgray•2d ago•1 comment

Global Trade Dynamics

https://alhadaqa.github.io/globaltradedynamics/
31•gmays•3h ago•5 comments

Let's stop pretending that managers and executives care about productivity

https://www.baldurbjarnason.com/2025/disingenuous-discourse/
91•speckx•3h ago•47 comments

AWS Restored My Account: The Human Who Made the Difference

https://www.seuros.com/blog/aws-restored-account-plot-twist/
68•mhuot•2h ago
Related: AWS deleted my 10-year account and all data without warning - https://news.ycombinator.com/item?id=44770250 - Aug 2025 (152 comments)

Comments

mhuot•2h ago
Follow-up to the AWS account deletion story from last week. The author's data was recovered after an AWS employee escalated to VP level, and it turned out the instances were just stopped, not terminated.
jacquesm•1h ago
You got very lucky.
sangeeth96•2h ago
Related:

https://news.ycombinator.com/item?id=44770250

Ezhik•2h ago
I hope the author follows up with how they build out a backup system. No snark intended - it really does seem to me like an interesting set of articles, especially for others who might be in a similar situation, keeping everything with one cloud provider.
svt1234•2h ago
I don't use AWS but I have a server on Linode and I realized that I am in the same situation should Linode go away, even though I tend to have a pretty rigorous backup routine elsewhere. It is something you can easily miss.

I am inspired now to dump my databases and rsync the content on a schedule.
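
A minimal sketch of what that could look like, assuming a Postgres database and a hypothetical off-provider host reachable over SSH; the database name, paths, and remote host below are placeholders, not a prescription:

    #!/usr/bin/env python3
    """Nightly dump-and-sync sketch: pg_dump the database, then rsync the
    dump plus site content to an off-provider host. Intended to run from cron.
    The database name, paths, and remote host are placeholders."""
    import datetime
    import pathlib
    import subprocess

    DB_NAME = "myapp"                    # hypothetical database name
    DUMP_DIR = pathlib.Path("/var/backups/db")
    CONTENT_DIR = "/srv/www/"            # trailing slash: rsync copies the contents
    REMOTE = "backup@backup.example.net:~/offsite/"  # hypothetical remote host

    def main() -> None:
        DUMP_DIR.mkdir(parents=True, exist_ok=True)
        stamp = datetime.date.today().isoformat()
        dump_file = DUMP_DIR / f"{DB_NAME}-{stamp}.sql.gz"

        # Dump the database to a compressed file.
        with open(dump_file, "wb") as out:
            dump = subprocess.Popen(["pg_dump", DB_NAME], stdout=subprocess.PIPE)
            subprocess.run(["gzip", "-c"], stdin=dump.stdout, stdout=out, check=True)
            if dump.wait() != 0:
                raise SystemExit("pg_dump failed")

        # Push the dumps and the content directory to the remote host over SSH.
        for src in (f"{DUMP_DIR}/", CONTENT_DIR):
            subprocess.run(["rsync", "-az", src, REMOTE], check=True)

    if __name__ == "__main__":
        main()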

tetha•1h ago
Within Europe, and if you're not at Hetzner already, Hetzner Storage Boxes[1] are very competitive: 4 euros per month for a terabyte. Throw borg[2] with compression + deduplication at it and it'll take you far. Especially if your databases aren't changing that much, borg can make that terabyte of storage go a lot further.

1: https://www.hetzner.com/storage/storage-box/

2: https://www.borgbackup.org/
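
For the borg side, a minimal nightly wrapper sketch, assuming a Storage Box-style target reachable over SSH and a repository already created with `borg init --encryption=repokey`; the repository URL, port, passphrase handling, and source paths are placeholders:

    #!/usr/bin/env python3
    """Nightly borg wrapper sketch: create a deduplicated, compressed archive
    on a remote storage box, then prune old archives. The repository URL,
    port, passphrase source, and paths are placeholders."""
    import os
    import subprocess

    # Hypothetical Hetzner-style Storage Box repository (SSH on port 23).
    REPO = "ssh://u123456@u123456.your-storagebox.de:23/./backups"
    SOURCES = ["/var/backups/db", "/srv/www"]

    def main() -> None:
        # borg reads its passphrase from the environment; set it in cron, not here.
        env = dict(os.environ)

        # Deduplicated + compressed archive; {now:...} is expanded by borg itself.
        subprocess.run(
            ["borg", "create", "--stats", "--compression", "zstd,6",
             f"{REPO}::nightly-{{now:%Y-%m-%d}}", *SOURCES],
            env=env, check=True,
        )

        # Keep a rolling window so the storage box does not fill up.
        subprocess.run(
            ["borg", "prune", "--keep-daily", "7", "--keep-weekly", "4",
             "--keep-monthly", "6", REPO],
            env=env, check=True,
        )

    if __name__ == "__main__":
        main()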

webstrand•2h ago
Honestly it's pretty worrying. I don't understand how other people can talk about "you have to have multiple offsite backups" when each offsite copy runs about $700 per year for a relatively small amount of data. I can only afford one off-site backup, and if my local copy dies it'll take a couple of weeks for me to restore from AWS because of local internet speed limitations.
dylan604•1h ago
Offsite does not have to mean another cloud solution. If you have a copy of data on a thumbdrive or other external media, you can leave it at your parent's house (or other family member). You can keep a copy of it in a safety deposit box. If it's at a family member's house, it's just an excuse to visit on a semi-regular schedule as you swap out with an updated version.
pavel_lishin•1h ago
Yep. My backups - personal ones, not business - live in three places:

1. A hard drive in a fire safe.

2. An S3 bucket, mediated by Wasabi.

3. My friend's server that lives at his house half a continent away.

It would be nice to have a fourth location, a physical hard drive that lives outside my house but close enough to drive to for pick-up, but it would mean either paying for a safety deposit box as you mentioned, or hassling a friend once a week as I come to pick it up and deposit it.

dylan604•48m ago
> My friend's server that lives at his house half a continent away.

I figure that if a disaster takes out my house and someone's place across town at the same time, I probably won't be worrying about restoring data. Cross-continent would only be viable with a server like the one you mentioned, essentially a buddy cloud.

bn-l•2h ago
This is genuinely newsworthy (not being sarcastic)
pjc50•2h ago
Social media is the true support channel of last resort, especially in a community like HN where your message might just find its way to an insider with enough seniority to go off script and fix things.

Re: "The Support Agents Who Became LLMs"; yes, institutionalized support is terrible almost everywhere. Partly because it costs real money to pay real humans to do it properly, so it ends up as a squeezed cost centre.

IlikeKitties•1h ago
Is it just me, or have others here also just plainly given up on using support channels of almost any company? I feel like in most cases they are neither able nor willing to help you, and they can rarely do more than you could do yourself through the web UIs.
igleria•2h ago
I really want to send a virtual hug to Tarus and you. Reminds me of one of the few reasons I'm still in this industry... solving real problems that real people have.
mkl•1h ago
*Tarus. Auto-incorrect?
igleria•1h ago
Ugh, I should be better at keeping names read on other browser tabs in my meat volatile memory
eqvinox•2h ago
> "people in positions of leadership, such as my boss, are aware of your blog post and I’ve been tasked with finding out what I can"

Translation: "someone noticed it trending on HN, decided it was bad publicity, and that they should do something about it"

Implication: what mattered was the bad publicity, not the poor support infrastructure. The latter won't change, and the next person with similar problems will get the same runaround, and probably lose their data.

/c (cynic, but I suspect realist)

colmmacc•1h ago
Every week at AWS we have an account protection meeting; it's got the teams who handle fraud and abuse detection, compromised accounts (e.g. when a customer has a security issue on their side), non-payment, forgotten creds, as well as our support team. You'll find the most junior members of those teams, all the way up to VPs, in the same meeting diving into the nitty gritty. Every week.

Disabling a legitimate in-use account is one of our absolute nightmares, and I don't care if it was an account paying $3/month; we would be having a review of that with our top-level management (including our CEO, Matt Garman) no matter how we found out about it. For us, there is not some acceptable rate of this as a cost of doing business.

lotsofpulp•1h ago
It might not be for you, but it is an acceptable cost of doing business for whoever is high enough up the chain to set a budget under which sufficient customer support is not available.

And disabling an in-use account was not the issue here. The issue is that there was no way to get the account re-enabled.

colmmacc•1h ago
In this case, I am high enough in that chain.
CamperBob2•1h ago
So, the key is somehow finding a way to bring problems like this to the group's attention, it sounds like. HN lamentations are great when they work, but they don't exactly scale.

At least one layer of human support needs to have the ability -- not just the ability, but the obligation! -- to escalate to your team when a customer service problem occurs that doesn't fit a pattern of known/active scams and they are unable to help the customer themselves. Sounds like that's not currently the case.

colmmacc•1h ago
Without prejudging the COE, it won't surprise anyone to learn that there are bad actors out there who try "every trick in the book" to get accounts they don't pay for, and lying to customer support is absolutely one of those tricks, as is trying to be creative with changing payment instruments.

In these cases, it's also really important that customer support stick to a script and can't be abused as part of social engineering, hijacking, or fraud-check bypass. "No, we can't reset your account" is a very important protection too. I agree that there is an obligation to escalate, but I suspect the focus of the COE will be on how we could have detected this without human judgement. There's got to be a way.

Timshel•1h ago
I love the irony that an issue caused by failing automation was solved through human escalation, yet the response is not to try to improve the escalation process but to add more automation ...
colmmacc•53m ago
We do both; automation never forgets.
CamperBob2•20m ago
Right, obviously you can't give the level-1 schlubs the keys to the kingdom, but they need to be able to escalate. What you're doing now, trapping customers in a maze of no-reply dead ends, isn't OK. It's never a good long-term play to let bad actors drive your business model. (Well, all right, maybe PayPal has to do that, but you don't.)

One obvious approach would be to charge for access to human support. I'll bet the OP would happily have paid $50 to talk to someone with both the ability and inclination to escalate the issue. In rare instances such as this one where the problem really is on your end, the $50 would be refunded.

Timshel•1h ago
> Disabling a legitimate in-use account is one of our absolute nightmares

It might be your nightmare, but at the same time there is no way for your customers to report it, or for your own support agents to escalate that something might have gone wrong and that someone should look again ...

ke4qqq•1h ago
As the cited ‘boss’ I’ll say the publicity wasn’t the concern. The concern was that someone wanted to use our services and we had made that so frustrating that they were writing blog posts about how it had gone wrong.

The various teams (anti-fraud and support) are investigating how we failed this customer so we can improve and hopefully keep this from happening again. (This is the ‘Correction of Error’ process that's being worked on. And CoEs aren't a punitive ‘blame session’; they're about figuring out how a problem happened and how we can fix or avoid it systemically going forward.)

To be fair, the publicity did mean that multiple people were flagging this and driving escalations around it.

eqvinox•47m ago
> The concern was that someone wanted to use our services and we had made that so frustrating that they were writing blog posts about how it had gone wrong.

I'm concerned that you're being very unspecific talking about "our services" and "it" going wrong.

What went wrong here is AWS not spending enough money on humans in the support teams. And of course this is a neverending balancing act between profitability and usability. Like any other profit vs. usability consideration, the curve probably has a knee somewhere when the service becomes too unusable and too many people flee to the competition.

And it seems current economic wisdom is that that knee in the curve is pretty far on the "bad support" side of the scale.

Which is to say, the cynic in me doesn't believe you'll be making any changes, mostly because that knee in the curve is in fact pretty far on the "bad support" side, and economics compels you to exploit that.

qualeed•2h ago
Awesome that the author got their stuff back. But...

>My data is back. Not because of viral pressure. Not because of bad PR. [...]

>“I am devastated to read on your blog about the deletion of your AWS data. I did want to reach out to let you know that people in positions of leadership, such as my boss, are aware of your blog post and I’ve been tasked with finding out what I can, and to at least prevent this from happening in the future.”

So, yes, because of bad PR. Or, at least the possibility of the blog blowing up into a bad PR storm. I'm guessing that if there was no blog, the outcome would be different.

Group_B•1h ago
So lesson learned, write a blog post if you need support with AWS?
seuros•1h ago
The AWS employee actually contacted me before my blog post even reached three digits in views. So no, it wasn’t PR-driven in the reactive sense.

But here’s what I learned from this experience: If you are stuck in a room full of deaf people, stop screaming, just open the door and go find someone who can hear you.

The 20 days of pain I went through, it wasn't because AWS couldn't fix it.

It's because I believed that one of the 9 support agents would eventually break script and act like a human. Or that they were being monitored by another team.

Turns out, that never happened.

It took someone from outside the ticketing system to actually listen and say: Wait. This makes no sense.

qualeed•1h ago
>So no, it wasn’t PR-driven in the reactive sense.

At my small business, we proactively monitor blogs and forums for mentions of our company name so that we can head off problems before they become big. I'm extremely confident that is what happened here.

It was PR-driven in the proactive sense. Which is still PR-driven. (which, by the way, I have no problem with! the problem is the shitty support when it isn't PR-driven)

Regardless, I 100% feel your pain with dealing with support agents who won't break script, and I am legitimately happy both that you reached someone high enough up the ladder to act human and that they were able to restore your data.

seuros•1h ago
Thank you for your concern, and I appreciate the nuance in your take.

Yes, it is totally possible that AWS monitors blogs and forums for early damage control, like your company does.

But we shouldn’t paint it like I was bailed out by some algorithmic PR radar and nothing else.

Let’s not fall into the “Fuk the police” style of thinking where every action is assumed to be manipulation. Tarus didn’t reach out like a Scientology agent demanding I take the post down or warning me of consequences.

He came with empathy, internal leverage, and actually made things move.

Before I read Tarus' email, I had written in Slack to Nate Berkopec (Puma maintainer): `Hi. AWS destroyed me, I'm going to take a big break.`

Then his email reset my cortisol levels to an acceptable level.

Most importantly, this incident triggered a CoE (Correction of Error) process inside AWS.

That means internal systems and defaults are being reviewed, and that's more than I expected. We're getting a real update that will affect cases like mine in the future.

So yeah, it may have started in the visibility layer, but what matters is that someone human got involved, and actual change is now happening.

qualeed•1h ago
>But we shouldn’t paint it like I was bailed out by some algorithmic PR radar and nothing else.

>[...] assumed to be manipulation

I think you're reading way more negativity into "PR" than I'm intending (which is no negativity).

It's very clear Tarus is a caring person who really did empathize with your situation and did their best to rectify it. It's not a bad thing that your issue may (most likely) have been brought to his attention because of "PR radar" or whatever.

The bad part, with Amazon and other similar companies, is how they typically respond when a potential PR hit isn't on the line. Which, as I'm sure you know because you experienced it prior to posting your blog, is often a brick wall.

The overwhelming issue is that you often require some sort of threat of damage to their PR to be assisted. That doesn't make the PR itself a bad thing. And that fact implies nothing about the individuals like Tarus who care. Often the lowly tier 1 support empathizes; they just aren't allowed to do or say anything.

gamblor956•12m ago
> It took someone from outside the ticketing system to actually listen and say: Wait. This makes no sense.

Which only happened because of your blog post. In other words, the effort to prevent bad PR led to them fixing your problem immediately, while 20 days of doing things the "right" way yielded absolutely no results.

This actually makes the problem you've described even worse: it indicates that AWS has absolutely no qualms about failing to properly support the majority of its customers.

The proper thing for them to do was not to have a human "outside the system" fix your problem. It was for them to fix the system so that the system could have fixed your problem.

That being said: Azure is so much worse than AWS. Even bad PR won't push them to fix things.

boogieknite•1h ago
Much, much, much smaller scale, but I accidentally signed up for some AWS certificate service I didn't understand or use and took a $700 bill on the chin.

Customer service was great and refunded my money without me blogging about it. We messaged back and forth about what I was trying to do and what I thought I was signing up for. I think it helped to have a long history of tiny AWS instances, because they mentioned reviewing my customer history.

I want to hate Amazon, but they provided surprisingly pleasant and personable service to a small fry like me. That exchange alone probably cost Amazon more money than I've spent in AWS. It won my probably misguided customer loyalty.

qualeed•1h ago
Money is typically much easier to deal with than data or account restoration. With that said, it's nice to hear a good support story every now and then! People don't usually take the time to write a blog post (or comment, etc.) when they have a quick and productive conversation with a support agent, so all we end up hearing are horror stories.
bigstrat2003•1h ago
I think the blog made a difference, yes - but that doesn't mean it was just a PR move by Amazon. It's perfectly possible that the Amazon employee who contacted the author truly does care and wanted to help because of that. It's fair to say that without the blog post this issue wouldn't have been noticed or fixed, but anything past that is really just speculating about people's motives.
qualeed•1h ago
>but that doesn't mean it was just a PR move by Amazon

Being a PR move isn't inherently a bad thing.

The bad thing is the lack of support when PR isn't at risk.

>It's fair to say that without the blog post this issue wouldn't have been noticed or fixed, but anything past that is really just speculating about people's motives.

My only (minor) issue with the blog post is that it starts by saying "Not because of PR" when the opening email from the human at Amazon was "saw your blog". I think it is evident that Tarus Balog did indeed actually care!

electroly•1h ago
Lesson learned: If you have important workloads in AWS, but don't spend enough to get a dedicated account rep, make sure you have some sort of online presence that you can use later to gain access to the real support group.
mox1•1h ago
...or ensure you have backups of data in a non-AWS location?
electroly•1h ago
It's not an "or" situation--these are orthogonal issues. The way support behaved is about AWS. Backups are about you. You should have backups and AWS should not arbitrarily terminate accounts without support recourse. We can discuss them separately if we want. I care about the uptime of my AWS accounts even though I have comprehensive backups.
bink•1h ago
With modern cloud computing, is that enough? Can you migrate Aurora DBs, Lambdas, Kinesis configs, IAM policies, S3 bucket configs, and OpenSearch configs to another cloud platform easily? I suppose if you're comfortable going back to AWS after they randomly delete all your data then the remote backups will be helpful, but not so much if you plan to migrate to another provider.
seuros•1h ago
That might be a 'lesson', but it’s like saying:

"If you want your paperwork processed in Morocco, make sure you know someone at the commune, and ideally have tea with their cousin."

Yes, it works, but it shouldn’t be the system.

What happened with AWS isn’t a clever survival tip, it’s proof that without an account manager, you are just noise in a ticket queue, unless you bring social proof or online visibility.

This should have never come down to 'who you know' or 'how loud you can go online'.

It's sheer luck that I speak English and have an online presence. What if I had been ranting in French, Arabic, or even Darija on Facebook? Tarus would never have noticed.

electroly•1h ago
We seem to be saying exactly the same thing. I agree, strongly, with everything you just said here. AWS has a support problem if this was necessary, and I'm not personally prepared with an online presence if it happened to me. I'll simply be screwed and will have to recreate a new account from backups. It's something for me to think about. I can't fix AWS--I can only change what I do in response.

I recently opened a DigitalOcean account and it was locked for a few days after I had moved workloads in. They took four days to unlock the account, and for my trouble they continued to charge me for my resources during the time the account was locked when I couldn't log in to delete them. I didn't have any recourse at all. They did issue a credit because I asked nicely, but if they said no, that would have been it.

anonymars•1h ago
The primary lesson learned should be: NEVER HAVE ONE POINT OF FAILURE. If all your data is in one account (regardless of where) and nowhere else, that is ONE POINT OF FAILURE.
jacquesm•1h ago
One of the questions we always ask of cloud-hosted companies is what their plan is in case they ever lose their cloud account. Typically this is met with incredulity: nobody has ever lost their AWS account and all its data? Right? Right???

Well, not normally, no. But it does happen. Not often enough to be a meaningful statistical issue, but if it were to happen to you, a little forethought can turn a complete disaster into a survivable event. If you store all your data 'in the cloud', realize that your account could be compromised, used to store illegal data, or be subject to social engineering, and lots of other things that could result in a cloud services provider protecting their brand rather than your data. If - like the author - you are lucky, you'll only be down for a couple of days. But for most businesses that's the end of the line, especially if you run a multi-tenant SaaS or something like that. So plan for the worst and hope for the best.

ignoramous•1h ago
> Typically this is met with incredulity

Surprising. In my time, things always got pretty serious if your service could not recover from loss due to regrettable events.

TFA alluded to a possible but "undocumented" way to restore terminated infrastructure. I don't think all AWS services nuke everything on deletion, but if it is not in writing ...

Traubenfuchs•1h ago
We have a Slack channel with more than 20 external @amazon.com engineers and salespeople who instantly respond to all our queries and sometimes proactively inform us of things or offer advice...

Doesn't everyone have this?

bink•59m ago
I think you'll find this depends on how much you spend with Amazon. Most accounts (by number) don't have this.
diegocg•1h ago
I would say that the lesson here is that cross-vendor replication is more important than intra-vendor replication. It is clear that technology can (largely) avoid data loss, but there will always be humans in charge.
anonymars•1h ago
Nitpick: true replication is high availability, not disaster recovery (i.e. not a backup).

If the wrong data gets written or deleted and that change gets replicated, now you simply have two copies of bad data.

seuros•58m ago
Author here: Let’s be clear on backups:

Yes, I had backups everywhere. Across providers, in different countries. But I built a system tied to my AWS account number, my instances, my IDs, my workflows.

When that account went down, all those "other" backups were just dead noise, encrypted forever. Bringing them into the story only invites the 'just use your other backups' fallback, and ignores the real fragility of centralized dependencies.

It is like this: the UK still maintains BBC Radio 4's analogue emergency broadcast, a signal so vital that if it's cut, UK nuclear submarines and missile silos automatically trigger retaliation. No questions asked. That's how much stake they place on a reliable signal.

If your primary analogue link fails, the world ends. That's precisely how I felt when AWS pulled my account, because I'd tied my critical system to a single point of failure. If the account had just been read-only, I would have waited, because I could still have accessed my data and rotated keys.

AWS is the apex cloud provider on the planet. This isn't about redundancy or best practices.

It's about how much trust and infrastructure we willingly lend to one system.

Remember that if the BBC Radio 4 signal ever fails for some reason, the world gets nuked and only cockroaches will survive… along with your RDS and EC2 billing fees.

cnst•51m ago
Keep in mind that just two days ago, AWS provided the following statement to Tom's Hardware:

https://www.tomshardware.com/software/cloud-storage/aws-accu...

https://archive.is/0b3Hc

> Update: August 5 7:30am (ET): In a statement, an AWS spokesperson told Tom's Hardware "We always strive to work with customers to resolve account issues and provided an advance warning of the potential account suspension. The account was suspended as part of AWS’s standard security protocols for accounts that fail the required verification, and it is incorrect to claim this was because of a system error or accident."

This shows a bigger part of the problem.

When these mistakes do happen, they're invariably treated as standard operating procedure.

They're NEVER treated as errors.

It would appear that the entire support personnel chain and PR literally have no escalation path to treat any of these things as errors.

Instead, they simply double down, insisting that it's NOT an error that the account was terminated on insufficient notice over bogus claims and broken policies.
Instead, they simply double-down that it's NOT an error that the accounts was terminated on an insufficient notice over bogus claims and broken policies.