https://www.cnbc.com/2026/05/08/aws-outage-data-center-fandu...
https://www.theregister.com/off-prem/2026/05/08/aws-warns-of...
So did some cooling equipment fail here or was there an external reason for the overheating? Or does Amazon overbook the cooling in their data centers?
Cooling in datacenters is, like everything else, both over- and under-provisioned.
It's overprovisioned in the sense that the big heat exchange units are N+1 (or, in very critical and smaller-load facilities, 2N/3N). This is done because you need to regularly take these units down for maintenance, they have a relatively high failure rate compared to traditional DC components, and their repairs are mechanical, requiring specialized labor and long lead times. In a bigger facility it's not uncommon for cooling to be N+3 or more as N gets larger, because you're effectively always servicing something, or have a unit down waiting for a blower assembly that literally has to be made by a machinist with a lathe because the part doesn't exist anymore. That's still cheaper than replacing the whole unit.
The systems are also under-provisioned in the sense that if all the compute capacity in the facility suddenly went from average power draw to 100% power draw, you would overload the cooling capacity, and commonly overload things in the electrical and other paths too. Oversubscription is just the nature of the industry.
In general neither of these things poses a real problem: compute loads don't spike to 100% of capacity, when they do spike they don't stay there for long, and nobody builds facilities on a knife-edge of cooling or power capacity.
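To make the over/under split concrete, here's a toy sizing sketch in Python. Every number is invented, but the shape of the tradeoff is the point: the plant covers average load with units to spare, and still can't cover the theoretical all-servers-pinned peak.

    # Hypothetical numbers for illustration only.
    UNIT_CAPACITY_KW = 1_000    # heat one cooling unit can reject
    AVG_IT_LOAD_KW = 4_000      # typical aggregate draw (all of it becomes heat)
    PEAK_IT_LOAD_KW = 8_000     # every server pinned at 100% simultaneously

    n = -(-AVG_IT_LOAD_KW // UNIT_CAPACITY_KW)   # ceiling division: N = 4 units
    installed = n + 2                            # N+2 redundancy: 6 units

    # Overprovisioned against average: two whole units can be down for service.
    assert (installed - 2) * UNIT_CAPACITY_KW >= AVG_IT_LOAD_KW

    # Underprovisioned against peak: even all six units together can't cover it.
    assert installed * UNIT_CAPACITY_KW < PEAK_IT_LOAD_KW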
The problem comes when you have the intersection of multiple events.
You designed your cooling system to handle 200% of average load, which is great because you have lots of headroom for maintenance/outages.
The repair guy comes on Tuesday to do work on a unit and finds a bad bearing. He has to get the part from the next state over, so he leaves the unit off overnight to avoid risking damage to the whole fan assembly (which would take weeks to fabricate).
The two adjacent cooling units are now working JUST A BIT harder to compensate, and one of them also had a motor that was slightly imbalanced, or a fuse that was loose and warming up a bit. With the increased duty cycle, that thing which worked fine for years goes pop.
Now you're minus two units in an N+2 facility. Not really terrible; remember, you designed for 200% of average load.
That 3rd unit on the other side of the first failed unit, now under way more load, also develops a fault. You're now minus 3 in an N+2 facility.
Still, not catastrophic because really you designed for 200% of average load.
The thing is, it's now 4AM. The onsite ops guy can't fix these faults and needs to call the vendor, who doesn't wake up till 7AM and won't be onsite till 9.
Your load starts ramping up.
Everything up above happens daily in some datacenter in the USA, and probably once a year in any given datacenter.
What happens next is the confluence of events which puts you in the news.
One of your bigger customers decides now is a great time to start a huge batch processing job. Some fintech wants to run a huge model before market open or some oil firm wants to do some quick analysis of a new field.
They spin up 10000 new VMs.
Normally, this is fine, you have the spare capacity.
But, remember, you planned your cooling for 200% of AVERAGE load, and these are not nodes that are busy-but-not-terribly-busy; these are nodes doing intense, optimized number crunching, which means they draw max power and thus expel max waste heat.
Not only has the aggregate number of busy machines spiked, but the waste heat per machine is also higher than average.
Boom, cascading failure, your cooling is now N-4.
Server fans start ramping up faster, which consumes more power.
Your cooling is now N-5.
Alarms are blaring all over the place.
Safeties on the cooling units start to trip as they exceed their load and refrigerant pressures rise.
Your cooling is now N-6.
Your cooling is now N-7.
Your cooling is now 0.
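The whole cascade is easy to reproduce in a toy model: every unit that trips pushes more load onto the survivors, which raises the odds of the next trip. A rough simulation sketch, with every capacity and failure rate invented:

    import random

    UNITS = 6              # N+2 plant: 4 units carry design load, 2 spares
    UNIT_CAPACITY = 1.0    # nameplate heat rejection per unit (arbitrary units)
    BASE_TRIP_PROB = 0.01  # per-hour trip chance at or below nameplate

    def tick(alive: int, heat_load: float) -> int:
        """Advance one hour; return how many units survive it."""
        if alive == 0:
            return 0
        overload = max(0.0, heat_load / alive / UNIT_CAPACITY - 1.0)
        # Invented stress model: trip probability climbs steeply past nameplate.
        p_trip = min(1.0, BASE_TRIP_PROB + 4.0 * overload)
        return alive - sum(random.random() < p_trip for _ in range(alive))

    alive, load = UNITS - 2, 3.0       # start the night two units down
    for hour in range(12):
        load = min(load * 1.15, 6.0)   # the big batch job ramps the heat up
        alive = tick(alive, load)
        print(f"hour {hour}: load={load:.2f} units={alive}")
        if alive == 0:
            print("cooling is now 0")
            break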
But they did load-shed. Perhaps not soon enough, but the reason this is publicly known is that they reduced the amount of heat being produced.
But this is the physical world, shit happens.
The algorithm didn't know that the fuse was loose: fine at a 50% duty cycle, but high-resistance and going to blow at 100%.
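Load shedding is the one lever that attacks the heat side of the equation: if the plant can't reject the heat, stop generating it. A minimal sketch of what such a control loop might look like; the thresholds and the notion of "flexible" load are hypothetical, not anything AWS has published:

    # Invented thresholds; real facilities tune these to their hardware.
    def shed_fraction(inlet_temp_c: float) -> float:
        """Map room inlet temperature to the fraction of flexible load to pause."""
        if inlet_temp_c < 27.0:
            return 0.0      # normal operating band, no action
        if inlet_temp_c < 32.0:
            return 0.25     # warm: pause the lowest-priority batch work
        if inlet_temp_c < 38.0:
            return 0.60     # hot: pause all preemptible/spot capacity
        return 1.0          # critical: stop everything that can be stopped

    def kw_to_pause(inlet_temp_c: float, flexible_load_kw: float) -> float:
        return shed_fraction(inlet_temp_c) * flexible_load_kw

    # The catch from above: the controller only sees temperatures. It can't
    # see the loose, high-resistance fuse that's fine at 50% duty cycle and
    # about to blow at 100%, so it always reacts one step late.
    print(kw_to_pause(33.5, 2_000.0))   # -> 1200.0 kW of batch work paused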
A two-loop cycle with a heat exchanger to get rid of the heat.
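That's the standard primary/secondary arrangement: a closed inner loop picks up heat at the racks, a plate heat exchanger hands it to an outer loop (cooling tower, dry cooler, or lake water) that rejects it outside. The sizing on both sides is just the energy balance Q = ṁ·c_p·ΔT; a back-of-the-envelope sketch with invented round numbers:

    CP_WATER = 4186.0    # J/(kg*K), specific heat of water

    def flow_for_load(q_watts: float, delta_t_k: float) -> float:
        """kg/s (~L/s) of water needed to carry q_watts across delta_t_k."""
        return q_watts / (CP_WATER * delta_t_k)

    IT_LOAD_W = 1_000_000    # 1 MW of IT heat to reject

    # Inner loop: 18 C supply to the racks, 28 C return -> dT = 10 K.
    inner = flow_for_load(IT_LOAD_W, 10.0)    # ~23.9 kg/s

    # Outer loop: a wider dT (say 12 K) gets away with slightly less flow;
    # the exchanger just needs its cold side a few K below the inner return.
    outer = flow_for_load(IT_LOAD_W, 12.0)    # ~19.9 kg/s

    print(f"inner loop: {inner:.1f} kg/s, outer loop: {outer:.1f} kg/s")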
One of the slides listed factors that influence the decision of where to build a data center; several of the items involved finding a place with enough space and enough skilled people to staff it. He also commented that sometimes politics is involved in choosing the site for the next data center.
Coastal land is much more expensive, and if you go to a remote coastal site, you probably won't have as good access to power.
Coastal sites are also usually exposed to more severe weather events.
Other fun unpredictable things happen too, e.g. the Diablo Canyon nuclear facility has had issues with debris and jellyfish migrations blocking its saltwater cooling intake.
https://www.nbcnews.com/news/world/diablo-canyon-nuclear-pla...
But NoVA is basically the same sort of economic cluster that Paul Krugman won his Nobel Prize in Economics for studying, just for datacenters.
Toronto is the textbook example of this working. It's on a freshwater lake that gets deep relatively close to shore, and the expensive downtown real estate crowds out traditional cooling methods.
https://en.wikipedia.org/wiki/Deep_Lake_Water_Cooling_System
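The appeal is the operating cost: once the intake pipe is paid for, you're mostly paying for pumps instead of compressors. A rough comparison with invented figures (the COP and pump fraction are generic rules of thumb, not Toronto's actual numbers):

    IT_LOAD_KW = 1_000

    # Conventional chillers: a decent water-cooled plant moves ~5 kW of heat
    # per kW of compressor input (COP ~ 5).
    chiller_kw = IT_LOAD_KW / 5.0          # ~200 kW of compressors

    # Deep-lake loop: ~4 C water year-round means no compressors, just pumps,
    # often estimated at a few percent of the heat load.
    pump_kw = IT_LOAD_KW * 0.03            # ~30 kW of pumps

    print(f"chillers: {chiller_kw:.0f} kW, lake loop: {pump_kw:.0f} kW")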
AWS EC2 outage in use1-az4 (us-east-1)
https://news.ycombinator.com/item?id=48057294