Weird that https://www.cloudflarestatus.com/ isn't reporting this properly. It should be full of red blinking lights.
Something must have gone really wrong.
If a closing brace can take your whole infra down, my guess is that we'll see more of this.
I don't think anyone's is.
They shouldn't need to do that unless they're really disorganised. CEOs are not there for day-to-day operations.
> Investigating - Cloudflare is investigating issues with Cloudflare Dashboard and related APIs.
> These issues do not affect the serving of cached files via the Cloudflare CDN or other security features at the Cloudflare Edge.
> Customers using the Dashboard / Cloudflare APIs are impacted as requests might fail and/or errors may be displayed.
Their own website seems down too https://www.cloudflare.com/
--
500 Internal Server Error
cloudflare
"Might fail"
which datacenter got flooded?
It's scheduled maintenance, so the SLA shouldn't apply, right?
They seem to now, a few minutes after your comment.
That's not how status pages work if they're implemented correctly. The real reason status pages aren't updated is SLAs. If you agree to a contract promising 99.99% uptime, your status page had better reflect that, or it invalidates many contracts. This is why AWS also lies about its uptime and status page.
These services rarely experience outages according to their own figures, but rather "degraded performance" or some other language that talks around the issue rather than acknowledging it.
It's like buying a house: you need an independent surveyor, not the one offered by the developer/seller, to check for problems with the foundations or rotting timber.
Most of the time people will just get by and write off even a full day of downtime as a minor inconvenience. Loss of revenue for the day - well, you'll most likely have to eat that, because going to court and having lawyers fight over it will most likely cost you as much as just forgetting about it.
If your company goes bankrupt because AWS/Cloudflare/GCP/Azure is down for a day or two - guess what - you won't have money to sue them ¯\_(ツ)_/¯ and will most likely have a bunch of more pressing problems on your hands.
I'm sure there are gray areas in such contracts but something being down or not is pretty black and white.
Is it? Say you've got some big geographically distributed service doing billions of requests per day with a background error rate of 0.0001%. What's your threshold for saying whether the service is up or down? Your error rate might go to 0.0002% because a particular customer has an issue, so that customer would say it's down for them, but for all your other customers it would be working as normal.
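To make that concrete, here's a rough sketch (Rust, with hypothetical request counts and SLA thresholds made up for illustration) of how the same background error rate reads as "up" or "down" purely depending on where you draw the line:

    fn main() {
        // Hypothetical numbers, roughly matching the comment above.
        let total_requests: f64 = 2_000_000_000.0; // requests per day
        let failed_requests: f64 = 4_000.0;        // ~0.0002% background error rate

        let error_rate = failed_requests / total_requests;

        // The same rate passes or fails depending on the SLA threshold chosen.
        for (sla, allowed) in [("99.99%", 1e-4), ("99.999%", 1e-5), ("99.9999%", 1e-6)] {
            let verdict = if error_rate > allowed { "down" } else { "up" };
            println!("{sla} SLA: error rate {:.4}% -> {verdict}", error_rate * 100.0);
        }
    }

Same service, same numbers; whether it counts as an outage is a contract question more than an engineering one.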
This is so obviously not true that I'm not sure if you're even being serious.
Is the control panel being inaccessible for one region "down"? Is their DNS "down" if the edit API doesn't work, but existing records still get resolved? Is their reverse proxy service "down" if it's still proxying fine, just not caching assets?
The reality is that in an incident, everyone is focused on fixing the issue, not updating status pages; automated checks often fail or give false positives too. :/
If communication disappears entirely during an outage, the whole operation suffers. And if that is truly how a company handles incidents, then it is not a practice I would want to rely on. Good operations teams build processes that protect both the system and the people using it. Communication is one of those processes.
There is no quicker way for customers to lose trust in your service than for it to be down and for them not to know that you're aware and trying to fix it as quickly as possible. One of the things Cloudflare gets right is the frequent public updates when there's a problem.
You should give someone the responsibility for keeping everyone up to date during an incident. It's a good idea to give that task to someone quite junior - they're not much help during the crisis, and they learn a lot about both the tech and communication by managing it.
"Cloudflare Dashboard and Cloudflare API service issues"
Investigating - Cloudflare is investigating issues with Cloudflare Dashboard and related APIs.
Customers using the Dashboard / Cloudflare APIs are impacted as requests might fail and/or errors may be displayed. Dec 05, 2025 - 08:56 UTC
500 Internal Server Error cloudflare
No need. Yikes.
(edit: it's working now (detecting downdetector's down))
This one is green: https://downdetectorsdowndetector.com
This one is not opening: https://downdetectorsdowndetectorsdowndetector.com
This one is red: https://downdetectorsdowndetectorsdowndetectorsdowndetector....
software was a mistake
Imagine how productive we'll be now!
We can now see which companies have failed in their performative systems design interviews.
Looking forward to the post-mortem.
On what? There are lots of CDN providers out there.
If you switch from CF to the next CF competitor, you've not improved this dependency.
The alternative here is complex or even non-existent. Complex would be some system that lets you hot-swap CDNs, or fallback DDoS protection services, or building your own in-house. Which, IMO, is the worst thing to do if your business is elsewhere. If you sell, say, pet food online, the dependency risk that comes with a vendor like CF is quite certainly less than the investment needed for - and risk associated with - building DDoS protection or a CDN on your own; all investment that's not directed at selling more pet food or getting higher margins doing so.
It turns out that so far, there isn't one - other than contacting the CEO of Cloudflare rather than switching on a temporary mitigation measure to ensure minimal downtime.
Therefore, many engineers at affected companies would have failed their own systems design interviews.
In some cases it is also a valid business decision. If you have two hours of downtime every five years, it may not have a significant revenue impact. Most customers think it's too much bother to switch to a competitor anyway, and even if it were simple, the competition might not be better. Nobody gets fired for buying IBM.
The decision was probably made by someone else who has since moved on to a different company, so they can blame that person. It's only when downtime significantly impacts your future ARR (and bonus) that leadership cares (assuming someone can even prove that they actually lose customers).
If it turns out that this was really just random bad luck, it shouldn't affect their reputation (if humans were rational, that is...)
But if it is what many people seem to imply, that this is the outcome of internal problems/cuttings/restructuring/profit-increase etc, then I truly very much hope it affects their reputation.
But I'm afraid it won't. Just like Microsoft continues to push out software that, compared to competitors, is unstable, insecure, frustrating to use, lacks features, etc., without it harming their reputation or even their bottom line too much. I'm afraid Cloudflare has a de facto monopoly (technically: a big moat) and can get away with offering poorer quality at increasing prices by now.
I've told many people/friends who use Cloudflare to look elsewhere. When such a huge percentage of the internet flows through a single provider, and when that provider offers a service that allows them to decrypt all your traffic (if you let them install HTTPS certs for you), not only is that a hugely juicy target for nation-states but the company itself has too much power.
But again, what other companies can offer the insane amount of protection they can?
The problem is architectural.
"How do you know?"
"I'm holding it!"
Reddit was once down for a full day and that month they reported 99.5% uptime instead of 99.99% as they normally claimed for most months.
There is this amazing combination of nonsense going on to achieve these kinds of numbers:
1. Straight-up fraudulent information on the status page, reporting incidents as more minor than any internal monitor would claim.
2. If it's working for at least a few percent of customers, it's not down. Degraded doesn't count.
3. If any part of anything is working, then it's not down. In the Reddit case, even if the site was dead, as long as the image server was still 1% functional and answering some internal ping, the status stays good.
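For scale on the Reddit example: assuming a 30-day month, a full 24-hour outage is 24 / 720 ≈ 3.3% downtime, i.e. roughly 96.7% uptime, while the reported 99.5% only admits to about 3.6 hours of downtime (0.5% of 720 hours) for the whole month.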
canva.com
chess.com
claude.com
coinbase.com
kraken.com
linkedin.com
medium.com
notion.so
npmjs.com
shopify.com (!)
and many more I won't add bc I don't want to be spammy.
Edit: Just checked all my websites hosted there (~12), they're all ok. Other people with small websites are doing well.
Only huge sites seem to be down. Perhaps they deal with them separately - the premium tier of Cloudflare clients - and those went down. Dang.
Can't get to the Dashboard though.
Nice thing about Cloudflare being down is that almost everything is down at once. Time for peace and quiet.
>We will be performing scheduled maintenance in ORD (Chicago) datacenter
>Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region.
Looks like it's not just Chicago that CF brought down...
I thought we were meant to learn something ... ?
The previous one affected European users for >1h and made many Cloudflare websites nearly unusable for them.
Of course, vibe coding will always find a way to make something horribly broken but pretty.
So it seems like it's just the big ol' "throw this big orange reverse proxy in front of your site for better uptime!" part that's broken...
[0] Workers, Durable Objects, KV, R2, etc
Cynicism aside, something seems to be going wrong in our industry.
P.S. it’s a joke, guys, but you have to admit it’s at least partially what’s happening
.unwrap() literally means “I’m not going to handle the error branch of this result, please crash”.
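For anyone who doesn't write Rust, a minimal sketch of the difference (a made-up config-loading example, not Cloudflare's actual code):

    use std::fs;

    fn main() {
        // What .unwrap() does: if read_to_string returns an Err, panic and
        // take the whole process down right here.
        // let config = fs::read_to_string("config.toml").unwrap();

        // Handling the error branch instead: log it and fall back to a default,
        // so one bad file doesn't become a crash.
        let config = fs::read_to_string("config.toml").unwrap_or_else(|e| {
            eprintln!("could not read config.toml ({e}), using defaults");
            String::new()
        });

        println!("loaded {} bytes of config", config.len());
    }

.unwrap() is fine in tests and prototypes; in a long-running proxy it's a loaded footgun.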
> The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they’ve been fixed. There’s nothing wrong with it. It doesn’t acquire bugs just by sitting around on your hard drive.
> Back to that two page function. Yes, I know, it’s just a simple function to display a window, but it has grown little hairs and stuff on it and nobody knows why. Well, I’ll tell you why: those are bug fixes. One of them fixes that bug that Nancy had when she tried to install the thing on a computer that didn’t have Internet Explorer. Another one fixes that bug that occurs in low memory conditions. Another one fixes that bug that occurred when the file is on a floppy disk and the user yanks out the disk in the middle. That LoadLibrary call is ugly but it makes the code work on old versions of Windows 95.
> Each of these bugs took weeks of real-world usage before they were found. The programmer might have spent a couple of days reproducing the bug in the lab and fixing it. If it’s like a lot of bugs, the fix might be one line of code, or it might even be a couple of characters, but a lot of work and time went into those two characters.
> When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work.
From https://www.joelonsoftware.com/2000/04/06/things-you-should-...
Also, I don't think every one of their services got affected. I'm using their proxy and their Pages service, and both are still up.
Impossible not to feel bad for whoever is tasked with cleaning up the mess.
But my goodness, they're really struggling over the last couple weeks... Can't wait to read the next blog post.
I have a few domains on Cloudflare and all of them are working with no issues, so it might not be a global issue.
Please avoid Imgur.
> We are sorry, something went wrong.
> Please try refreshing the page in a few minutes. If the problem persists, please visit status.cloud.microsoft for updates regarding known issues.
The status page of course says nothing
Even if you could, having two sets of TLS termination is going to be a pain as well.
Then I go to Hacker News to check. Lo and behold, it's Cloudflare. This is sort of worrying...
bunny.net
fastly.com
gcore.com
keycdn.com
Cloudfront
Probably some more I forgot now. CF is not the only option and definitely not the best option.
All give me
"500 Internal Server Error cloudflare.."
So I'm guessing yes.
Representative of having the best developers behind it.
Blender Artists works, but DownDetector and Quillbot don't.
So my guess is yes, it's down.
I've been a Cloudflare fan for the longest time, but the more they grow, the more they look like the weak link of the internet. This is the second major outage in a few weeks. Terrible.
Is this a joke?
And their blog post with the above statement is also down:
unknown: failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://production.cloudflare.docker.com/registry-v2/docker/registry/v2/blobs/sha256/....
So, coffee time. The Cloudflare status page says that it's the dashboard and Cloudflare APIs that are down. I wonder if the problem is focused on larger sites because they are more dependent on / integrated with the Cloudflare APIs. Or perhaps it's only an Enterprise-tier feature that's broken.
If it's not everything that is down, I guess things are slightly more resilient than last time?
At this point picking vendors that don't use Cloudflare in any way becomes the right thing to do.
If a company was able to overcome all the red tape within three weeks and not be impacted today, that's impressive.
Whenever I deploy a new release to my 5 customers, I am pedantic about having a fast rollback. Maybe I'm not following the apparent industry standard and should instead just wing it.
but wow, it must be stressful to deal with this
Gentle reminder that every affected company brought it upon themselves. Very few companies care about making their systems resilient to third-party failures. This is just another wake-up call for them.
All the sites that were 500 error before are able to load now.
Time for everyone to drop this company and move on to better solutions (until those better solutions rot from the inside out, just like their predecessor did)
If you host something that actually matters, that other people depend upon, please review your actual needs and, if possible, stop making yourself _completely_ dependent on giant cloud corporations.
Thank you, Cloudflare, for again proving my point.
> Dec 05, 2025 - 09:12 UTC
If it weren't for the recent Cloudflare outages, I never would have considered that this was the problem.
Even up until I saw this, I assumed it was an ISP issue, since Starlink still worked using 1.1.1.1. Now I'm thinking it's a Cloudflare routing problem?
You haven't actually watched Mad Max, have you? I do recommend it.
The other companies working at that scale have all sensibly split into geographical regions & product verticals with redundancy, & it's rare that "absolutely all of AWS everywhere is offline". This is two total global outages in as many weeks from Cloudflare, and a third "mostly global" outage the week before.
Wise was just down, which is a pretty big one.
Also odd how some websites were down this time that weren't down during the global outage in November.
This is becoming a meme.
It also went down multiple times in the past; not to say that's bad, everyone does from time to time.
And they're back before I finished the comment. Such a pity, I was hoping to hog some more Claude for myself through Claude Code.
Either way it's been interesting to see the bullets I've been dodging.
What solutions are there for Multi DNS/CDN failover that don't rely on a single point of failure?