> Maybe that's the core of this message. Face your fears. Put your service on the internet. Maybe it goes down, but at least not by yet another Cloudflare outage.
Well, I'd rather have my website going down (along with half the internet) be the concern of a billion-dollar corporation with thousands of engineers than mine.
Still a bit weird to pretend we now have cyber weather that takes our webpages down.
The reaction to AWS US-East-1 going down demonstrates this. As so many others were in the same boat, companies got a pass on their infrastructure failing. Everyone was understanding.
Depends on the frame of reference of “single point-of-failure”.
In the context of technical SPOFs, sure. It’s a distributed system across multiple geographies and failure domains to mitigate disaster in the event any one of those failure domains, well, fails.
It doesn’t fix that technology is operated by humans who form part of the sociotechnical system and build their own feedback loops (whose failures may not be, in fact are likely not going to be, independent events).
Assessing SPOFs also needs to contemplate the resilience and independence of the system's operators from the managing organisation. There is one company that bears accountability for operating CF infra. The pressures, headwinds, policies and culture of that organisation can still contribute to a failure in their supposedly fully distributed and immune system.
For most people hosting behind Cloudflare probably makes sense. But you need to understand what you’re giving up in doing so, or what you’re sacrificing in that process. For others, this will lead to a decision _not_ to use them and that’s also okay.
We once had a Cloudflare outage. My CEO asked me to "mitigate it". I hit him back with: okay, but that could potentially take me weeks or months, since we're tiny. Do you really want to take away that many resources just to mitigate a once-every-few-years, half-the-internet-is-down issue?
He got it really quickly.
I did mitigate certain issues that were just too common not to, but when it comes to this sort of thing, you gotta ask "is it worth it"
Edit: If you're so small that Cloudflare isn't needed, then you don't care if you go down when half the internet does. If you're so big that you need Cloudflare, you don't wanna build that sort of feature set yourself. The perfect problem.
If you're using other features like page rules you may need to stand up additional infrastructure to handle things like URI rewrites.
If you're using CDN, your backend might not be powerful enough to serve static assets without Cloudflare.
If you're using all of the above, your work to temporarily disable it becomes fairly complicated.
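For illustration, a rough sketch of what standing that up at the origin could look like (assuming nginx; the rewrite and paths are made up, not an actual Cloudflare page rule):

    # http{} context: a stand-in for a page rule that rewrites /blog/* to /posts/*,
    # plus serving static assets directly with long cache lifetimes.
    server {
        listen 80;
        server_name example.com;               # placeholder

        rewrite ^/blog/(.*)$ /posts/$1 permanent;

        location /static/ {
            root /var/www/site;                # assumed asset directory
            expires 7d;
            add_header Cache-Control "public";
        }

        location / {
            proxy_pass http://127.0.0.1:8080;  # assumed app backend
        }
    }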
DDoS protection is one nice side effect of privacy, but I'd imagine there are others too.
Are these common?
I guess by using Cloudflare you are pooling your connection with other services that are afraid of being DDoSed and actively targeted, whether by politics or by sheer volume. Unless you have volume or political motivations, it might be better not to pool (or to pool for other purposes).
The last I saw you can hire DDoS as a service for like $5 for a short DDoS, and many hosts will terminate clients who get DDoSed.
It's incredible we took a decentralized model and centralized it with things like Cloudflare and social media. I think we need pushback on this somehow, but it's hard right now to see how it's possible. I think the recent talk about federation has been helpful, and with the world falling into right-wing dictatorships, this privacy and decentralization is more important than ever.
I am hosted on Cloudflare, but my stack is also capable of running on a single server if needed; most libraries are not designed with this in mind.
I’m also wondering if all these recent outages are connected to cyber attacks, the timing is strange.
Self hosting will also bring its own set of problems and costs.
Something like a TTL of 86400 (24 hours) gets you through a lot of outages just because all the caches will still have your entries.
You can also separate your DNS provider from your registrar, so that if your DNS provider goes down you can switch DNS providers at the registrar (assuming the registrar is still online).
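For concreteness, a fragment of what that could look like in a zone file hosted at a third-party DNS provider (names and address are placeholders):

    $TTL 86400                                ; 24 hours: resolvers keep answers cached through short outages
    @      IN  NS  ns1.dns-provider.example.  ; nameservers run by a DNS provider...
    @      IN  NS  ns2.dns-provider.example.  ; ...separate from the registrar
    www    IN  A   203.0.113.10               ; origin server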
> Fair point but you also get exposed if the dns provider has an outage
The usual workaround here is to put two IP addresses in your A record, one that points to your main server on hosting provider A, and the other to your mirror server on hosting provider B.
If your DNS provider goes down, cached DNS should still contain both IPs. And if one of your hosting providers goes down as well, clients should time out and then fall back to the other IP (I believe all major browsers implement this).
Of course this is extra hassle/cost to maintain, and if you aren't quite careful in selecting hosting providers A and B, there's a good chance they have coordinated failures anyway (i.e. both have a dependency on some 3rd party like AWS/Cloudflare).
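Roughly, the record set for that would look like this (addresses are placeholders):

    www.example.com.  86400  IN  A  203.0.113.10    ; main server at hosting provider A
    www.example.com.  86400  IN  A  198.51.100.20   ; mirror at hosting provider B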
Oh, you weren’t doing that? You usually have a reason to act instead of just making work for yourself? Then this post is useless to you.
I would have shared BleepingComputer's blog post about the same attack, but it's behind Cloudflare, haha.
But yeah, if you don't need Cloudflare, like, at all, obviously don't use them. But, who can predict whether they're going to be DDOS-ed in advance? Fact is, most sites are better off with Cloudflare than without.
Until something like this happens, of course, but even then the question of annual availability remains. I tried to ask Claude how to solve this conundrum, but it just told me to allow access to some .cloudflare.com site, so, ehhm, not sure...
Seriously: having someone in charge of your first-line traffic that is aware of today's security landscape is worth it. Even if they require an upgrade to the "enterprise plan" before actually helping you out.
I see many people saying this but be honest, do you know this for sure or are you just guessing? I've experienced DDoS so I know I'm not just guessing when I say that if your website gets DDoSed your hosting service would just take your website down for good. Then good luck running circles around their support staff to bring your website back up again. Maybe it won't kill your business but it'll surely create a lot of bad PR when your customers find out how you let a simple DDoS attack spiral out of control so bad that your host is refusing to run your website anymore.
Citation direly needed.
In particular I wonder: what is the total mass of sites of which you consider most to be better off using Cloudflare? I would be curious what facts you base your assumption on. How was the catalogue of "all" procured? How are you so confident that "most" of this catalogue are better off using CF? Do you know lots of internals about how strangers (to you) run their sites? If so, mind sharing them?
Most. A lot of simple sites are hosted at providers that will be taken down themselves by run-of-the-mill DDOS attacks.
So, what will such providers do when confronted with that scenario? Nuke your simple site (and most likely the associated DNS hosting and email) from orbit.
Recovering from that will take several days, if not weeks, if not forever.
Dud(ett)e, it's a message board comment, not a scientific study.
But do you really doubt that most ISPs will gladly disable your 1Gb/s home-slash-SMB connection for the rest of the month in face of an incoming 1Tb/s DDOS? Sure, they'll refund your €29,95, but... that's about it, and you should probably be happy they don't disconnect you permanently?
There's no "but..." - just claims you made that I dared to question on fundamentals, which you obviously want to dodge. I won't go as far as questioning your intellectual honesty here, but I really have a hard time seeing it. So now, for real: good day.
In fact, I expect my host to kick weird porn websites from their servers so that I don't have any bad neighbours, we're running legitimate businesses here sir.
Maybe they'd push me into upgrading my server, as a sort of way of charging me for the increased resources, which is fine. If I'm coasting on a $7 VPS and my host tanks a DDoS like a hero, sure, let's set up a $50-100 dedicated server, man.
In business loyalty pays and it goes both ways.
I have more than 1 hosting provider though, so I can reroute if needed, and even choose not to reroute to avoid infecting other services, isolating the ddosed asset.
If this is their core argument for not using a CDN, then this post sounds like terribly bad advice. Hopes and prayers do not make a valid security strategy; appropriate controls and defenses do. The author seems to be completely missing that it takes only a few bucks to buy DDoS as a service. Sometimes people do DDoS your small blog because some random stranger didn't like something you said somewhere online. Speaking from experience. Very much the reason I'm posting this with a throwaway account. If your website receives a DDoS, your host will take down your server. Nobody wants to be in this situation, even for a personal, small blog.
I'm not too worried about someone DDOSing my personal site. Yeah, they could do it. And then what? Who cares?
Have you experienced a targeted DDoS attack on your personal site? I have. I too had an attitude like yours, back when I didn't know how nasty targeted DDoS attacks can get.
If you're not too worried about someone DDoSing your personal site, or about your host then taking your website down and you having to run circles around their support staff to bring it back up again, then I guess you don't have a problem. It's nice that you don't care. (Honestly speaking. Not being sarcastic at all.)
Personally, I wouldn't mind DDoS on my personal site if the problem was just the DDoS. Unfortunately, mostly it isn't. A DDoS has other repercussions which I don't want to deal with exactly because it's a personal site. I just don't want to spend time with customer support staff to find out if and when I can bring my website back up again. DDoS on my personal website by itself isn't all that bad for me. But having to deal with the fallout is a pain in the neck.
These are very different situations. With a DDoS the disruption ends when the attack ends, and your site should become available without any intervention. Your host taking down your site is a whole different matter, you have to take action to have this fixed, waiting around won't cut it.
It is obvious those two are very different situations. I'm not sure I understand your point. Yeah, nobody will be bothered by a short 15-minute DDoS attack. I prolly wouldn't even notice it unless I'm actively checking the logs. But DDoS attacks are rarely that short. When someone is DDoSing you, they're doing it with a purpose. Maybe they're just pissed at you.
My point is... a sustained DDoS attack will just make your host drop you. So one situation directly leads to another and you are forced to deal with both situations, like it or not.
Your host taking down the site and forgetting to bring it back up after a DDoS attack isn't a common thing with any host, unless it's the kind that does this routinely even without a DDoS. And then you should look long and hard at your choice of hosting.
Either you suffer from a DDoS attack and come back when it's over, or you have a host that occasionally brings your site down and fails to bring it up until you chase them. But one does not follow the other without a lot of twisting.
Assume a "personal" blog or site is not making money for the owner, and they have backups of the site to restore if the VM gets wiped or defaced. Why spend money on DDoS protection if it is unlikely to ever occur, much less affect someone monetarily?
This is not even considering other issues with Cloudflare, like them MITMing the entire internet and effectively being an unregulated internet gatekeeper.
And the resulting downtime might be even longer than what you'd get with Cloudflare.
This is a good essay: https://inoticeiamconfused.substack.com/p/ive-never-had-a-re...
Your host, assuming you're hosting your site on a VPS. Many of them have a policy of terminating clients who get DDoSed.
I also blocked all the AI crawlers after moving to CloudFlare and have stopped a huge amount of traffic theft with it.
My website is definitely much more stable, and loads insanely faster, since moving to CloudFlare.
less reliable (more hops -> less reliable)
dependence on the US regime
What is the benefit to having small blogs be decentralized?
Here's your confusion: personal sites don't need a valid security strategy. They don't need nine nines of uptime. They don't need a CDN, an ability to deploy, etc., etc. That's all (and forgive the origins of the expression, but it is the most accurate description) cargo culting. There's no issue if they're down for a couple of days. Laugh it off.
Whereas if you put your site behind the defaults of a Cloudflare denial-of-service wall, then real human people won't be able to access your site for as long as you use Cloudflare. That's much longer, and many more actual humans blocked, than any DDoS from some script kiddie. Cloudflare is the ultimate denial of service to everyone who doesn't use Chrome or some other corporate browser.
And forget about hosting feeds on your website if you're behind cloudflare. CF doesn't allow feed readers because they're not bleeding edge JS virtual machines.
As you say, the risk is not a temp outage for small users, the risk is your isp or host or whatever disowning you.
I haven't tried managing my own site in ages, but I get the impression that the modern Internet is pretty much just one big constant DDoS attack, punctuated by the occasional uptick in load when someone decides to do it on purpose instead of out of garden variety apathetic psychopathy.
True, but they are free and effortless, unlike "appropriate controls and defenses"
It might overwhelm their routers etc too?
Yes. Moderation can only do so much.
I would gladly be in this situation if it otherwise lets me remove a large source of complexity, avoid paying a few bucks, and avoid adding to the centralization of the Internet, for my personal, small blog.
"No one was fired for buying IBM (or cloudflare)."
Fat chance arguing against the people holding the purse strings.
I had an issue with the theme of your site probably not being important anyway. If your site probably isn’t important then it’s probably ok that it’s down too.
Unless there is a better option, just asking real businesses (no matter how small) to not use cloudflare is not an option.
Don’t trust your traffic to autopilot; get it back in your hands, take a look into your bots (1), perhaps there is no real need for CloudFlare at all.
So, every now and then I think about at least putting our assets on a CDN, with the option of using it in the case of a DDoS attack, but then I see things like today and the recent AWS problems, and I just get the feeling I should keep everything close.
Usually it's big actors like Facebook, Azure and OpenAI who bombard my servers without any respect or logic. I need to update my access rules constantly to keep them away (using Cloudflare). Sometimes it's clustered traffic, more classic DDoS, from China, Russia or America. That I could easily filter with the DDoS protection from my hosting (which is cheaper than Cloudflare anyway).
What should I use, if not Cloudflare, to block these with "complex rules" strong enough to survive hundreds of concurrent requests from big companies?
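One hedged, origin-side sketch of what such rules might look like without Cloudflare (assuming nginx in front of the backend; the user agents and rate limit are illustrative, not from the comment above):

    # http{} context: classify heavy crawlers by User-Agent and define a per-IP rate bucket.
    map $http_user_agent $blocked_bot {
        default                                              0;
        ~*(GPTBot|ClaudeBot|Bytespider|facebookexternalhit)  1;
    }
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        listen 80;
        server_name example.com;                    # placeholder

        if ($blocked_bot) { return 403; }           # refuse known crawlers outright

        location / {
            limit_req zone=perip burst=20 nodelay;  # absorb small bursts, reject floods
            proxy_pass http://127.0.0.1:8080;       # assumed backend
        }
    }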
And DDOS is hardly my concern, and was never the reason I went to CF in the first place, so the whole foundation of this seems to be a strawman.
The stories are real, and in some cases you may need it; in most cases you don't. And it clearly doesn't always protect you.
I don't think it is fair to characterize Cloudflare as a single point of failure, at least in the traditional sense.
If you are a small site, it means that on the rare occurrence that CF goes down, you will have hardly any exposure to upset users.
And... if you are a small site, it probably means you're not going to be constantly logging into your shitty small VPS trying to do security audits and updates, mitigate new zero days, keep every single piece of software in your stack up to date, and CF is an excellent security blanket.
Even on top of ALL of that, you're literally going to propose changing away from a piece of software with quite literally hundreds of convenience benefits (free CDN, workers that can act as reverse proxies, security layers, instant DNS, Argo routing which anecdotally seems to help, blah blah blah), because of......... a few hours of downtime in a year? Really?
I can't think of any reason not to use cloudflare. It's _dead easy_ to set up too.
I can't help but think that the author either doesn't understand what Cloudflare actually does, or just has a poor understanding of what goes on on the internet. Probably a bit of just being in a bad mood about Cloudflare being down, too.
But of course I understand that for most users this isn't really a concern, and the benefits that CF provides are much more important than the centralization problem.
I'm all for decentralizing and I don't feel the need for CloudFlare personally, but yes, arguing that people really shouldn't be doing it, period, requires some good technical reason or a more convincing political stance.
The gateway was checked regularly for random data and the client would stop a download after 1MB, causing the gateway to stop sending the rest of the file.
However, the Cloudflare CDN wouldn't stop when the client stopped, causing the gateway to send the whole file. Some files are multiple GBs big, so I suddenly got an invoice of 600€.
Do I need to? Definitely not. Am I going to stop using Cloudflare? Also no.
When it comes to bigger sites, I think having someone to blame for an outage (especially when these big ones are effectively "the whole Internet broke") is still probably preferable to managing it all yourself.
Way back years ago when I used to roll my own, any problems I had to fix took extremely long and were painful. Could I do it again today? Yeah, sure, but I know I couldn't do a better job than Cloudflare.
Anyone have a suggestion for an alternative? I don't want to pay per domain, but I would pay an agency fee for like 100 domains for a few hundred bucks, sorta thing, like Migadu offers for email.
And we all lived happily ever after.
Should I just stop being paranoid about "leaking my IP address" and self-host it 100%? All I fear is that my family will have to live with degraded internet experience because some script kiddie targeted me for fun.
1. Put a moderate amount of money toward having the world's experts in uptime keep your site performing fast, and accept that occasionally your service goes down at the same time as everyone else.
2. Roll your own service, hire a large number of expensive experts to try to solve these problems yourself, and be responsible for your own outages and failures which will happen eventually and probably more frequently.
If no one is going to die from your service going down, it seems like this is a perfectly reasonable third-party dependency. And if the issue is just your contract's SLA or a financial customer, the saving that comes from using Cloudflare can probably be worked through via negotiations.
Cloudflare handles caching of static resources, rate limiting, and blocking of bots with very little configuration.
Also, my ISP here in the UK doesn't provide static IP addresses, so Cloudflare allows me to avoid using a dynamic DNS service, and avoid exposing ports on my router.
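That setup sounds like Cloudflare Tunnel; a minimal sketch of what it might look like, assuming a local service on port 8080 and a tunnel named `home` (both placeholders, not taken from the comment above):

    cloudflared tunnel login                                   # authorize cloudflared for the zone
    cloudflared tunnel create home                             # creates the tunnel and a credentials file
    cloudflared tunnel route dns home www.example.com          # points the hostname at the tunnel via CNAME
    cloudflared tunnel run --url http://localhost:8080 home    # outbound-only connection; no router ports exposed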
Incidentally, if you can make a site "static", so far I'm mostly liking AWS CloudFront served from S3. After many years serving my site from a series of VPSs/hosters/colo/bedroom. It's fast and inexpensive, and so far perfectly solid.
Deploying consists of updating S3, and then triggering a CloudFront invalidation, which takes several seconds. The two key fragments of my deploy script (not including error checking, etc.), after the Web site generator has spat all the files into a staging directory on my laptop where I can test them as `file:` URLs, are:
    aws s3 sync \
        --profile "$AwsProfile" \
        --exclude "*~" \
        --delete \
        "$WebStagingDir" \
        "s3://${S3Bucket}/"

and then:

    aws cloudfront create-invalidation \
        --profile "$AwsProfile" \
        --distribution-id "$CloudFrontDistId" \
        --paths "/*" \
        < /dev/null 2>&1 | cat
The main thing I don't like about it (other than the initial setup wizards having a couple bugs) is that it doesn't automatically map `foo/` URLs to `foo/index.html` S3 objects. The recommended solution was to use AWS Lambda, which I did temporarily, and it works. But when I get a chance, I will see whether I can make my deploy script duplicate S3 `foo/index.html` as S3 `foo/` and/or `foo`, so that I can get rid of the worse kludge of using Lambda. Unless CloudFront offers a feature to do this before then.
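One possible shape for that deploy-script step (a rough, untested sketch using the same profile and bucket variables as above; it only covers the `foo/` form, not bare `foo`):

    # Mirror each "foo/index.html" object to an object whose key is literally "foo/",
    # so CloudFront's S3 REST-API origin can answer trailing-slash URLs directly.
    aws s3api list-objects-v2 \
        --profile "$AwsProfile" \
        --bucket "$S3Bucket" \
        --query "Contents[?ends_with(Key, '/index.html')].Key" \
        --output text \
    | tr '\t' '\n' \
    | while read -r key; do
        # copy-object keeps the source metadata, so "foo/" is still served as text/html
        aws s3api copy-object \
            --profile "$AwsProfile" \
            --copy-source "${S3Bucket}/${key}" \
            --bucket "$S3Bucket" \
            --key "${key%index.html}" \
            > /dev/null
    done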
You’d see those same errors if someone took their own site down while working on it, probably accidentally.