The only time I still get asked to click motorcycles is not Cloudflare, it's Google. They absolutely hate when you try to do a search in an incognito window and will give you an unpassable captcha until you give up and use DDG instead.
The shorter the average interaction with the site, the worse the burden Cloudflare becomes. E.g. looking up a definition is usually a 5-10 second job; Cloudflare can make it take almost an order of magnitude longer.
In defence of motorcycles and traffic lights: those captchas are annoying too, but they do help humanity in a tiny, tiny way. By contrast, watching a Cloudflare loading spinner is stupefyingly useless.
(apologies for ranting. I find it disproportionately irritating even though it's only a few minutes per day, possibly due to the sheer repetition involved).
You should choose one so things like caching will work properly; search engines also really want you to keep to a single hostname for the same content.
add tfd = http://web.archive.org/web/2026if_/https://www.thefreedictio... as a search engine. (This is our internet now.)
This may be one reason:
https://blog.cloudflare.com/ddos-threat-report-2025-q3/
Top 10 largest sources of DDoS attacks: 2025 Q3
1. Indonesia
2. Thailand
3. Bangladesh
Vietnam and Singapore also make it into the top 10. The latter is a bit of an outlier being rich and having a small population.
What's the better solution? I certainly don't know.
Even on sites that I manage with Cloudflare myself, I see the same. Even with relaxed mode on, visiting the site via mobile can trigger the Cloudflare human validation.
Yes, every one-pager running on Vercel/Netlify sits behind Cloudflare now because no one wants to risk an insane cloud bill in case of an attack. People have become hopelessly dependent on managed cloud services.
However, to be fair, there's less captcha solving nowadays since they introduced their one-click challenge (though that's not always what you get).
I'm in the same situation. Linux, Firefox, Sweden, with a residential IP that has been mine for weeks/months. Who's massively DDoS'ing with residential Telia IPs?!
The difference is: I can't get past Cloudflare's captcha for the past 2-3 years (on Firefox), have to use Chrome for the few sites I do need/want to see behind this stupid wall.
By now I've sent hundreds of reports through their captcha feedback link; I keep doing it in the hope that at some point someone will see them...
Cloudflare makes region blocking very easy.
Businesses were perfectly willing to accept the low security of 1990s email, webservers, and all the other configurations and software. They did not suddenly, out of nowhere, ask for more restrictions (such as restricting email sending to the server "officially responsible" for that domain -- it used to be that you could do the same as with physical mail, where you can drop letters into mailboxes with a "From" address that was not in the same city as the mailbox). They certainly did not volunteer to make everything much more difficult -- and expensive -- to set up and use. It also means a lot more work for their IT staff and a lot more user problems to respond to.
All these annoying restrictions were forced to be implemented by attacks of all kinds.
Because it is so difficult, compromises had to be made. CF's methods are of course full of them, such as taking country and IP ranges into account. Feel free to make practical, implementable, and affordable suggestions for alternative solutions. You may even get a reward from CF if you can come up with something good that allows them to cut back on restrictive policies while at least maintaining the current level of security. It is in the interest of CF's customers to be as accessible as possible, after all.
Spammers have been around since forever and it used to be the webmaster/sysadmin's responsibility to deal with spam in a way that would not hinder user experience. With Cloudflare all that responsibility is aggressively passed on to the user, cumulatively wasting _years_.
As for attackers, I wonder if Cloudflare publishes data showing how many of the billions of websites it "protects" have experienced a significant attack. They don't offer free protection to save the internet, but rather for control -- and no single company should have this much control.
Is the fallacy here not obvious? Yes, spammers have been around since forever, but it's not the same amount of spammers. Whether it's two spammers or two million spammers does make a difference.
I remember some clients in the mid 2000s. They got several spam emails per minute on some accounts. Not kidding. I haven't seen anything like that in recent years.
At that time I was an admin of said student network, and at the same time built TCP/IP based network and email infrastructure at a subsidiary of a large German company as a side job.
So I was an admin of routers, switches, various services (email, Usenet server, webservers, fax server).
Funny enough - we only added a firewall in front of the student network to protect against our own student's experiments rather than against outside intrusions, at least initially (for example, one person setting up their own Usenet server brought down DNS by flooding it with queries)!
We never had any problems with spam or attackers. "you just didn't notice the attacks!" - NO. When you go online today you get an eternal stream of automated intrusion attempts, visible in all your log files.
Today does not even remotely compare with the easy-going Internet of the 1990s.
Usenet, forums, email - they were all very much usable with minimal or zero spam, and very basic user management. Today, with such a basic setup as we used to have, you would be completely swamped and chock-full of spam shortly after putting such a server online.
Well, this is where your argument goes a little wrong IMO. When you're on something more niche (e.g. Firefox on Linux) they just don't care as much about making it work for you, because so few of us are blocked in the process.
And this problem should really be solved with a proper solution, not this fiddly black-magic ruleset stuff. The email thing you mention is a good example. DKIM and SPF are good things that make things more secure in an understandable way. Specifying your legitimate mail handlers is not a workaround, it's good security. In some ways Altman has a good idea with his WorldCoin eyeballs, but I don't support it, for obvious reasons: I don't want my internet identity tied to a single tech bro and some crypto. If we do this kind of thing it has to be a proper government or NGO effort with proper oversight and an appeals process.
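For concreteness, "specifying your legitimate mail handlers" mostly comes down to a couple of DNS TXT records. A hedged illustration for a hypothetical example.com (the provider name, selector `s1`, and key are placeholders, not anyone's real records):

```
; SPF: only example.com's MX hosts and this provider may send mail as @example.com
example.com.               TXT  "v=spf1 mx include:_spf.mailprovider.example -all"

; DKIM: public key receivers use to verify signatures made under selector "s1"
s1._domainkey.example.com. TXT  "v=DKIM1; k=rsa; p=<base64 public key>"
```

Receivers that check these records can reject mail from unauthorized servers, which is exactly the "understandable security" trade-off: more setup work, but the rule being enforced is explicit.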
I've tried making my Linux Firefox identify as Edge on Windows, and that makes things a lot better on some sites (Microsoft especially breaks a lot of M365 functions on purpose if you're not using the "invented here" browser). Many sites don't give me captchas then. But in some cases Cloudflare gets even nastier and blocks me outright, which is really annoying. If I identify as Linux, a lot more sites break, but Cloudflare sticks with captchas.
Anyway I think the age of the captcha is soon over anyway. AI will make it unviable.
> All these annoying restrictions were forced to be implemented by attacks of all kinds.
PS: it's not always attacks; it's also about blocking things that are good for consumers but bad for the site's business model, like preventing screen scraping, which can legitimately help price comparison sites.
That's unaccountability thinking. If I have pests in my rose garden and, as a reaction, I napalm the backyard of everyone in my neighbourhood, that is not the bugs' fault.
But since in reality there is friction, there is no magic mechanism by which those interests force CF to implement a better system: for example, the customers might not have enough knowledge/tech expertise to understand that they're losing 1% of visitors to crude CF filters, and so they never ask for a fix.
The data Cloudflare shows people is the number of requests it "protected" you from and the number of requests it thought legit. There is no indication of the number of false positives, nor, IIRC, of the number of people asked to pass a captcha. The wording implies zero false positives, and I think many people simply assume it's negligible.
No, that's what I said - they may lack knowledge.
"Your face ran into my fist!"
At the same time, the bots are dumb as hell. I have honey pots that basically are as simple as "if you visit this obscure, hidden URL you're banned" or "if the same obscure page for a course is visited for four different courses in a row, then ban all the IPs that were part of that." But they keep coming... like, an infinite number of IPs. I genuinely don't want to use Cloudflare, but I understand why people do. It's absolutely crazy out there.
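The two honeypot rules described above can be sketched in a few lines. This is a hypothetical illustration, not the commenter's actual setup: the path name, the streak length of four, and the class name are all made up.

```python
from collections import deque

# Hidden URL that no human navigation ever links to; visiting it is a tell.
HONEYPOT_PATH = "/private/do-not-crawl"

class HoneypotBanList:
    """Ban IPs that hit a hidden URL, or that walk the same obscure page
    across N different courses in a row (a crawling pattern, not a human one)."""

    def __init__(self, streak_len=4):
        self.streak_len = streak_len
        self.banned = set()
        # (ip, course) pairs for the most recent hits on the obscure page
        self.recent = deque(maxlen=streak_len)

    def record(self, ip, path, course=None):
        """Log one request. Returns True if it should be served, False if banned."""
        if ip in self.banned:
            return False
        if path == HONEYPOT_PATH:
            self.banned.add(ip)
            return False
        if course is not None:
            self.recent.append((ip, course))
            courses = [c for _, c in self.recent]
            if len(courses) == self.streak_len and len(set(courses)) == self.streak_len:
                # Same obscure page viewed for N distinct courses in a row:
                # ban every IP that took part in the streak.
                self.banned.update(i for i, _ in self.recent)
                return False
        return True
```

Note the second rule deliberately bans *all* participating IPs, matching the comment; that also means an unlucky human whose visit lands inside a bot streak gets swept up, which is the usual false-positive cost of this kind of trap.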
The user would know that each pageview is $0.001.
The website owner would know each pageview pays for itself.
We probably could get there with some type of crypto approach. Probably one that is already invented, but not popular yet. I don't know too much about crypto payments, but maybe the Bitcoin Lightning network or a similar technology.
What would be the incentive to send failing payment requests?
The sender does not have a direct communication channel with the receiver. They send the payment to a hop they are connected to (they have a channel with) and it gets routed to the receiver. The first hop would already drop an invalid payment. If they spam them with more invalid payments, all that would happen is that their connection to the Lightning Network would get lost as their channel partners would disconnect from them. The receiver would not receive a single network packet in the whole process.
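The pay-per-pageview idea from upthread can be sketched without committing to any particular payment rail. A toy model, with the visitor names and the millidollar bookkeeping unit my own invention, and the actual Lightning invoicing deliberately left out:

```python
class PageviewMeter:
    """Toy pay-per-pageview accounting: visitors prepay into a balance
    (e.g. by settling a Lightning invoice, not modeled here) and each
    pageview deducts a fixed price. Amounts are whole millidollars
    (1 = $0.001) to avoid floating-point rounding."""

    PRICE_MILLIDOLLARS = 1  # $0.001 per pageview, as in the comment above

    def __init__(self):
        self.balances = {}  # visitor id -> remaining millidollars

    def top_up(self, visitor, millidollars):
        """Credit a settled prepayment to the visitor's balance."""
        self.balances[visitor] = self.balances.get(visitor, 0) + millidollars

    def serve_page(self, visitor):
        """Charge one pageview. True if the visitor could afford it."""
        if self.balances.get(visitor, 0) < self.PRICE_MILLIDOLLARS:
            return False
        self.balances[visitor] -= self.PRICE_MILLIDOLLARS
        return True
```

The point of the sketch is the economics, not the plumbing: the site only does work for requests that have already paid for themselves, so a flood of unpaid requests never reaches the expensive path.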
A lot of people try to downplay Cloudflare's DDoS protection, but it saved me so many times over the years. I honestly don't think there exists a good solution that someone without a lot of money and resources can leverage besides Cloudflare.
It's almost dismissed in the first sentence as "basic DDoS protection", as if there were any other company that provides an ironclad solution besides Cloudflare, especially for free for a tiny niche community. There is none that I am aware of.
The fact is, the internet was built around the idea that everything should be decentralized in order to be resilient.
Resilient to attacks, resilient to outages or any form of censorship.
So, each time Amazon, Cloudflare (...) fails, it reminds us that nobody likes SPOFs.
E.g. for a frontend, give yourself a budget of 1 MB for a static site and 2 MB for a dynamic one, and go from there.
Some previous discussions:
binaryturtle•1mo ago
It's also the entire blockage of older or less mainstream systems that can no longer access sometimes critical websites at all, when the Cloudflare check blocks things entirely because the "browser is out of date" or not on their whitelist. That causes excessive discrimination against poorer folks who can't afford upgrading to newer/other systems that are still eligible to pass Cloudflare's "grace".
Is there any data that supports this suggestion that users with older devices are actually being discriminated against? (% of users actually using older devices incapable of upgrading to browser versions supported by Cloudflare)
I just find it hard to believe users are actually getting denied access because their devices are old. Surely you can still run new versions of Chrome and Firefox on most things [1].
——————
[1] Don't get me wrong, I use Safari, and I find it inflammatory when a site tells me to use a modern browser because it doesn't support Safari (the language more so). But I wouldn't call it discrimination, seeing as I have the option to run Firefox/Chrome from time to time.