after yesterday's reveal[1]: Facebook should certainly be listed under "scams"
I don’t suppose they use DNS to find their command-and-control servers? It’d be funny if Cloudflare could steal the botnet that way. (For the public good. I know that actually doing such a thing would raise serious concerns. Never know, maybe there would be a revival of interest in DNSSEC.) I remember reading a case within the last few years of finding expired domains in some malware’s list of C2 servers, and registering them in order to administer disinfectant. Sadly, IoT nonsense probably can’t be properly fixed, so they could probably reinfect it even if you disinfected it.
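The expired-C2-domain case above can be sketched as a small triage step. A minimal, hypothetical Python helper (the function name, domains, and the lookup-result mapping are all illustrative, not from any real incident): given DNS results for a sample's hard-coded C2 list, split the domains into ones that still resolve and NXDOMAIN ones a defender could register to sinkhole the botnet.

```python
from typing import Optional

def partition_c2(lookups: dict[str, Optional[str]]) -> tuple[list[str], list[str]]:
    """Split a sample's C2 domains by DNS status.

    `lookups` maps each hard-coded C2 domain to its resolved IP, or None
    for NXDOMAIN. NXDOMAIN entries are expired/unregistered names that a
    defender could register for sinkholing.
    """
    live = [d for d, ip in lookups.items() if ip is not None]
    sinkholable = [d for d, ip in lookups.items() if ip is None]
    return live, sinkholable

# Made-up example data:
live, free = partition_c2({
    "c2-alpha.example": "203.0.113.7",  # still resolves: active C2
    "c2-beta.example": None,            # NXDOMAIN: candidate for registration
})
```

In practice the lookup results would come from an actual resolver pass over the extracted domain list; the partition itself is the easy part.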
That's PEBCAK.
bradly•30m ago
Isn't identifying real humans an unsolved problem? I'm not sure efforts to hide the truth that these domains are actually the most requested domains do anyone any favors. Is there something using these rankings as an authoritative list, or are they just vanity metrics similar to the Alexa Top Site rankings of yore? If they are authoritative, then Cloudflare defining "trusted" is going to be problematic, as I would expect them to hide that logic to avoid gaming.
iamkonstantin•24m ago
I'm not sure this was ever a problem to begin with. The obsession with "confirm you are human" has created a lot of "bureaucracy" on a technical level without actually protecting websites from unauthorised use. Why not actually bite the bullet and allow automations to interact with web resources instead of bothering humans to solve puzzles 10 times per day?
> Cloudflare defining "trusted"
They would love to monetise the opportunity, no doubt
nickff•17m ago
This is a great idea if you've developed your full stack, but if you're interfacing with others, it often doesn't work well. For example, if you use an external payment processor and allow bots to constantly test stolen credit card data, you will eventually get booted from the service.
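Even a site that welcomes automation usually throttles the payment path for exactly this reason. A minimal per-client token-bucket sketch (all names and numbers here are hypothetical, not any processor's actual policy): each client gets a small burst of payment attempts, refilled slowly, so automated card-testing can't burn through a processor's tolerance.

```python
import time
from collections import defaultdict

class TokenBucket:
    """Allow up to `burst` attempts per client, refilled at `rate` tokens/sec."""

    def __init__(self, rate: float, burst: int):
        self.rate, self.burst = rate, burst
        self.tokens = defaultdict(lambda: float(burst))  # tokens per client
        self.last = defaultdict(time.monotonic)          # last-seen time per client

    def allow(self, client: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last[client]
        self.last[client] = now
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens[client] = min(self.burst, self.tokens[client] + elapsed * self.rate)
        if self.tokens[client] >= 1:
            self.tokens[client] -= 1
            return True
        return False
```

A gateway would call `allow(client_ip)` (or a card-fingerprint key) before forwarding a charge attempt; rejected requests never reach the processor, which is what keeps the account in good standing.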
isodev•2m ago
bradly•36s ago
I mostly just let the bots have my site, but I also don't have anything popular enough that it costs me money to do so. If I was paying for extra compute or bandwidth to accommodate bots, I may have a stronger stance.
I do feel a burden with my private site, which has a request-an-account form with no captcha or bot-blocking technology. Fake account requests outnumber real ones 100 to 1, but this is my burden as a site owner, not my users' burden. Currently the fake account requests are easy enough to scan, and I think I do a good job of picking out the humans, but I can't be sure, and I fear this only works because I run small software.