Ask HN: How to stop an AWS bot sending 2B requests/month?

288•lgats•3mo ago
I have been struggling with a bot identifying itself as 'Mozilla/5.0 (compatible; crawler)', coming from AWS Singapore and sending an absurd number of requests to a domain of mine, averaging over 700 requests/second for several months now. Thankfully, CloudFlare is able to handle the traffic with a simple WAF rule and 444 response to reduce the outbound traffic.

I've submitted several complaints to AWS to get this traffic to stop; their typical followup is: "We have engaged with our customer, and based on this engagement have determined that the reported activity does not require further action from AWS at this time."

I've tried various 4XX responses to see if the bot will back off; I've tried 30X redirects (which it follows) to no avail.

The traffic is hitting numbers that require me to re-negotiate my contract with CloudFlare and is otherwise a nuisance when reviewing analytics/logs.

I've considered redirecting the entirety of the traffic to aws abuse report page, but at this scale it's essentially a small DDoS network and sending it anywhere could be considered abuse in itself.

Are there others who have had a similar experience?

Comments

giardini•3mo ago
Hire a lawyer and have him send the bill for his services to them immediately with a note on the consequences of ignoring his notices. Bill them aggressively.
Animats•3mo ago
Yes. Computer Fraud and Abuse Act to start.

The first demand letter from a lawyer will usually stop this. The great thing about suing big companies is that they have to show up. You have no contractual agreement which prevents suing; this is entirely from the outside.

SoftTalker•3mo ago
Threatening to sue is one thing. Actually doing it will cost you time and money. And even if you get a judgement how are you going to collect from some rando in Singapore?
tracker1•3mo ago
AWS isn't some rando in Singapore.
SoftTalker•3mo ago
AWS isn't doing this. The rando renting the AWS instance in Singapore is.
Animats•3mo ago
There are ways. You sue AWS and "Does 1-50". Then AWS's lawyers become eager to tell you who misused their service so you can sue the other party. Talk to a lawyer.
impossiblefork•3mo ago
It's AWS's system and they have been informed that the spam/DDOS is ongoing.

They have control of what goes on on their computers and they are responsible.

tempestn•3mo ago
That's not how lawyers or bills work, unfortunately in this case, but fortunately in general.
bigfatkitten•3mo ago
Do you receive, or expect to receive any legitimate traffic from AWS Singapore? If not, why not blackhole the whole thing?
caprock•3mo ago
Agreed. You should be able to set the WAF to just drop the packets and not even bother with the overhead of a response. I think the Cloudflare WAF calls this "block".
marginalia_nu•3mo ago
Yeah, this is the way. Dropping the packets makes the requests cheaper to respond to than to make.

The problem with DDoS attacks is generally the asymmetry, where it requires more resources to deal with the request than to make it. Cute attempts to get back at the attacker with various tarpits generally magnify this and make it hit even harder.

jihadjihad•3mo ago
When the WAF drops packets, how does pricing work? I am assuming there is still a non-zero cost to handling that? Kind of sounded from OP that they are looking to shake the monkey off their back for good, and cheaply.
firecall•3mo ago
Yep, I did this for a while.

The TikTok / ByteDance Bytespider bots were making millions of image requests from my site.

Over and over again and they would not stop.

I eventually got Cloudinary to block all the relevant user agents, and initially just totally blocked Singapore.

It’s very abusive on the part of these bot running AI scraping companies!

If I hadn’t been using the kind and generous Cloudinary, I could have been stuck with some seriously expensive hosting bills!

Nowadays I just block all AI bots with Cloudflare and be done with it!

lozenge•3mo ago
Here are the IP address ranges: https://docs.aws.amazon.com/vpc/latest/userguide/aws-ip-work...
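
To turn that published list into a blocklist at your own edge, here is a rough sketch (not from the thread; it assumes the requests package, and that the bot's traffic comes from EC2 in ap-southeast-1 as the OP describes):

    import requests

    # AWS publishes all of its IP ranges as JSON (the same URL appears elsewhere in this thread)
    ranges = requests.get("https://ip-ranges.amazonaws.com/ip-ranges.json", timeout=30).json()

    # Collect the Singapore (ap-southeast-1) EC2 prefixes for a firewall/WAF blocklist
    cidrs = sorted({p["ip_prefix"] for p in ranges["prefixes"]
                    if p["region"] == "ap-southeast-1" and p["service"] == "EC2"})

    for cidr in cidrs:
        print(f"deny {cidr};")  # e.g. paste into an nginx deny list, or feed to your firewall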
Jean-Papoulos•3mo ago
You don't even need to send a response. Just block the traffic and move on
MrThoughtful•3mo ago
If it follows redirects, have you tried redirecting it to its own domain?
lgats•3mo ago
I've tried localhost redirects, doesn't impact the speed of their requests, all ports are closed on the suspect machines
Scotrix•3mo ago
Just find a host with low traffic egress costs, reverse proxy normal traffic to Cloudflare, and reply with 2GB files for the bot. They annoy you and cost you money, so make them pay.
tgsovlerkhgsel•3mo ago
Isn't ingress free at AWS? You'd have to find a way to generate absurd amounts of egress traffic - absurd enough to be noticed compared to billions of HTTP requests. 2B requests at 1 KB/request is 2 TB/month so they're likely paying a double-digit dollar amount just for the traffic they're sending to you (wtf - where does that money come from?).

But since AWS considers this fine, I'd absolutely take the "redirecting the entirety of the traffic to aws abuse report page" approach. If they consider it abuse - great, they can go turn it off then. The bot could behave differently but at least curl won't add a referer header or similar when it is redirected, so the obvious target would be their instance hosting the bot, not you.

Actually, I would find the biggest file I can that is hosted by Amazon itself (not another AWS customer) and redirect them to it. I bet they're hosting linux images somewhere. Besides being more annoying (and thus hopefully attention-getting) for Amazon, it should keep the bot busy for longer, reducing the amount of traffic hitting you.

If the bot doesn't eat files over a certain size, try to find something smaller or something that doesn't report the size in response to a HEAD request.
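
A quick back-of-the-envelope check of the 2 TB/month figure above, under the same ~1 KB/request assumption:

    requests_per_month = 2_000_000_000     # ~2B requests/month, per the OP
    bytes_per_request = 1_000               # ~1 KB per request, the assumption above
    tb_per_month = requests_per_month * bytes_per_request / 1e12
    print(f"{tb_per_month:.1f} TB/month")   # -> 2.0 TB/month of request traffic sent toward the victim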

ndriscoll•3mo ago
If it's making outbound requests it might be going through a NAT gateway, in which case response traffic will be expensive.
hylaride•3mo ago
I'd be surprised to see a mass-scraping bot behind a NAT gateway. They're probably using public lambdas where they can't even control the egress IPs (unless something has changed in the last 6 months since I last looked) and sending results to a queue or bucket somewhere.

What I'd do is block the AWS AP range at the edge (unless there's something else there that needs access to your site) - you can get regularly updated JSON formatted lists around the internet, or have something match its fingerprint to send it heaps of garbage, like the zip-bombs others have suggested. It could be a recursive "you're abusing my site - go away" or what-have-you. You could also do some kind of grey-listing, where you limit the speed to a crawl so that each connection just consumes crawler resources and gets little content. If they are tracking this, they'll see the performance issues and maybe adjust.

2000swebgeek•3mo ago
block the IPs or set up a WAF on AWS if you cannot be on Cloudflare.
re-thc•3mo ago
AWS WAF isn’t free. Definitely cheaper but all the hits still cost.
shishcat•3mo ago
if it follows redirects, redirect it to a 10GB gzip bomb
nake89•3mo ago
I was just going to post the same thing. Happy somebody else thought of the same thing :D
sixtyj•3mo ago
You nasty ones ;)
cantor_S_drug•3mo ago
https://zadzmo.org/code/nepenthes/

This is a tarpit intended to catch web crawlers. Specifically, it targets crawlers that scrape data for LLMs - but really, like the plants it is named after, it'll eat just about anything that finds its way inside.

It works by generating an endless sequence of pages, each with dozens of links that simply go back into the tarpit. Pages are randomly generated, but in a deterministic way, causing them to appear to be flat files that never change. Intentional delay is added to prevent crawlers from bogging down your server, in addition to wasting their time. Lastly, Markov-babble is added to the pages, to give the crawlers something to scrape up and train their LLMs on, hopefully accelerating model collapse.
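
As an illustration of the deterministic-generation trick described above (a sketch, not Nepenthes code; the /trap/ path prefix is arbitrary): seed a PRNG from the request path so the same URL always yields the same page of links back into the maze.

    import hashlib, random

    def tarpit_page(path: str, n_links: int = 20) -> str:
        # Seed from the path so the same URL always yields the same page,
        # making the maze look like static files to a crawler.
        rng = random.Random(hashlib.sha256(path.encode()).digest())
        links = "".join(
            f'<a href="/trap/{rng.getrandbits(64):016x}">more</a>\n'
            for _ in range(n_links)
        )
        babble = " ".join(rng.choice(["lorem", "ipsum", "dolor", "sit", "amet"])
                          for _ in range(200))
        return f"<html><body>{links}<p>{babble}</p></body></html>"

Pairing this with the intentional per-request delay mentioned above keeps it cheap for the server while wasting the crawler's time.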

https://news.ycombinator.com/item?id=42725147

Is this a good solution??

iberator•3mo ago
Best tarpit ever.
brunkerhart•3mo ago
Write to aws abuse team
lgats•3mo ago
"[AWS has] engaged with our customer, and based on this engagement have determined that the reported activity does not require further action from AWS at this time."
swiftcoder•3mo ago
Making the obviously-abusive bot prohibitively expensive is one way to go, if you control the terminating server.

A gzip bomb is good if the bot happens to be vulnerable, but even just slowing down their connection rate is often sufficient - at 700 requests/second, waiting just 10 seconds before responding with your 404 is going to consume ~7,000 ports on their box, which should be enough to crash most Linux processes (nginx + mod-http-echo is a really easy way to set this up)
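
As a rough illustration of the delay idea only (a stdlib sketch, not the nginx + mod-http-echo setup; the port and timings are arbitrary):

    import asyncio

    async def handle(reader, writer):
        await reader.read(4096)   # read (and ignore) the request
        await asyncio.sleep(10)   # hold the bot's connection open for 10 seconds
        writer.write(b"HTTP/1.1 404 Not Found\r\nContent-Length: 0\r\nConnection: close\r\n\r\n")
        await writer.drain()
        writer.close()

    async def main():
        server = await asyncio.start_server(handle, "0.0.0.0", 8080)
        async with server:
            await server.serve_forever()

    asyncio.run(main())

In practice you'd only route the suspect user agent here, so legitimate visitors never see the delay.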

Orochikaku•3mo ago
Thinking along the same lines, a PoW check like Anubis [1] may work for the OP as well.

[1] https://github.com/TecharoHQ/anubis

hshdhdhehd•3mo ago
Avoid it if you don't have to. It's not really friendly to legitimate traffic. Especially if current blocking works.
CaptainOfCoit•3mo ago
> Especially if current blocking works.

The submission and the context is when current blocking doesn't work...

hshdhdhehd•3mo ago
> Thankfully, CloudFlare is able to handle the traffic with a simple WAF rule and 444 response to reduce the outbound traffic.

That is strictly less resource intensive than serving 200 and some challenge.

CaptainOfCoit•3mo ago
Right, but if you re-read the submission, the OP already tried that, found the costs could potentially be too high, and is looking for alternatives...
winnie_ua•3mo ago
It was blocking me from accessing GNOME's gitlab instance from my cell phone.

So it mistakenly flagged me as a bot. IDK. And it forces legitimate users to wait a while. Not great UX.

lagosfractal42•3mo ago
This kind of reasoning assumes the bot continues to be non-stealthy
swiftcoder•3mo ago
I mean, forcing them to spend engineering effort to make their bot stealthy (or to be able to maintain tens of thousands of open ports) is still driving up their costs, so I'd count it as a win. The OP doesn't say why the bot is hitting their endpoints, but I doubt the bot is a profit centre for the operator.
lagosfractal42•3mo ago
You risk flagging real users as bots, which drives down your profits and reputation
swiftcoder•3mo ago
In this case I don't think they do - unless the legitimate users are also hitting your site at 700 RPS (in which case, the added load from the bot is going to be negligible)
hansvm•3mo ago
Once the bot is stealthy (the current sub-thread if I haven't misread) they absolutely do. A couple examples where I've been flagged as a bot for normal traffic:

1. Discord's telemetry was broken on my browser, and on failure they immediately retried. It didn't take many actions queued up on the site before my browser was initiating over 100RPS, on their behalf.

2. Target and eBay still flag my sessions as bot traffic (presumably because they don't recognize the user agent or because I use Linux or something). Target allows browsing their site for a few items before heavily rate-limiting me for a day or so, and eBay just resets my password a day or two after I log in, every single bloody time.

The problem is that from time to time normal users will generate large traffic volumes, and if the bot owner uses many IPs then you're forced to use less reliable signals for that ban hammer (i.e., no single user will be near 700 RPS).

heavyset_go•3mo ago
If going stealth means not blatantly DDoS'ing the OP then that's a better outcome than what's currently happening
somat•3mo ago
xkcd 810 comes to mind. https://xkcd.com/810/

"what if we make the bots go stealthy and indistinguishable from actual human requests?"

"Mission Accomplished"

HPsquared•3mo ago
This has pretty much happened now in the internet at large, and it's kinda sad.
lotsofpulp•3mo ago
“Constructive” and “Helpful” are unfortunately not outweighed by garbage.
lucastech•3mo ago
Yeah, there are some botnets I've been seeing that are much more stealthy, using 900-3000 IPs with rotating user agents to send enormous amounts of traffic.

I've resorted to blocking entire AS routes to prevent it (fortunately I am mostly hosting US sites with US only residential audiences). I'm not sure who's behind it, but one of the later data centers is oxylabs, so they're probably involved somehow.

https://wxp.io/blog/the-bots-that-keep-on-giving

mkj•3mo ago
AWS customers have to pay for outbound traffic. Is there a way to get them to send you (or cloudflare) huge volumes of traffic?
horseradish7k•3mo ago
yeah, could use a free worker
compootr•3mo ago
free workers only get 100k reqs per day or something
_pdp_•3mo ago
A KB zip file can expand to giga / petabytes through recursive nesting - though it depends on their implementation.
sim7c00•3mo ago
that's traffic in the other direction
swiftcoder•3mo ago
The main joy of a zip bomb is that it doesn't consume much bandwidth - the transferred compressed file is relatively small, and it only becomes huge when the client tries to decompress it in memory afterwards
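
For a sense of how small the served payload can be, here is a sketch (not from the thread) that gzips a long run of zeros and serves it pre-compressed:

    import gzip, io

    def gzip_bomb(decompressed_mb: int = 10_000) -> bytes:
        """Roughly 10 GB of zeros compresses to around 10 MB of gzip output."""
        buf = io.BytesIO()
        with gzip.GzipFile(fileobj=buf, mode="wb", compresslevel=9) as gz:
            chunk = b"\0" * (1024 * 1024)     # 1 MB of zeros per write
            for _ in range(decompressed_mb):
                gz.write(chunk)
        # Build once at startup and reuse; serve buf.getvalue() with:
        #   Content-Encoding: gzip
        #   Content-Type: text/html
        return buf.getvalue()

As the replies point out, this only pays off if the bot actually decompresses what it downloads; otherwise the bytes still flow in the wrong direction.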
crazygringo•3mo ago
It's still going in the wrong direction.
dns_snek•3mo ago
It doesn't matter either way. OP was thinking about ways to consume someone's bandwidth. A zip bomb doesn't consume bandwidth, it consumes computing resources of its recipient when they try to unpack it.
crazygringo•3mo ago
I know. I was pointing out that it doesn't matter what it consumes if it's going the wrong way to begin with.
sim7c00•3mo ago
i wouldn't assume someone sending 700 req per second or so to a single domain repeatedly (likely to the same resources) will bother opening zip files.

the bot in the article is likely being tested (as author noted), or its a very bad 'stresser'.

if it was looking to grab content it would access things differently (grab resources once and be on its way).

its not bad to host zip bombs tho, for the content grabbers :D nomnom.

saw an article about a guy on here who generated arbitrary pngs or so. also classy haha.

if u have a friendly vps provider who gives unlimited bandwidth these options can be fun. u can make a dashboard which bot has consumed the most junk.

ruined•3mo ago
nearly every http response is gzipped. unpacking automatically is a default feature of every http client.
sim7c00•3mo ago
Accept-Encoding i think would be logical on scrapers these days but maybe its not helpful idk. server should adhere to what client requests afaik.
mjmas•3mo ago
This is using the built-in compression in HTTP:

  Transfer-Encoding: gzip
gildas•3mo ago
Great idea; some people have already implemented it for the same type of need, it would seem (see the list of user agents in the source code). The implementation seems simple.

https://github.com/0x48piraj/gz-bomb/blob/master/gz-bomb-ser...

kijin•3mo ago
Be careful using this if you're behind cloudflare. You might inadvertently bomb your closest ally in the battle.
CWuestefeld•3mo ago
We've been in a similar situation. One thing we considered doing was to give them bad data.

It was pretty clear in our case that they were scraping our site to get our pricing data. Our master catalog had several million SKUs, priced dynamically based on availability, customer contracts, and other factors. And we tried to add some value to the product pages, with relevant recommendations for cross-sells, alternate choices, etc. This was pretty compute-intensive, and the volume of the scraping could amount to a DoS at times. Like, they could bury us in bursts of requests so quickly that our infrastructure couldn't spin up new virtual servers, and once we were buried, it was difficult to dig back out from under the load. We learned a lot during this period, including some very counterintuitive stuff about how some approaches to queuing and prioritizing that sounded great on paper could actually have unintended effects that made such situations worse.

One strategy we talked about was that, rather than blocking the bad guys, we'd tag the incoming traffic. We couldn't do this with perfect accuracy, but the inaccuracy was such that we could at least ensure that it wasn't affecting real customers (because we could always know when it was a real, logged-in user). We realized that we could at least cache the data in the borderline cases so we wouldn't have to recalculate (it was a particularly stupid bot that was attacking us, re-requesting the same stuff many times over); from that it was a small step to see that we could at the same time add a random fudge factor into any numbers, hoping to get to a state where the data did our attacker more harm than good.

We wound up doing what the OP is now doing, working with CloudFlare to identify and mitigate "attacks" as rapidly as possible. But there's no doubt that it cost us a LOT, in terms of developer time, payments to CF, and customer dissatisfaction.

By the way, this was all the more frustrating because we had circumstantial evidence that the attacker was a service contracted by one of our competitors. And if they'd come straight to us to talk about it, we'd have been much happier (and I think they would have been as well) to offer an API through which they could get the catalog data easily and in a way where we don't have to spend all the compute on the value-added stuff we were doing for humans. But of course they'd never come to us, or even admit it if asked, so we were stuck. And while this was going on, there was also a case in the courts that was discussed many times here on HN. It was a question about blocking access to public sites, and the consensus here was something like "if you're going to have a site on the web, then it's up to you to ensure that you can support any requests, and if you can't find a way to withstand DoS-level traffic, it's your own fault for having a bad design". So it's interesting today to see that attitudes have changed.

gwbas1c•3mo ago
> rather than blocking the bad guys, we'd tag the incoming traffic

> had circumstantial evidence that the attacker was a service contracted by one of our competitors

> we'd have been much happier ... to offer an API through which they could get the catalog data easily

Why not feed them bad data?

CWuestefeld•3mo ago
We didn't like the ethics of it, especially since we couldn't guarantee that the bogus data was going only to the attacker (rather than to innocent but not-yet-authenticated "general public").
IshKebab•3mo ago
I guess you could have required login to show prices to suspicious requests. Then it shouldn't affect most people and if it accidentally does the worst outcome is they need to log in.
miga•3mo ago
Do they change IP numbers so often?
CWuestefeld•3mo ago
Oh, lord yes! Frequently they're scraping us from multiple distinct CIDR blocks simultaneously. But we can tell it's the same organization doing it not just because the requests look similar, but it's even possible occasionally to see a request for a search from one CIDR that's followed up immediately by requests for details for the products that had been returned by the search.

While at the same time, because our site is B2B ecommerce, where our typical customer is a decent-sized corporation, it's not uncommon for a single legit user to have consecutive requests originate from different IPs, as their internal proxies use different egress points.

kristianp•3mo ago
Stupid question, won't that consume 7000 ports on your own box as well?
Neywiny•3mo ago
I think it'll eat 7000 connection objects, maybe threads, but they'll all be on port 80 or 443? So if you can keep the overhead of each connection down, presumably easy because you don't need it to be fast, it'll be fine
kijin•3mo ago
Each TCP connection requires a unique combination of (server port, client port). Your server port is fixed: 80 or 443. They need to use a new ephemeral port for each connection.

You will have 7000 sockets (file descriptors), but that's much more manageable than 7000 ports.

swiftcoder•3mo ago
7000 sockets, at any rate, but provided you've anticipated the need, this isn't challenging to support (and nginx is very good at handling large numbers of open sockets)
SergeAx•3mo ago
Wouldn't it consume the same number of connections on my server?
snvzz•3mo ago
Null-route the entirety of AWS ip space.
JCM9•3mo ago
Have ChatGPT write you a sternly worded cease and desist letter and send it to Amazon legal via registered mail.

AWS has become rather large and bloated and does stupid things sometimes, but they do still respond when you get their lawyers involved.

molszanski•3mo ago
Maybe add this IP to a blacklist? https://iplists.firehol.org/ It would be easier to pressure AWS when it is there
reisse•3mo ago
What kind of content do you serve? 700 RPS is not a big number at all, for sure not enough to qualify as a DoS. I'm not surprised AWS did not take any action.
marginalia_nu•3mo ago
FWIW, an HN hug of death, which fairly regularly knocks sites offline, tends to peak at a few dozen RPS.
reisse•3mo ago
On the other hand, I've only seen complaint letters from AWS for doing tens of thousands of RPS on rate-limited endpoints for multiple days. Even then, AWS wasn't the initiator of inquiry (it was their customer being polled), and it wasn't a "cease and desist" kind of letter, it was "please explain what you're doing and prove you're not violating our ToS".
hsbauauvhabzb•3mo ago
Why would AWS care if you're consuming one of their customers' resources when the customer is the one that pays?
Hizonner•3mo ago
> 700 RPS is not a big number at all, for sure not enough to qualify as a DoS.

That depends on what's serving the requests. And if you're making the requests, it is your job to know that beforehand.

pingoo101010•3mo ago
Take a look at https://github.com/pingooio/pingoo

It's a reverse-proxy / load balancer with built-in firewall and automatic HTTPS. You will be able to easily block the annoying bots with rules (https://pingoo.io/docs/rules)

neya•3mo ago
I had this issue on one of my personal sites. It was a blog I used to write maybe 7-8 years ago. All of a sudden, I saw insane traffic spikes in analytics. I thought some article went viral, but realized it was too robotic to be true. And so I narrowed it down to some developer trying to test their bot/crawler on my site. I tried asking nicely, several times, over several months.

I was so pissed off that I set up a redirect rule for it to send them over to random porn sites. That actually stopped it.

sim7c00•3mo ago
this is the best approach honestly. redirect them to some place that undermines their efforts. either back to themselves, their own provider, or nasty crap that no one wants to find in their crawler logs.
throwaway422432•3mo ago
Goatse?

Wouldn't recommend Googling it. You either know or just take a guess.

Rendello•3mo ago
I googled a lot of shock sites after seeing them referenced and not knowing what they were. Luckily Google and Wikipedia tended to shield my innocent eyes while explaining what I should be seeing.

The first goatse I actually saw was in ASCII form, funnily enough.

antonymoose•3mo ago
I use the ASCII form to reply to spammers, since it will not trip up on an attachment filter or anything most usually. I get mixed results from them, but the results are usually funny.
sph•3mo ago
I've never seen it in ASCII form, and I don't want to search for it as google will inevitably disregard my instructions and show me the 4K version in full color.
nosrepa•3mo ago
The Jason Scott method.
specialist•3mo ago
Maybe someone will publish a "nastylist" for redirecting bots.

Decades later, I'm still traumatized by goatse, so it'll have to be someone with more fortitude than me.

sim7c00•3mo ago
goatse, lemonparty, meatspin. take ur pick of the gross but clearnetable things.

mind you before google and the likes and the great purge of internet, these things were mild and humorous...

znpy•3mo ago
> I've tried 30X redirects (which it follows) to no avail

Make it follow redirects to some kind of illegal website. Be creative, I guess.

The reasoning being that if you can get AWS to trigger security measures on their side, maybe AWS will shut down their whole account.

_pdp_•3mo ago
As others have suggested, you can try to fight back depending on the capabilities of your infrastructure. All crawlers will have some kind of queuing system. If you manage to cause the queues to fill up, then the crawler won't be able to send as many requests. For example, you can allow the crawler to open the socket but only send the data very slowly, causing the queues to fill up quickly with busy workers.

Depending on how the crawler is designed, this may or may not work. If they are using SQS with Lambda then that will obviously not work, but it will still backfire on them because the serverless functions will be running for longer (5-15 minutes).

Another technique that comes to mind is to try to force the client to upgrade the connection (i.e. websocket). See what will happen. Mostly it will fail but even if it gets stalled for 30 seconds that is a win.

stevoski•3mo ago
> Thankfully, CloudFlare is able to handle the traffic with a simple WAF rule and 444 response to reduce the outbound traffic.

This is from your own post, and is almost the best answer I know of.

I recommend you configure a Cloudflare WAF rule to block the bot - and then move on with your life.

Simply block the bot and move on with your life.

burnte•3mo ago
> The traffic is hitting numbers that require me to re-negotiate my contract with CloudFlare and is otherwise a nuisance when reviewing analytics/logs.

It's having negative financial repercussions now. It's not ignorable anymore.

hamburgererror•3mo ago
There might be some ideas to dig here: https://news.ycombinator.com/item?id=41923635
theginger•3mo ago
If it follows the redirect I would redirect it to random binary files hosted by Amazon, then see if it continues to not require any further action
nurettin•3mo ago
What kind of website is this that makes it so lucrative to run so many requests?
sim7c00•3mo ago
if they have some service up on the machines the bot connect from then u can redirect them to themselves.

otherwise, maybe redirect to aws customer portal or something -_- maybe they will stop it if it hit themselves...

hyperknot•3mo ago
Use a simple block rule, not a WAF rule, those are free.
ahazred8ta•3mo ago
Silly suggestion: feed them bogus DNS info. See if you can figure out where their DNS requests are coming from.
lgats•3mo ago
they're using google dns, unfortunately.
locusm•3mo ago
I am dealing with a similar situation and kinda screwed up, as I managed to get Google Ads suspended due to blocking Singapore. I see a mix of traffic from AWS, Tencent and Huawei cloud at the moment. Currently I'm just scanning server logs and blocking IP ranges.
crazygringo•3mo ago
> I managed to get Google Ads suspended due to blocking Singapore

How did that happen, why? I feel like a lot of people here would not want to make the same mistake, so details would be very welcome.

As long as pages weren't being served and so there was never any case of requesting ads but never showing them, I don't understand why Ads would care?

kijin•3mo ago
Not the parent, but it sounds like they blocked the entire country, including Googlebot's Singaporean IP ranges.

If your server returns different content when Google crawls it compared to when normal users visit, they might suspect that you are trying to game the system. And yes, they do check from multiple locations with non-Googlebot user agents.

I'm not sure if showing an error page also counts as returning different content, but I guess the problem could be exacerbated by any content you include in the error page unless you're careful with the response code. Definitely don't make it too friendly. Whitelist important business partners.

locusm•3mo ago
If you run Google Ads for your customers or yourself and you, through whatever means, block the Google AdsBot, it will come up as a "site unreachable" error and Google will suspend all your running ads. If you dig down further it will state some kind of DNS error as the problem. This is why blocking entire countries is problematic.
bcwhite•3mo ago
I redirect such traffic to a subdomain with an IP address that isn't assigned (or legally assignable). The bots just wait for a response to their connection requests but never get one. This typically seems to cost them ~10s of waiting. The traffic doesn't come to my servers and it doesn't risk legitimate users who might hit it by mistake.
lgats•3mo ago
I've attempted a few of these, redirecting to invalid domains or https://en.wikipedia.org/wiki/Black_hole_(networking)#:~:tex...
MaxikCZ•3mo ago
Perhaps a naive question, but wouldn't your hardware also be waiting for a reply from the non-existent network? Wouldn't you just add to their DoS power this way?
arielcostas•3mo ago
No, the client is the one trying to connect to the non-existing server. You just redirect them, for example with a 301 saying "go here instead", and when they try to go there, they will find an invalid IP
bcwhite•3mo ago
An idea I had was a custom kernel that replied ACK (or SYN+ACK) to every TCP packet. All connections would appear to stay open forever, eating all incoming traffic, and never replying, all while using zero resources of the device. Bots might wait minutes (or even forever) per connection.
fabioyy•3mo ago
No need to mess with the kernel: block outgoing RST packets in the local machine's firewall, and create a program that reads a raw socket for incoming SYNs and answers with a SYN/ACK. But anyway, this technique will not differentiate legitimate connections.
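
A rough scapy sketch of that approach (illustrative only; it needs root, and it assumes the firewall is already suppressing the kernel's outgoing RSTs as described):

    from scapy.all import IP, TCP, send, sniff

    def synack(pkt):
        # Answer every incoming SYN with a SYN/ACK advertising a zero window,
        # so the client "connects" but can never send data (classic tarpit).
        ip = IP(src=pkt[IP].dst, dst=pkt[IP].src)
        tcp = TCP(sport=pkt[TCP].dport, dport=pkt[TCP].sport,
                  flags="SA", seq=1000, ack=pkt[TCP].seq + 1, window=0)
        send(ip / tcp, verbose=False)

    # Only match SYNs without ACK (new connection attempts)
    sniff(filter="tcp[tcpflags] & tcp-syn != 0 and tcp[tcpflags] & tcp-ack == 0",
          prn=synack, store=False)

As noted, it can't distinguish the bot's connections from legitimate ones, so it only makes sense on a dedicated sink host or IP.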
pclmulqdq•3mo ago
As I understand it, you can probably do this with XDP in the Linux kernel and it will be pretty cheap.
Kubuxu•3mo ago
I've done that in the past (8+ years ago) with raw IP sockets.
xena•3mo ago
Main author of Anubis here. Have CloudFlare return an HTTP 200 response instead of a non-200 rejection. These bots keep hammering until they get a 200 response, so giving them one makes them stop.
andrewmcwatters•3mo ago
I've also gotten good results just dropping the connection if it hits the application layer, and you can't get CloudFlare to return the desired behavior first.

Not ideal, but it seems to work against primitive bots.

kingforaday•3mo ago
If you see this, something isn't working with your main site: https://anubis.techaro.lol/
Rothnargoth•3mo ago
Blocking before the traffic reaches the application servers (what you're doing) is the most effective and cost/time efficient.

It sounds like the bot operator is spending enough on AWS to withstand the current level of abuse reports.

If you really wanted to retaliate, you could try getting a court order to force AWS to disclose the owners of that AWS instance.

jedberg•3mo ago
Tell cloudflare it's abusive, and they will block it outside your account so it doesn't count against you.
HumanOstrich•3mo ago
Not true, especially for OP's scale.
reconnecting•3mo ago
tirreno(1) guy here.

I'd suggest taking a look into patterns and IP rotation (if any) and perhaps blocking IP CIDR at the web server level, if the range is short.

Why is a simple deny from 12.123.0.0/16 (Apache) not working for you?

1. https://github.com/tirrenotechnologies/tirreno

AdamJacobMuller•3mo ago
> I've tried 30X redirects (which it follows)

301 response to a selection of very large files hosted by companies you don't like.

When their AWS instances start downloading 70000 windows ISOs in parallel, they might notice.

Hard to do with cloudflare but you can also tar pit them. Accept the request and send a response, one character at a time (make sure you uncork and flush buffers/etc), with a 30 second delay between characters.

700 requests/second with say 10Kb headers/response. Sure is a shame your server is so slow.

notatoad•3mo ago
>301 response to a selection of very large files hosted by companies you don't like.

i suggest amazon

lgats•3mo ago
unfortunately, it seems AWS even has firewalls that will quickly start failing these requests after a few thousand, then they're back up to their high-concurrency rate
knowitnone3•3mo ago
Microsoft
gruez•3mo ago
>When their AWS instances start downloading 70000 windows ISOs in parallel, they might notice.

Inbound traffic is free for AWS

jacquesm•3mo ago
It's free, but it's not infinite.
kadoban•3mo ago
Free just means you get in trouble when you abuse it.
gitgud•3mo ago
> Accept the request and send a response, one character at a time

Sounds like the opposite of the [1] Slow Loris DDOS attack. Instead of attacking with slow connections, you’re defending with slow connections

[1] https://www.cloudflare.com/en-au/learning/ddos/ddos-attack-t...

tliltocatl•3mo ago
That's why it is actually sometimes called inverse slow loris.
amy_petrik•3mo ago
it's called the slow sirol in my circles
tremon•3mo ago
As an alternative: 301 redirect to an official .sg government site, let local law enforcement deal with it.
integralid•3mo ago
Don't actually do this, unless you fancy meeting AWS lawyers in court and love explaining intricate details of HTTP to judges.
more_corn•3mo ago
I like this idea. Here’s how it plays out: Singapore law enforcement gets involved. They send a nasty-gram to AWS. lawyers get involved. AWS lawyers collect facts. Find that the culprit is not you, find that you’ve asked for help, find that they (AWS) failed to remediate, properly fix responsibility on the culprit and secondary responsibility on themselves, punch themselves in the crotch for a minute, and then solve the problem by canceling the account of the offending party.
kadoban•3mo ago
> Find that the culprit is not you, find that you’ve asked for help, find that they (AWS) failed to remediate, properly fix responsibility on the culprit and secondary responsibility on themselves, punch themselves in the crotch for a minute, and then solve the problem by canceling the account of the offending party.

Yeah, lawyers are notorious for blaming themselves and taking responsibility. You definitely won't just get blamed.

anakaine•3mo ago
A lawyer who can see an easy defence to a path they wish to pursue is going to consider that in their response. If that defence looks like their own client's vulnerability would be exposed because of their client's action or inaction, their first response will almost certainly be to get the client to fix that action or inaction.
more_corn•3mo ago
^ I love you
n_u•3mo ago
Dumb question, but just cuz I didn't see it mentioned: have you tried using a Disallow: / in your robots.txt? Or Crawl-delay: 10? That would be the first thing I would try.

Sometimes these crawlers are just poorly written not malicious. Sometimes it’s both.

I would try a zip bomb next. I know there’s one that is 10 MB over the network and unzips to ~200TB.

pknerd•3mo ago
It's for crawlers not custom scrapers
n_u•3mo ago
Respecting robots.txt is a convention not enforced by anything so yes the bot is certainly free to ignore it.

But I'm not sure I understand your distinction. A scraper is a crawler regardless of whether it is "custom" or an off-the-shelf solution.

The author also said the bot identified itself as a crawler

> Mozilla/5.0 (compatible; crawler)

pknerd•3mo ago
Redirect it to Trump's website. He will take care of it
g-mork•3mo ago
Set up a CloudFlare page rule or similar to a custom internal URL with the max request timeout jacked up as high as possible (or whatever), and stick a little async web server behind it that hangs every request after the first byte for, say, 1 hour. Give the async web server a good chunk of RAM to waste. Most providers don't bill for time, only bytes, and most bots have some timeout tolerance, especially when the status headers and body are already being sent

Similarly, you can also try delivering one byte every 10 seconds or 30 seconds or whatever keeps the client on the other end hanging around for without hitting an internal timeout.

    import asyncio, itertools
    # drip one byte every 10 seconds to keep the client's connection tied up
    for byte in itertools.cycle(b"FUCKOFF"):  # cycle() yields individual byte values
        await resp.send(bytes([byte]))
        await resp.flush()
        await asyncio.sleep(10)
        # etc
In the SMTP years we called this tarpitting IIRC
yabones•3mo ago
Return a 200 with the EICAR test string in the body. Nothing like some data poisoning for some vindictive fun

https://en.wikipedia.org/wiki/EICAR_test_file

tetha•3mo ago
Heh, I was wondering if you could do something like SSRF exploits, just the other way around. You know, redirect the bot to <cloud-provider-metadata-api>/shutdown.

Even funnier, include the EICAR test string in the redirect to the cloud provider metadata. Maybe we could trip some automated compromise detection.

kachapopopow•3mo ago
Redirect it to the client IP. It's not abuse, since you're just an innocent redirect-to-client-IP service, and the (most probable) timeout should make it consider the service dead after a couple of days. Or even better, they just overload their own servers if there is a page on the client IP; better still, it causes an automatic abuse trigger to kick in and shut down the service.
lgats•3mo ago
I've tried sending a redirect to http://localhost or http://127.0.0.1 to no avail
eek2121•3mo ago
That isn't the address you should be using. Use whatever public addresses they are hitting you from.
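
For illustration, a tiny WSGI sketch of that bounce-back redirect (wsgiref and the port are just for the example; substitute whatever your stack uses):

    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        # Bounce the request straight back at whichever public address sent it.
        client = environ.get("REMOTE_ADDR", "127.0.0.1")
        start_response("302 Found", [("Location", f"http://{client}/")])
        return [b""]

    make_server("", 8080, app).serve_forever()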
redleader55•3mo ago
And random ports. If you only hit 80/443, they might be closed
scrps•3mo ago
Singapore's comms regulator bans porn (even possessing it). Serve up some softcore to the bot, then e-mail the regulator and AWS.
CaptainOfCoit•3mo ago
To be honest, I'd give that a try too. When someone is bothering you across the internet, the best way to reply is to use their local law system against them, not many other parties will care otherwise.
2OEH8eoCRo0•3mo ago
IANAL- sue them for DDoSing and disrupting your service.

> The traffic is hitting numbers that require me to re-negotiate my contract with CloudFlare and is otherwise a nuisance when reviewing analytics/logs.

So you're able to show financial hardship

Bender•3mo ago
'Mozilla/5.0 (compatible; crawler)'

Assuming one trusts the user-agent in this case, one could reduce the reply traffic to them and avoid touching the disk or any applications in Nginx with something like:

    if ($http_user_agent ~ (crawler|some-other-bot) ) {
        return 200 '\n\n\n\nBot quota exceeded, check back in 2150 years.\n\n\n\n';
    }
There are other variables to look for to see if something is a bot but such things should be very well tested. $http_accept_language, $http_sec_fetch_mode, etc...

I don't use CF but maybe they have a way to block the entire ASN for AWS on your account assuming one does not need inbound connections from them. I just blackhole their CIDR blocks [1] but that won't help someone using a CDN.

[1] - https://ip-ranges.amazonaws.com/ip-ranges.json

lloydatkinson•3mo ago
I blocked the entirety of Singapore via Cloudflare for my personal site. I was seeing persistent weird traffic patterns, sometimes very odd and a little creepy. Not anymore though; the whole country is blocked.
geraldcombs•3mo ago
I ran into a similar situation a couple of years ago. It wasn't at the scale you describe, but it was an absurd number of requests for a ~80 MB software installer. I ended up redirecting the offending requests to a file named "please-stop.txt" that contained a short note explaining what was happening and asking them to stop. A short time later they did.
jeroenhd•3mo ago
So far I've been able to get away with just blocking the data centers/countries that cause problems for my servers. Singapore and China are common causes for trouble.

As for trying to get them to stop, maybe redirect the bot to random IP:port combinations in a network that's less friendly to being scanned? I believe certain parts of DoD IP space tends to not look kindly upon attempts to scan them.

Depending on your setup, you could try to poison the bot's DNS for your domain. Send them the IP address of their local police force maybe.

My guess is that this is yet another AI scraper. There are others complaining about this bot online but all they seem to come up with is blocking the ASN in Cloudflare.

If there's no technical solution, I'd consider consulting with a legal professional to see if you can get Amazon to take action. Lawyers are expensive, but so is a Cloudflare bill when they decide you need to be on the "enterprise" tier.

1a527dd5•3mo ago
We've seen tons of illegitimate traffic emanating from SG. So much so, that it is a part of the standard WAF country block (along with CN).
leros•3mo ago
That's interesting. I've been getting 1k requests per second from Meta bots from SG. They slowed down after a month of 429 responses.
lucastech•3mo ago
Meta Ireland is just as bad, I've noticed a lot of Tencent from SG.
lucastech•3mo ago
I wrote about this a few weeks ago, because it really is quite insane.

I wish AWS would curtail abuse from their networks. My hope is to build some tools to automate detection and reporting of this sort of abuse, so we can force it into AWS's court.

https://wxp.io/blog/abuse-from-amazon-ip-networks-never-end

pickle-wizard•3mo ago
Do you have any legitimate traffic coming from AWS? My thought is to just drop all traffic from their ASN. Once they can't contact you for a while they'll move along and you could unblock.
kijin•3mo ago
If it's all from a single AWS region, this is the way to go.

I tend to be careful with residential or office IP ranges. But if it looks like a datacenter, it will be blocked, no second thoughts. Especially if it's a cloud provider that makes it too easy for customers to rotate IPs. Identify the ASN within which they're rotating their IPs, and block it. This is much more effective than blocking based on arbitrary CIDRs or geographical boundaries.

Unless you're running an API for developers, there's no legitimate (non-crawling) reason for someone to request your site from an AWS resource. Even less so for something like Huawei Cloud.

mat_epice•3mo ago
> there's no legitimate (non-crawling) reason for someone to request your site from an AWS resource

I used to run an X instance in the cloud that I would sometimes browse websites from. It sucked but it was also legitimate.

kijin•3mo ago
"Legitimate" is relative here. I would count you as using unusual software to hide your actual source address. Not a huge concern because if you're doing that, I assume you also know how to move around to avoid getting blocked.

In fact, the ability to move to a different cloud on short notice is also part of the CAPTCHA, because large cloud-based botnets usually can't. They'd get instabanned if they tried to move their crawling boxes to something like DigitalOcean.

nijave•3mo ago
Have you tried redirecting the bot in a loop? That should allow it to keep making a ton of requests and hopefully generate traffic they'll have to pay for.

Another idea is replying with large cookies and seeing if the bot saves them and replies with them (once again, to eat traffic)

The idea is to increase their egress to the point someone notices (the bill)

jimrandomh•3mo ago
In addition to whatever other mitigations you do, you should put a deny rule for the bot's user-agent in robots.txt, and use a status code of 429 (Too Many Requests), even if the bot doesn't respect these. This will strengthen your case if you need to convince a third party (AWS, or a court, or a different part of the company that's operating the bot) that it's abusive.
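
A minimal stdlib sketch of that combination (the user-agent string is the one from the OP; the 'crawler' robots.txt token, handler, and port are illustrative assumptions):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    BAD_UA = "Mozilla/5.0 (compatible; crawler)"   # the UA string from the OP

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/robots.txt":
                # Deny rule aimed at the bot's token, even if it ignores it
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(b"User-agent: crawler\nDisallow: /\n")
            elif self.headers.get("User-Agent", "") == BAD_UA:
                self.send_response(429)                   # Too Many Requests
                self.send_header("Retry-After", "86400")  # "come back tomorrow"
                self.end_headers()
            else:
                self.send_response(200)
                self.end_headers()
                self.wfile.write(b"ok")

    HTTPServer(("", 8080), Handler).serve_forever()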
Retric•3mo ago
A 100% legal solution is to sue them and name Amazon as a party in the lawsuit.

Through discovery you can get the name of the parties involved from Amazon, but Amazon is very likely to drop them as a client solving the issue.

Waterluvian•3mo ago
This sounds like it would probably cost tens of thousands of dollars just to get off the starting line.
Retric•3mo ago
Actually going through a lawsuit is expensive; "bluffing" long enough to send a nasty and credible letter can be relatively inexpensive.

Importantly, it's also getting moderately expensive for the other side, which really discourages this kind of behavior. Suing an arbitrary person you have no connection with invites a countersuit for wasting their money, but that largely goes away with such a one-sided provocation.

cactusplant7374•3mo ago
This sounds like a fun project.
sp1982•3mo ago
If you are using cloudflare, add a rule to do managed JS challenge. Your backend shouldn’t see the requests unless they pass challenge.
janis1234•3mo ago
Have you considered an eBPF filter that looks for 'Mozilla/5.0 (compatible; crawler)' and just straight drops packets from that IP for 1 hr? This is probably the best way to handle bots: don't even reply, so they have to time out, which usually takes a few seconds.
throwaway127482•3mo ago
Completely and utterly off topic: why on earth does HN use a dim gray font for the post description? It's so hard to read. I understand why downvoted comments are grayed out but why the post description???
rkagerer•3mo ago
I had a similar problem back in 2018, though at a smaller scale.

I wrote a quick-and-dirty program that reads the authoritative list of all AWS IP ranges from https://ip-ranges.amazonaws.com/ip-ranges.json (more about that URL at the blog post https://aws.amazon.com/blogs/aws/aws-ip-ranges-json/), and creates rules in Windows Firewall to simply block all of them. Granted, it was a sledgehammer, but it worked well enough.

Here's the README.md I wrote for the program, though I never got around to releasing the code: https://markdownpastebin.com/?id=22eadf6c608448a98b6643606d1...

It ran for some years as a scheduled task on a small handful of servers, but I'm not sure if it's still in use today or even works anymore. If there's enough interest I might consider publishing the code (or sharing it with someone who wants to pick up the mantle). Alternatively it wouldn't be hard for someone to recreate that effort.

G'luck!

tushar-r•3mo ago
Block the AWS IP ranges. You will have reasonably good results blocking all datacenter ranges - cloud providers, VPSs etc. - if you don't expect traffic from them. You can get the ranges from Udger (paid) and it isn't very bad w.r.t. false positives. Alternatively just whitelist expected regions and block everything else. More prone to false positives, but easier.
realaaa•3mo ago
zip bomb it yeah !
TZubiri•3mo ago
Block the traffic from those ip address. You may use fail2ban to automate that if it becomes common.
ipaddr•3mo ago
I started forwarding to Amazon; that worked.
jiggawatts•3mo ago
Ask a lawyer to send a hand-delivered letter to the AWS legal department demanding compensation or face court for damages. Mention of potential criminal proceedings for actively supporting ongoing cyber attacks might not hurt.

Instant results, I guarantee it.

Look up key AWS staff names in Singapore (blogs, talks, etc…) and name them as defendants.

Nobody cares about these things until they are directly impacted themselves.

Nothing has to actually happen! A letter is cheap.

But it’s the implication that matters. Just discovery can cost them more than the profit from some scummy web scraper.

lgats•3mo ago
update: thanks for all the suggestions

I decided to do some testing with redirecting to a small VPS that just keeps the connections open and sends a byte every 10-30 seconds. This worked and the traffic substantially dropped off. After doing some more digging though, I got concerned this may in itself be an abuse of my VPS provider's ToS. The risk did not outweigh the benefit. Gzip bombs fell under a similar category of concern.

sph•3mo ago

  iptables -A INPUT -s $bot_ip -j DROP
___timor___•3mo ago
Redirect them to 169.254.169.254, where AWS serves instance metadata. Maybe the auth endpoint, which should result in throttling of their own instances and hopefully cause them trouble.

A gzip or deflate bomb could be another alternative.

Or maybe redirect them to a box that can respond with TARPIT, filling their connections.
