frontpage.

Andrej Karpathy – AGI is still a decade away

https://www.dwarkesh.com/p/andrej-karpathy
61•ctoth•36m ago•33 comments

Live Stream from the Namib Desert

https://bookofjoe2.blogspot.com/2025/10/live-stream-from-namib-desert.html
266•surprisetalk•5h ago•57 comments

Scientists discover intercellular nanotubular communication system in brain

https://www.science.org/doi/10.1126/science.adr7403
72•marshfram•2h ago•13 comments

EVs are depreciating faster than gas-powered cars

https://restofworld.org/2025/ev-depreciation-blusmart-collapse/
143•belter•7h ago•331 comments

Ruby core team takes ownership of RubyGems and Bundler

https://www.ruby-lang.org/en/news/2025/10/17/rubygems-repository-transition/
433•sebiw•5h ago•207 comments

I built an F5 QKview scanner for CISA ED 26-01

https://www.usenabla.com/blog/emergency-scanning-cisa-endpoint
5•jdbohrman•5h ago•0 comments

Meow.camera

https://meow.camera/
514•southwindcg•14h ago•179 comments

Migrating from AWS to Hetzner

https://digitalsociety.coop/posts/migrating-to-hetzner-cloud/
899•pingoo101010•8h ago•496 comments

AI has a cargo cult problem

https://www.ft.com/content/f2025ac7-a71f-464f-a3a6-1e39c98612c7
55•cs702•1h ago•38 comments

The Rapper 50 Cent, Adjusted for Inflation

https://50centadjustedforinflation.com/
223•gaws•1h ago•63 comments

Resizeable Bar Support on the Raspberry Pi

https://www.jeffgeerling.com/blog/2025/resizeable-bar-support-on-raspberry-pi
77•speckx•1w ago•23 comments

4Chan Lawyer publishes Ofcom correspondence

https://alecmuffett.com/article/117792
153•alecmuffett•10h ago•212 comments

You did no fact checking, and I must scream

https://shkspr.mobi/blog/2025/10/i-have-no-facts-and-i-must-scream/
233•blenderob•3h ago•125 comments

OpenAI Needs $400B In The Next 12 Months

https://www.wheresyoured.at/openai400bn/
15•chilipepperhott•19m ago•5 comments

Claude Skills are awesome, maybe a bigger deal than MCP

https://simonwillison.net/2025/Oct/16/claude-skills/
5•weinzierl•20m ago•3 comments

Dead or Alive creator Tomonobu Itagaki passes away at 58

https://www.gamedeveloper.com/design/dead-or-alive-creator-tomonobu-itagaki-has-passed-away-at-58
45•corvad•2h ago•9 comments

Let's write a macro in Rust

https://hackeryarn.com/post/rust-macros-1/
77•hackeryarn•1w ago•31 comments

Cartridge Chaos: The Official Nintendo Region Converter and More

https://nicole.express/2025/not-just-for-robert.html
11•zdw•5d ago•0 comments

How I bypassed Amazon's Kindle web DRM

https://blog.pixelmelt.dev/kindle-web-drm/
1449•pixelmelt•21h ago•446 comments

Ask HN: How to stop an AWS bot sending 2B requests/month?

145•lgats•12h ago•83 comments

MIT physicists improve the precision of atomic clocks

https://news.mit.edu/2025/mit-physicists-improve-atomic-clocks-precision-1008
8•pykello•5d ago•1 comment

Trap the Critters with Paint

https://deepanwadhwa.github.io/freeze_trap/
25•deepanwadhwa•6d ago•14 comments

Email bombs exploit lax authentication in Zendesk

https://krebsonsecurity.com/2025/10/email-bombs-exploit-lax-authentication-in-zendesk/
40•todsacerdoti•6h ago•11 comments

Read your way through Hà Nội

https://vietnamesetypography.com/samples/read-your-way-through-ha-noi/
62•jxmorris12•6d ago•55 comments

Show HN: OnlyJPG – Client-Side PNG/HEIC/AVIF/PDF/etc to JPG

https://onlyjpg.com
43•johnnyApplePRNG•6h ago•22 comments

Stinkbug Leg Organ Hosts Symbiotic Fungi That Protect Eggs from Parasitic Wasps

https://bioengineer.org/stinkbug-leg-organ-hosts-symbiotic-fungi-that-protect-eggs-from-parasitic...
8•gmays•3h ago•1 comment

Next steps for BPF support in the GNU toolchain

https://lwn.net/Articles/1039827/
95•signa11•14h ago•18 comments

Amazon-backed nuclear facility for Washington state

https://www.geekwire.com/2025/a-first-look-at-the-amazon-backed-next-generation-nuclear-facility-...
16•stikit•2h ago•2 comments

Metropolis 1998 lets you design every building in an isometric, pixel-art city (2024)

https://arstechnica.com/gaming/2024/08/metropolis-1998-lets-you-design-every-building-in-an-isome...
78•YesBox•3h ago•30 comments

New computer model helps reveal how the brain both adapts and misfires

https://now.tufts.edu/2025/10/16/flight-simulator-brain-reveals-how-we-learn-and-why-minds-someti...
50•XzetaU8•12h ago•18 comments

Ask HN: How to stop an AWS bot sending 2B requests/month?

144•lgats•12h ago
I have been struggling with a bot ('Mozilla/5.0 (compatible; crawler)') coming from AWS Singapore that has been sending an absurd number of requests to a domain of mine, averaging over 700 requests/second for several months now. Thankfully, Cloudflare is able to handle the traffic with a simple WAF rule and a 444 response to reduce the outbound traffic.
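The rule itself is nothing exotic: roughly a single custom-rule expression along these lines (field names as in Cloudflare's Rules language, AS16509 being Amazon's ASN), with the action set to Block:

    (http.user_agent contains "compatible; crawler" and ip.geoip.asnum eq 16509)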

I've submitted several complaints to AWS to get this traffic to stop; their typical follow-up is: "We have engaged with our customer, and based on this engagement have determined that the reported activity does not require further action from AWS at this time."

I've tried various 4XX responses to see if the bot will back off, and I've tried 30X redirects (which it follows), all to no avail.

The traffic is hitting numbers that require me to renegotiate my contract with Cloudflare, and it is otherwise a nuisance when reviewing analytics/logs.

I've considered redirecting the entirety of the traffic to the AWS abuse report page, but at this scale it's essentially a small DDoS network, and sending it anywhere could be considered abuse in itself.

Have others had a similar experience?

Comments

giardini•12h ago
Hire a lawyer and have him send the bill for his services to them immediately with a note on the consequences of ignoring his notices. Bill them aggressively.
Animats•10h ago
Yes. Computer Fraud and Abuse Act to start.

The first demand letter from a lawyer will usually stop this. The great thing about suing big companies is that they have to show up. You have no contractual agreement which prevents suing; this is entirely from the outside.

SoftTalker•14m ago
Threatening to sue is one thing. Actually doing it will cost you time and money. And even if you get a judgement, how are you going to collect from some rando in Singapore?
tempestn•10h ago
That's not how lawyers or bills work, unfortunately in this case, but fortunately in general.
bigfatkitten•12h ago
Do you receive, or expect to receive any legitimate traffic from AWS Singapore? If not, why not blackhole the whole thing?
caprock•11h ago
Agreed. You should be able to set the WAF to just drop the packets and not even bother with the overhead of a response. I think the Cloudflare WAF calls this "block".
marginalia_nu•10h ago
Yeah, this is the way. Dropping the packets makes the requests cheaper to respond to than to make.

The problem with DDoS attacks is generally the asymmetry: it requires more resources to deal with the request than to make it. Cute attempts to get back at the attacker with various tarpits generally magnify this and make it hit even harder.

firecall•10h ago
Yep, I did this for a while.

The TikTok ByteDance / Bytespider bots were making millions of image requests from my site.

Over and over again and they would not stop.

I eventually got Cloudinary to block all the relevant user agents, and initially just totally blocked Singapore.

It’s very abusive on the part of these bot running AI scraping companies!

If I hadn’t been using the kind and generous Cloudinary, I could have been stuck with some seriously expensive hosting bills!

Nowadays I just block all AI bots with Cloudflare and be done with it!

lozenge•9h ago
Here are the IP address ranges: https://docs.aws.amazon.com/vpc/latest/userguide/aws-ip-work...
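The same data is available in machine-readable form at https://ip-ranges.amazonaws.com/ip-ranges.json. A small stdlib-only sketch that pulls the current prefixes for the Singapore region (ap-southeast-1) so they can be fed into a blocklist:

    import json
    import urllib.request

    # AWS publishes its current IP ranges as JSON
    URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

    with urllib.request.urlopen(URL) as resp:
        data = json.load(resp)

    # collect the IPv4 prefixes announced for the Singapore region
    prefixes = sorted({
        p["ip_prefix"]
        for p in data["prefixes"]
        if p["region"] == "ap-southeast-1"
    })

    for cidr in prefixes:
        print(cidr)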
Jean-Papoulos•11h ago
You don't even need to send a response. Just block the traffic and move on
MrThoughtful•11h ago
If it follows redirects, have you tried redirecting it to its own domain?
Scotrix•11h ago
Just find a hoster with low egress traffic cost, reverse-proxy normal traffic through Cloudflare, and reply with 2 GB files for the bot. They annoy you and cost you money, so make them pay.
tgsovlerkhgsel•10h ago
Isn't ingress free at AWS? You'd have to find a way to generate absurd amounts of egress traffic - absurd enough to be noticed compared to billions of HTTP requests. 2B requests at 1 KB/request is 2 TB/month so they're likely paying a double-digit dollar amount just for the traffic they're sending to you (wtf - where does that money come from?).

But since AWS considers this fine, I'd absolutely take the "redirecting the entirety of the traffic to aws abuse report page" approach. If they consider it abuse - great, they can go turn it off then. The bot could behave differently but at least curl won't add a referer header or similar when it is redirected, so the obvious target would be their instance hosting the bot, not you.

Actually, I would find the biggest file I can that is hosted by Amazon itself (not another AWS customer) and redirect them to it. I bet they're hosting linux images somewhere. Besides being more annoying (and thus hopefully attention-getting) for Amazon, it should keep the bot busy for longer, reducing the amount of traffic hitting you.

If the bot doesn't eat files over a certain size, try to find something smaller or something that doesn't report the size in response to a HEAD request.

ndriscoll•1h ago
If it's making outbound requests it might be going through a NAT gateway, in which case response traffic will be expensive.
hylaride•1h ago
I'd be surprised to see a mass-scraping bot behind a NAT gateway. They're probably using public lambdas where they can't even control the egress IPs (unless something has changed in the last 6 months since I last looked) and sending results to a queue or bucket somewhere.

What I'd do is block the AWS AP range at the edge (unless there's something else there that needs access to your site) - you can get regularly updated, JSON-formatted lists around the internet - or have something match its fingerprint and send it heaps of garbage, like the zip bombs others have suggested. It could be a recursive "you're abusing my site - go away" or what-have-you. You could also do some kind of grey-listing, where you limit the speed to a crawl so that each connection just consumes crawler resources and gets little content. If they are tracking this, they'll see the performance issues and maybe adjust.

2000swebgeek•11h ago
Block the IPs, or set up a WAF on AWS if you cannot be on Cloudflare.
re-thc•10h ago
AWS WAF isn’t free. Definitely cheaper but all the hits still cost.
shishcat•11h ago
If it follows redirects, redirect it to a 10 GB gzip bomb.
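A minimal sketch of building one (assuming the bot advertises gzip support and actually decompresses response bodies): gzip tops out at roughly 1000:1 on runs of zeros, so ~10 MiB on the wire expands to ~10 GiB on their end.

    import gzip

    # ~10 GiB of zeros compresses to roughly 10 MiB at gzip's ~1000:1 ceiling
    chunk = b"\0" * (1024 * 1024)           # 1 MiB of zeros
    with gzip.open("bomb.gz", "wb", compresslevel=9) as f:
        for _ in range(10 * 1024):          # write 10 GiB in total
            f.write(chunk)
    # serve bomb.gz with the header: Content-Encoding: gzip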
nake89•11h ago
I was just going to post the same thing. Happy somebody else thought of the same thing :D
sixtyj•10h ago
You nasty ones ;)
cantor_S_drug•10h ago
https://zadzmo.org/code/nepenthes/

This is a tarpit intended to catch web crawlers. Specifically, it targets crawlers that scrape data for LLMs - but really, like the plants it is named after, it'll eat just about anything that finds its way inside.

It works by generating an endless sequence of pages, each with dozens of links, that simply go back into the tarpit. Pages are randomly generated, but in a deterministic way, causing them to appear to be flat files that never change. An intentional delay is added to prevent crawlers from bogging down your server, in addition to wasting their time. Lastly, Markov-babble is added to the pages, to give the crawlers something to scrape up and train their LLMs on, hopefully accelerating model collapse.

https://news.ycombinator.com/item?id=42725147

Is this a good solution??

iberator•9h ago
Best tarpit ever.
brunkerhart•11h ago
Write to aws abuse team
swiftcoder•11h ago
Making the obviously-abusive bot prohibitively expensive is one way to go, if you control the terminating server.

A gzip bomb is good if the bot happens to be vulnerable, but even just slowing down their connection rate is often sufficient - waiting just 10 seconds before responding with your 404 will tie up ~7,000 ports on their box (700 requests/second × 10 s), which should be enough to crash most Linux processes (nginx + mod-http-echo is a really easy way to set this up).
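The nginx route is probably the least effort, but the same delay idea fits in a few lines of stdlib asyncio if you'd rather sketch it yourself (port and delay are arbitrary):

    import asyncio

    async def handle(reader, writer):
        await reader.read(4096)   # swallow whatever the bot sent
        await asyncio.sleep(10)   # hold the connection open for 10 s
        writer.write(b"HTTP/1.1 404 Not Found\r\nContent-Length: 0\r\nConnection: close\r\n\r\n")
        await writer.drain()
        writer.close()

    async def main():
        server = await asyncio.start_server(handle, "0.0.0.0", 8080)
        async with server:
            await server.serve_forever()

    asyncio.run(main())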

Orochikaku•10h ago
Thinking along the same lines, a PoW check like Anubis [1] may work for OP as well.

[1] https://github.com/TecharoHQ/anubis

hshdhdhehd•8h ago
Avoid it if you don't have to. It's not very friendly to good traffic, especially if current blocking works.
CaptainOfCoit•4m ago
> Especially if current blocking works.

The submission and its context are about when current blocking doesn't work...

lagosfractal42•10h ago
This kind of reasoning assumes the bot continues to be non-stealthy
swiftcoder•9h ago
I mean, forcing them to spend engineering effort to make their bot stealthy (or to be able to maintain tens of thousands of open ports) is still driving up their costs, so I'd count it as a win. The OP doesn't say why the bot is hitting their endpoints, but I doubt the bot is a profit centre for the operator.
lagosfractal42•5h ago
You risk flagging real users as bots, which drives down your profits and reputation
swiftcoder•5h ago
In this case I don't think they do - unless the legitimate users are also hitting your site at 700 RPS (in which case, the added load from the bot is going to be negligible)
heavyset_go•1h ago
If going stealth means not blatantly DDoS'ing the OP then that's a better outcome than what's currently happening
somat•36m ago
xkcd 810 comes to mind. https://xkcd.com/810/

"what if we make the bots go stealthy and indistinguishable from actual human requests?"

"Mission Accomplished"

mkj•9h ago
AWS customers have to pay for outbound traffic. Is there a way to get them to send you (or cloudflare) huge volumes of traffic?
horseradish7k•9h ago
yeah, could use a free worker
_pdp_•9h ago
A kilobyte-sized zip file can expand to gigabytes or petabytes through recursive nesting - though it depends on their implementation.
sim7c00•8h ago
That's traffic in the other direction.
swiftcoder•4h ago
The main joy of a zip bomb is that it doesn't consume much bandwidth - the transferred compressed file is relatively small, and it only becomes huge when the client tries to decompress it in memory afterwards
crazygringo•28m ago
It's still going in the wrong direction.
dns_snek•15m ago
It doesn't matter either way. OP was thinking about ways to consume someone's bandwidth. A zip bomb doesn't consume bandwidth, it consumes computing resources of its recipient when they try to unpack it.
gildas•9h ago
Great idea, some people have already implemented it for the same type of need, it would seem (see the list of user agents in the source code). Implementation seems simple.

https://github.com/0x48piraj/gz-bomb/blob/master/gz-bomb-ser...

CWuestefeld•28m ago
We've been in a similar situation. One thing we considered doing was to give them bad data.

It was pretty clear in our case that they were scraping our site to get our pricing data. Our master catalog had several million SKUs, priced dynamically based on availability, customer contracts, and other factors. And we tried to add some value to the product pages, with relevant recommendations for cross-sells, alternate choices, etc. This was pretty compute-intensive, and the volume of the scraping could amount to a DoS at times. Like, they could bury us in bursts of requests so quickly that our infrastructure couldn't spin up new virtual servers, and once we were buried, it was difficult to dig back out from under the load. We learned a lot during this period, including some very counterintuitive stuff about how some approaches to queuing and prioritizing that sounded great on paper could actually have unintended effects that made such situations worse.

One strategy we talked about was that, rather than blocking the bad guys, we'd tag the incoming traffic. We couldn't do this with perfect accuracy, but the inaccuracy was such that we could at least ensure it wasn't affecting real customers (because we could always know when it was a real, logged-in user). We realized that we could at least cache the data in the borderline cases so we wouldn't have to recalculate (it was a particularly stupid bot that was attacking us, re-requesting the same stuff many times over); from that it was a small step to see that we could at the same time add a random fudge factor into any numbers, hoping to get to a state where the data did our attacker more harm than good.
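A sketch of what that fudge-factor step could look like (hypothetical helper; seeding from the SKU keeps the bogus number stable across requests, so it stays cacheable and is harder to spot):

    import hashlib
    import random

    def displayed_price(true_price: float, sku: str, is_suspect: bool) -> float:
        """Real price for trusted traffic; a stable but slightly wrong one otherwise."""
        if not is_suspect:
            return true_price
        # derive a per-SKU seed so repeated scrapes see the same (wrong) number
        seed = int(hashlib.sha256(sku.encode()).hexdigest(), 16)
        fudge = random.Random(seed).uniform(-0.07, 0.07)   # +/- 7%
        return round(true_price * (1 + fudge), 2)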

We wound up doing what the OP is now doing, working with CloudFlare to identify and mitigate "attacks" as rapidly as possible. But there's no doubt that it cost us a LOT, in terms of developer time, payments to CF, and customer dissatisfaction.

By the way, this was all the more frustrating because we had circumstantial evidence that the attacker was a service contracted by one of our competitors. And if they'd come straight to us to talk about it, we'd have been much happier (and I think they would have been as well) to offer an API through which they could get the catalog data easily and in a way where we don't have to spend all the compute on the value-added stuff we were doing for humans. But of course they'd never come to us, or even admit it if asked, so we were stuck. And while this was going, there was also a case in the courts that was discussed many times here on HN. It was a question about blocking access to public sites, and the consensus here was something like "if you're going to have a site on the web, then it's up to you to ensure that you can support any requests, and if you can't find a way to withstand DoS-level traffic, it's your own fault for having a bad design". So it's interesting today to see that attitudes have changed.

snvzz•10h ago
Null-route the entirety of AWS ip space.
JCM9•10h ago
Have ChatGPT write you a sternly worded cease and desist letter and send it to Amazon legal via registered mail.

AWS has become rather large and bloated and does stupid things sometimes, but they do still respond when you get their lawyers involved.

molszanski•10h ago
Maybe add this IP to a blacklist? https://iplists.firehol.org/ It would be easier to pressure AWS when it is there
reisse•10h ago
What kind of content do you serve? 700 RPS is not a big number at all, for sure not enough to qualify as a DoS. I'm not surprised AWS did not take any action.
marginalia_nu•10h ago
FWIW, an HN hug of death, which fairly regularly knocks sites offline, tends to peak at a few dozen RPS.
reisse•10h ago
On the other hand, I've only seen complaint letters from AWS for doing tens of thousands of RPS on rate-limited endpoints for multiple days. Even then, AWS wasn't the initiator of inquiry (it was their customer being polled), and it wasn't a "cease and desist" kind of letter, it was "please explain what you're doing and prove you're not violating our ToS".
hsbauauvhabzb•9h ago
Why would aws care if you’re consuming one of their customers resources when the customer is the one that pays?
Hizonner•4h ago
> 700 RPS is not a big number at all, for sure not enough to qualify as a DoS.

That depends on what's serving the requests. And if you're making the requests, it is your job to know that beforehand.

pingoo101010•10h ago
Take a look at https://github.com/pingooio/pingoo

It's a reverse-proxy / load balancer with built-in firewall and automatic HTTPS. You will be able to easily block the annoying bots with rules (https://pingoo.io/docs/rules)

neya•10h ago
I had this issue on one of my personal sites. It was a blog I used to write maybe 7-8 years ago. All of a sudden, I see insane traffic spikes in analytics. I thought some article went viral, but realized it was too robotic to be true. And so I narrowed it down to some developer trying to test their bot/crawler on my site. I tried asking nicely, several times, over several months.

I was so pissed off that I setup a redirect rule for it to send them over to random porn sites. That actually stopped it.

sim7c00•7h ago
This is the best approach, honestly. Redirect them to some place that undermines their efforts: either back to themselves, their own provider, or nasty crap that no one wants to find in their crawler logs.
throwaway422432•7h ago
Goatse?

Wouldn't recommend Googling it. You either know or just take a guess.

Rendello•45m ago
I googled a lot of shock sites after seeing them referenced and not knowing what they were. Luckily Google and Wikipedia tended to shield my innocent eyes while explaining what I should be seeing.

The first goatse I actually saw was in ASCII form, funnily enough.

znpy•9h ago
> I've tried 30X redirects (which it follows) to no avail

Make it follow redirects to some kind of illegal website. Be creative, I guess.

The reasoning being that if you can get AWS to trigger security measures on their side, maybe AWS will shut down their whole account.

_pdp_•9h ago
As others have suggested, you can try to fight back, depending on the capabilities of your infrastructure. All crawlers will have some kind of queuing system. If you manage to cause the queues to fill up, the crawler won't be able to send as many requests. For example, you can allow the crawler to open the socket but only send the data very slowly, causing the queues to fill up quickly with busy workers.

Depending on how the crawler is designed, this may or may not work. If they are using SQS with Lambda then that will obviously not work, but it will still cost them, because the serverless functions will be running for longer (5-15 minutes).

Another technique that comes to mind is to try to force the client to upgrade the connection (e.g. to a WebSocket). See what happens. Mostly it will fail, but even if it gets stalled for 30 seconds, that is a win.

stevoski•9h ago
> Thankfully, CloudFlare is able to handle the traffic with a simple WAF rule and 444 response to reduce the outbound traffic.

This is from your own post, and is almost the best answer I know of.

I recommend you configure a Cloudflare WAF rule to block the bot - and then move on with your life.

Simply block the bot and move on with your life.

hamburgererror•8h ago
There might be some ideas to dig here: https://news.ycombinator.com/item?id=41923635
theginger•8h ago
If it follows the redirect I would redirect it to random binary files hosted by Amazon, then see if it continues to not require any further action
nurettin•8h ago
What kind of website is this that makes it so lucrative to run so many requests?
sim7c00•8h ago
If they have some service up on the machines the bot connects from, then you can redirect them to themselves.

Otherwise, maybe redirect to the AWS customer portal or something -_- maybe they will stop it if it hits themselves...

hyperknot•7h ago
Use a simple block rule, not a WAF rule, those are free.
ahazred8ta•7h ago
Silly suggestion: feed them bogus DNS info. See if you can figure out where their DNS requests are coming from.
locusm•7h ago
I am dealing with a similar situation and kinda screwed up, as I managed to get Google Ads suspended due to blocking Singapore. I see a mix of traffic from AWS, Tencent, and Huawei Cloud at the moment. Currently I'm just scanning server logs and blocking IP ranges.
crazygringo•22m ago
> I managed to get Google Ads suspended due to blocking Singapore

How did that happen, why? I feel like a lot of people here would not want to make the same mistake, so details would be very welcome.

As long as pages weren't being served and so there was never any case of requesting ads but never showing them, I don't understand why Ads would care?

bcwhite•5h ago
I redirect such traffic to a subdomain with an IP address that isn't assigned (or legally assignable). The bots just wait for a response to their connection requests but never get one, which typically seems to cost them 10 s of waiting each. The traffic doesn't come to my servers, and it doesn't risk legitimate users who might hit it by mistake.
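A hypothetical zone entry for such a trap subdomain, pointed at a reserved documentation address (192.0.2.0/24 is never routed on the public internet):

    trap.example.com.    300    IN    A    192.0.2.1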
bcwhite•5h ago
An idea I had was a custom kernel that replied ACK (or SYN+ACK) to every TCP packet. All connections would appear to stay open forever, eating all incoming traffic, and never replying, all while using zero resources of the device. Bots might wait minutes (or even forever) per connection.
xena•1h ago
Main author of Anubis here. Have Cloudflare return an HTTP 200 response instead of a non-200 rejection. These bots keep hammering until they get a 200 response, so giving them one makes them stop.
Rothnargoth•1h ago
Blocking before the traffic reaches the application servers (what you're doing) is the most effective and the most cost- and time-efficient option.

It sounds like the bot operator is spending enough on AWS to withstand the current level of abuse reports.

If you really wanted to retaliate, you could try getting a warrant to force AWS to disclose the owners of that AWS instance.

jedberg•1h ago
Tell cloudflare it's abusive, and they will block it outside your account so it doesn't count against you.
reconnecting•1h ago
tirreno(1) guy here.

I'd suggest taking a look at the patterns and IP rotation (if any), and perhaps blocking the IP CIDR at the web server level if the range is short.

Why is a simple deny from 12.123.0.0/16 (Apache) not working for you?

1. https://github.com/tirrenotechnologies/tirreno

AdamJacobMuller•1h ago
> I've tried 30X redirects (which it follows)

301 response to a selection of very large files hosted by companies you don't like.

When their AWS instances start downloading 70,000 Windows ISOs in parallel, they might notice.

Hard to do with Cloudflare, but you can also tarpit them: accept the request and send the response one character at a time (make sure you uncork and flush buffers, etc.), with a 30-second delay between characters.

700 requests/second with, say, 10 KB of headers per response. Sure is a shame your server is so slow.

notatoad•54m ago
>301 response to a selection of very large files hosted by companies you don't like.

i suggest amazon

n_u•1h ago
Dumb question, but just because I didn't see it mentioned: have you tried using a Disallow: / in your robots.txt? Or Crawl-delay: 10? That would be the first thing I would try.
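For reference, the minimal robots.txt that asks every crawler to stay away entirely costs nothing to try, even if this particular bot ignores it:

    User-agent: *
    Disallow: /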

Sometimes these crawlers are just poorly written not malicious. Sometimes it’s both.

I would try a zip bomb next. I know there’s one that is 10 MB over the network and unzips to ~200TB.

pknerd•1h ago
It's for crawlers not custom scrapers
n_u•49m ago
Respecting robots.txt is a convention not enforced by anything so yes the bot is certainly free to ignore it.

But I'm not sure I understand your distinction. A scraper is a crawler regardless of whether it is "custom" or an off-the-shelf solution.

The author also said the bot identified itself as a crawler

> Mozilla/5.0 (compatible; crawler)

pknerd•58m ago
Redirect it to Trump's website. He will take care of it
g-mork•51m ago
Cloudflare page rule or similar to a custom internal URL with the max request timeout jacked up as high as possible, then stick a little async web server behind it that hangs every request after the first byte for, say, an hour. Give the async web server a good chunk of RAM to waste. Most providers don't bill for time, only bytes, and most bots have some timeout tolerance, especially when the status headers and body are already being sent.

Similarly, you can also try delivering one byte every 10 or 30 seconds, or whatever keeps the client on the other end hanging around without hitting an internal timeout.

    import asyncio, itertools

    # drip the payload one byte at a time; `resp` is whatever streaming
    # response object your async framework hands you
    for char in itertools.cycle(b"FUCKOFF"):   # iterating bytes yields ints
        await resp.send(bytes([char]))          # send a single byte
        await resp.flush()
        await asyncio.sleep(10)
        # etc
In the SMTP years we called this tarpitting IIRC
yabones•40m ago
Return a 200 with the EICAR test string in the body. Nothing like some data poisoning for some vindictive fun

https://en.wikipedia.org/wiki/EICAR_test_file

kachapopopow•32m ago
Redirect it to the client IP. It's not abuse, since you're just an innocent redirect-to-client-IP service. The (most probable) timeout should make them consider the service dead after a couple of days; or, even better, they overload their own servers if there is a page at the client IP; or, better still, it trips an automatic abuse trigger that shuts the service down.
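A sketch of such a redirect-to-client-IP endpoint (hypothetical Flask handler; behind Cloudflare, the original client address arrives in the CF-Connecting-IP header rather than as the socket peer):

    from flask import Flask, redirect, request

    app = Flask(__name__)

    @app.route("/", defaults={"path": ""})
    @app.route("/<path:path>")
    def bounce(path):
        # prefer Cloudflare's header for the original client address
        client_ip = request.headers.get("CF-Connecting-IP", request.remote_addr)
        return redirect(f"http://{client_ip}/", code=302)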
scrps•22m ago
Singapore's comms regulator bans porn (even possessing it); serve up some softcore to the bot, then e-mail the regulator and AWS.
CaptainOfCoit•2m ago
To be honest, I'd give that a try too. When someone is bothering you across the internet, the best way to reply is to use their local legal system against them; not many other parties will care otherwise.