OutOfHere•2h ago
More useless and harmful anti-bot nonsense, probably with many false detections, when a simple, neutral 429 rate-limit response does the job.
halb•1h ago
I guess the blame is on me here for providing only very brief context on the topic, which makes it sound like this is just an anti-scraping solution.
Fingerprinting solutions like this are widely used everywhere, and their goal is not to directly detect or block bots, especially not harmless scrapers. They just provide an additional datapoint that can be used to track patterns in website traffic and eventually block fraud or automated attacks - that kind of bot.
OutOfHere•1h ago
If it's making a legitimate request, it's not an automated attack. If it's exceeding its usage quota, that's a simple problem that doesn't require eBPF.
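Something like this rough Go sketch is all I have in mind, using golang.org/x/time/rate; the 2 req/s limit, the burst of 10, and keying on the remote IP are illustrative assumptions, not recommendations:

  // Rough sketch of a per-IP token bucket that answers 429 when exceeded.
  // Limits, burst, and keying by remote IP are illustrative assumptions.
  package main

  import (
      "net"
      "net/http"
      "sync"

      "golang.org/x/time/rate"
  )

  var (
      mu       sync.Mutex
      limiters = map[string]*rate.Limiter{}
  )

  func limiterFor(ip string) *rate.Limiter {
      mu.Lock()
      defer mu.Unlock()
      l, ok := limiters[ip]
      if !ok {
          l = rate.NewLimiter(2, 10) // 2 req/s, burst of 10
          limiters[ip] = l
      }
      return l
  }

  func limit(next http.Handler) http.Handler {
      return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
          ip, _, _ := net.SplitHostPort(r.RemoteAddr)
          if !limiterFor(ip).Allow() {
              w.Header().Set("Retry-After", "1")
              http.Error(w, "rate limit exceeded", http.StatusTooManyRequests) // 429
              return
          }
          next.ServeHTTP(w, r)
      })
  }

  func main() {
      mux := http.NewServeMux()
      mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) { w.Write([]byte("ok\n")) })
      http.ListenAndServe(":8080", limit(mux))
  }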
halb•1h ago
What kind of websites do you have in mind when I talk about fraud patterns? Not everything is a static website, and I absolutely agree with you on that point: if your static website is struggling under the load of a scraper, there is something deeply wrong with your architecture. We live in wonderful times; Nginx on my 2015 laptop can gracefully handle 10k requests per second before I even turn on rate limiting.
Unfortunately there are bad people out there, and they know how to write code. Take a look at popular websites like TikTok, Amazon, or Facebook. They are inundated with fraudulent requests whose goal is to use their services in ways that are harmful to others, or straight up illegal - everything from spam to money laundering. On social media, bots impersonate people in an attempt to influence public discourse and undermine democracies.
Retr0id•27m ago
This is an overly simplistic view that does not reflect reality in 2025.
konsalexee•1h ago
Sure, but it's a nice exploration of layer-4-style detection.
aorth•1h ago
Why is it useless and harmful? Many of us are struggling—without massive budgets or engineering teams—to keep services up due to incredible load from scrapers in recent years. We do use rate limiting, but scrapers circumvent it with residential proxies and brute force. I often see concurrent requests from hundreds or thousands of IPs in one data center. Who do these people think they are?
OutOfHere•1h ago
It is harmful because innocent users routinely get caught in your dragnet. And why even have a public website if the goal is not to serve it?
What is the actual problem with serving users? You mentioned incredible load. I would stop using inefficient PHP or JavaScript or Ruby for web servers. I would use Go or Rust or a comparable efficient server with native concurrency. Survival always requires adaptation.
How do you know that the alleged proxies belong to the same scrapers? I would look carefully at the values in the IP chain reported by the X-Forwarded-For (XFF) header to decide which subnets to rate-limit, based on how they appear in that chain.
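Roughly the kind of keying I mean, as a sketch only; picking the left-most XFF entry and grouping into /24 and /48 subnets are illustrative assumptions, and in practice you would only trust XFF values appended by your own edge proxies:

  // Sketch: derive a rate-limit key by grouping client addresses from the
  // X-Forwarded-For chain into /24 (IPv4) or /48 (IPv6) subnets.
  package main

  import (
      "fmt"
      "net"
      "net/http"
      "strings"
  )

  func subnetKey(r *http.Request) string {
      // Use the left-most XFF entry (the claimed client); fall back to RemoteAddr.
      addr := r.RemoteAddr
      if xff := r.Header.Get("X-Forwarded-For"); xff != "" {
          addr = strings.TrimSpace(strings.Split(xff, ",")[0])
      }
      host, _, err := net.SplitHostPort(addr)
      if err != nil {
          host = addr // XFF entries usually carry no port
      }
      ip := net.ParseIP(host)
      if ip == nil {
          return addr
      }
      if v4 := ip.To4(); v4 != nil {
          return v4.Mask(net.CIDRMask(24, 32)).String() + "/24"
      }
      return ip.Mask(net.CIDRMask(48, 128)).String() + "/48"
  }

  func main() {
      http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
          fmt.Fprintln(w, "rate-limit bucket:", subnetKey(r))
      })
      http.ListenAndServe(":8080", nil)
  }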
Another way is to require authentication for expensive endpoints.
immibis•26m ago
Residential proxy users are paying on the order of $5 per gigabyte, so send them really big files once detected. Or "click here to load the page properly" followed by a trickle of garbage data.
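A rough sketch of the trickle idea in Go; how a request gets flagged in the first place is assumed to happen elsewhere, and the chunk size and pacing are arbitrary:

  // Rough sketch of the "trickle of garbage" idea: once a client is flagged,
  // drip random bytes at it slowly instead of serving the real page.
  package main

  import (
      "crypto/rand"
      "net/http"
      "time"
  )

  func tarpit(w http.ResponseWriter, r *http.Request) {
      flusher, ok := w.(http.Flusher)
      if !ok {
          http.Error(w, "slow down", http.StatusTooManyRequests)
          return
      }
      chunk := make([]byte, 1024)
      for i := 0; i < 600; i++ { // roughly ten minutes, ~600 KB in total
          rand.Read(chunk)
          if _, err := w.Write(chunk); err != nil {
              return // client gave up
          }
          flusher.Flush()
          time.Sleep(time.Second)
      }
  }

  func main() {
      // Assumption: some upstream check decides which requests land here.
      http.HandleFunc("/flagged", tarpit)
      http.ListenAndServe(":8080", nil)
  }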
jeffbee•1h ago
Guy who has never operated anything, right here ^
TCP fingerprinting is so powerful that sending SMTP temporary failure codes to Windows boxes stops most spam. You can't ignore that kind of utility.
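As a toy illustration of that policy, assuming the OS guess comes from a passive fingerprinting tool such as p0f (the fingerprint lookup itself is stubbed out here):

  // Toy sketch: greet SMTP clients fingerprinted as Windows desktops with a
  // temporary failure, so real MTAs retry and most spam bots give up.
  package main

  import (
      "fmt"
      "strings"
  )

  // osGuess is assumed to come from a passive fingerprinting tool (e.g. p0f);
  // the lookup is not implemented here.
  func smtpGreeting(osGuess string) string {
      if strings.Contains(strings.ToLower(osGuess), "windows") {
          return "421 Service temporarily unavailable, try again later"
      }
      return "220 mail.example.org ESMTP ready"
  }

  func main() {
      fmt.Println(smtpGreeting("Windows 10"))  // temporary failure
      fmt.Println(smtpGreeting("Linux 3.11+")) // normal greeting
  }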
ghotli•43m ago
I downvoted you due to the way you're communicating in this thread. Be kind, rewind. Perhaps review the guidelines here, since your account is only a little over a year old.
I found this article useful and insightful. I don't have a bot problem at present; I have an adjacent problem and found this context useful for an ongoing investigation.