We'd be happy to gather initial feedback on usability and features, especially from people with good or bad experience with bots.
*Requirements*
The analyzer relies on 3 Tempesta FW specific features, which you can still get with other HTTP servers or accelerators:
1. JA5 client fingerprinting (https://tempesta-tech.com/knowledge-base/Traffic-Filtering-b...). This is HTTP- and TLS-layer fingerprinting, similar to the JA4 (https://blog.foxio.io/ja4%2B-network-fingerprinting) and JA3 fingerprints. The latter is also available in Envoy (https://www.envoyproxy.io/docs/envoy/latest/api-v3/extension...) or as an Nginx module (https://github.com/fooinha/nginx-ssl-ja3), so check the documentation for your web server
2. Access logs are written directly to the ClickHouse analytics database, which can consume large data batches and quickly run analytic queries. For web proxies other than Tempesta FW, you typically need to build a custom pipeline to load access logs into ClickHouse (see the sketch after this list). Such pipelines aren't rare though.
3. The ability to block web clients by IP or JA5 hashes. IP blocking is probably available in any HTTP proxy.
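To illustrate item 2: a minimal Python sketch of such a pipeline, assuming the clickhouse-connect client, nginx-style combined logs and a made-up access_log schema (Tempesta FW itself needs none of this, since it writes to ClickHouse directly):

    # Sketch: batch-load nginx-style access logs into ClickHouse.
    # Hypothetical schema and log format; adapt to your setup.
    import re
    from datetime import datetime

    import clickhouse_connect

    client = clickhouse_connect.get_client(host='localhost')
    client.command(
        "CREATE TABLE IF NOT EXISTS access_log ("
        "  ts DateTime, ip String, status UInt16, bytes UInt64"
        ") ENGINE = MergeTree ORDER BY ts"
    )

    # Combined log format: ip - - [time] "request" status bytes ...
    LINE = re.compile(
        r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "[^"]*" '
        r'(?P<status>\d{3}) (?P<bytes>\d+)'
    )

    def ingest(path, batch_size=10_000):
        batch = []
        with open(path) as f:
            for line in f:
                m = LINE.match(line)
                if not m:
                    continue  # skip lines the regex doesn't recognize
                batch.append((
                    datetime.strptime(m['ts'], '%d/%b/%Y:%H:%M:%S %z'),
                    m['ip'], int(m['status']), int(m['bytes']),
                ))
                # ClickHouse strongly prefers large batches over single rows.
                if len(batch) >= batch_size:
                    client.insert('access_log', batch,
                                  column_names=['ts', 'ip', 'status', 'bytes'])
                    batch.clear()
        if batch:
            client.insert('access_log', batch,
                          column_names=['ts', 'ip', 'status', 'bytes'])

In practice a log shipper such as Vector or Fluent Bit with a ClickHouse sink can do the same job with less code.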
*How does it work*
This is a daemon, which
1. Learns normal traffic profiles: means and standard deviations for client requests per second, error responses, bytes per second and so on. It also remembers client IPs and fingerprints.
2. Watches for a spike in the z-score (https://en.wikipedia.org/wiki/Standard_score) of any traffic characteristic (it can also be triggered manually). On a spike, it enters data model search mode (see the first sketch after this list).
3. Picks a candidate model. For example, the first model could be the top 100 JA5 HTTP hashes which produce the most error responses per second (typical for password crackers). Or it could be the top 1000 IP addresses generating the most requests per second (L7 DDoS). Next, this model is verified.
4. The daemon repeats the query over a sufficiently long period in the past to see whether a huge fraction of clients appears in both query results. If yes, then the model is bad and we go back to the previous step to try another one. If not, then we have (likely) found a representative query (the second sketch below shows this search-and-verify loop).
5. Transfers the IP addresses or JA5 hashes from the query results into the web proxy blocking configuration and reloads the proxy configuration on the fly (the last sketch below shows this hand-off).
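To make the steps concrete, here are three Python sketches against the same hypothetical access_log table as above; all names and thresholds are illustrative, not our actual implementation. First, the z-score trigger from step 2:

    # Sketch of the z-score spike trigger (step 2).
    import statistics

    import clickhouse_connect

    client = clickhouse_connect.get_client(host='localhost')

    def spike_detected(metric_sql, z_threshold=3.0):
        # Per-minute values of some traffic characteristic (RPS,
        # error responses, bytes/s, ...) over the learning window.
        history = [row[0] for row in client.query(metric_sql).result_rows]
        current, past = history[-1], history[:-1]
        mu = statistics.fmean(past)
        sigma = statistics.stdev(past)
        if sigma == 0:
            return False  # flat traffic, no meaningful z-score
        return (current - mu) / sigma > z_threshold

    rps_per_minute = """
        SELECT count()
        FROM access_log
        WHERE ts > now() - INTERVAL 1 DAY
        GROUP BY toStartOfMinute(ts)
        ORDER BY toStartOfMinute(ts)
    """
    if spike_detected(rps_per_minute):
        print("traffic anomaly: entering model search mode")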
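Next, the model search and verification loop from steps 3-4, continuing with the same client (the ja5h column and the 0.5 overlap cutoff are assumptions): take the top offenders in the anomaly window, rerun the same query over a long "normal" window, and reject the model if the overlap is huge.

    # Sketch of model search and verification (steps 3-4).
    def top_error_ja5h(window_sql, limit=100):
        rows = client.query(f"""
            SELECT ja5h
            FROM access_log
            WHERE status >= 400 AND {window_sql}
            GROUP BY ja5h
            ORDER BY count() DESC
            LIMIT {limit}
        """).result_rows
        return {r[0] for r in rows}

    now_set = top_error_ja5h("ts > now() - INTERVAL 5 MINUTE")
    past_set = top_error_ja5h(
        "ts BETWEEN now() - INTERVAL 7 DAY AND now() - INTERVAL 1 DAY")

    overlap = len(now_set & past_set) / max(len(now_set), 1)
    if overlap > 0.5:
        pass  # these clients looked the same before the spike: bad model
    else:
        blocklist = now_set - past_set  # (likely) representative model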
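Finally, step 5 for a generic proxy. Tempesta FW has its own blocking directives and live reconfiguration; this just shows the shape of the hand-off using plain nginx deny rules:

    # Sketch of step 5: render the blocklist into an nginx include
    # file with `deny` rules and hot-reload the config.
    import subprocess

    def push_blocklist(ips, include_path="/etc/nginx/conf.d/blocklist.conf"):
        with open(include_path, "w") as f:
            for ip in sorted(ips):
                f.write(f"deny {ip};\n")  # nginx access-control rule
        # Validate before reloading so a bad render can't take the proxy down.
        subprocess.run(["nginx", "-t"], check=True)
        subprocess.run(["nginx", "-s", "reload"], check=True)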
imiric•27m ago
The heuristics you use are interesting, but this will likely only be a hindrance to lazy bot creators. TLS fingerprints can be spoofed relatively easily, and most bots rotate their IPs and signals to avoid detection. With ML tools becoming more accessible, it's only a matter of time until bots are able to mimic human traffic well enough, both on the protocol and application level. Such bots probably exist already, even if the cost is prohibitively high for most attackers, but that cost will go down.
Theoretically, deploying ML-based defenses is the only viable path forward, but even that will become infeasible. As the amount of internet traffic generated by bots surpasses the current ~50%, you can't realistically block half the internet.
So, ultimately, I think allow lists are the only option if we want to have a usable internet for humans. We need a secure and user-friendly way to identify trusted clients, which, unfortunately, is ripe to be exploited by companies and governments. All proposed device attestation and identity services I've seen make me uneasy. This needs to be a standard built into the internet, based on modern open cryptography, and not controlled by a single company or government.
I suppose it already exists with TLS client authentication, but that is highly impractical to deploy. Is there an ACME protocol for clients? ... Huh, Let's Encrypt did support issuing client certs, but they dropped it[1].
[1]: https://news.ycombinator.com/item?id=44018400