frontpage.

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
2•AlexeyBrin•49s ago•0 comments

What the longevity experts don't tell you

https://machielreyneke.com/blog/longevity-lessons/
1•machielrey•2m ago•0 comments

Monzo wrongly denied refunds to fraud and scam victims

https://www.theguardian.com/money/2026/feb/07/monzo-natwest-hsbc-refunds-fraud-scam-fos-ombudsman
2•tablets•6m ago•0 comments

They were drawn to Korea with dreams of K-pop stardom – but then let down

https://www.bbc.com/news/articles/cvgnq9rwyqno
2•breve•9m ago•0 comments

Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•11m ago•0 comments

Bash parallel tasks and error handling

https://github.com/themattrix/bash-concurrent
2•pastage•11m ago•0 comments

Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
1•billiob•12m ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
2•birdculture•17m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•23m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•24m ago•1 comments

Slop News - HN front page right now hallucinated as 100% AI SLOP

https://slop-news.pages.dev/slop-news
1•keepamovin•29m ago•1 comments

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•31m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
3•tosh•37m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
4•oxxoxoxooo•40m ago•1 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•41m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
3•goranmoomin•45m ago•0 comments

Ask HN: Is the Downfall of SaaS Started?

3•throwaw12•46m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•47m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•50m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
3•myk-e•53m ago•5 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•53m ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
4•1vuio0pswjnm7•55m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
2•1vuio0pswjnm7•57m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•59m ago•2 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•1h ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•1h ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•1h ago•1 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•1h ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•1h ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•1h ago•1 comments

Show HN: An open source access logs analytics script to block bot attacks

https://github.com/tempesta-tech/webshield
37•krizhanovsky•3mo ago
This is a small PoC Python project for analyzing web server access logs to classify and dynamically block bad bots, such as L7 (application-level) DDoS bots, web scrapers and so on.

We'll be happy to gather initial feedback on usability and features, especially from people with good or bad experience with bots.

*Requirements*

The analyzer relies on three Tempesta FW-specific features, which you can still get with other HTTP servers or accelerators:

1. JA5 client fingerprinting (https://tempesta-tech.com/knowledge-base/Traffic-Filtering-b...). This is HTTP- and TLS-layer fingerprinting, similar to the JA4 (https://blog.foxio.io/ja4%2B-network-fingerprinting) and JA3 fingerprints. The latter is also available in Envoy (https://www.envoyproxy.io/docs/envoy/latest/api-v3/extension...) and as an Nginx module (https://github.com/fooinha/nginx-ssl-ja3), so check the documentation for your web server.

2. Access logs are written directly to the ClickHouse analytics database, which can consume large data batches and quickly run analytic queries. For web proxies other than Tempesta FW, you typically need to build a custom pipeline to load access logs into ClickHouse. Such pipelines aren't so rare, though.

3. The ability to block web clients by IP or JA5 hashes. IP blocking is probably available in any HTTP proxy.
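To illustrate the second point, the kind of analytic query this setup enables is a single ClickHouse SELECT over the access-log table. The sketch below only builds the SQL; the table and column names (access_log, ja5_http, status, timestamp) are hypothetical, not the actual Tempesta FW schema:

```python
# Hypothetical sketch: build a ClickHouse top-N query over an access-log
# table. Table/column names are illustrative, not the real schema.

def top_error_ja5(limit=100, window_minutes=5):
    """Return SQL selecting the JA5 HTTP hashes that produced the most
    4xx/5xx responses in the recent time window."""
    return f"""
        SELECT ja5_http, count() AS errors
        FROM access_log
        WHERE timestamp > now() - INTERVAL {window_minutes} MINUTE
          AND status >= 400
        GROUP BY ja5_http
        ORDER BY errors DESC
        LIMIT {limit}
    """.strip()
```

The resulting SQL can then be run with any ClickHouse client library, e.g. `clickhouse_driver.Client("localhost").execute(top_error_ja5())`.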

*How does it work*

This is a daemon which:

1. Learns normal traffic profiles: means and standard deviations for client requests per second, error responses, bytes per second and so on. It also remembers client IPs and fingerprints.

2. Watches for a spike in the z-score (https://en.wikipedia.org/wiki/Standard_score) of these traffic characteristics; it can also be triggered manually. On a spike, it enters data model search mode.

3. For example, the first model could be the top 100 JA5 HTTP hashes producing the most error responses per second (typical for password crackers), or the top 1000 IP addresses generating the most requests per second (L7 DDoS). Next, this model is verified.

4. The daemon repeats the query over a long-enough window in the past to see whether a huge fraction of the clients appears in both query results. If yes, the model is bad and we go back to the previous step to try another one. If not, we have (likely) found a representative query.

5. Transfers the IP addresses or JA5 hashes from the query results into the web proxy's blocking configuration and reloads the proxy configuration on the fly.
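The spike detection in step 2 and the model verification in step 4 can be sketched in a few lines of Python. This is a minimal illustration with made-up thresholds and function names, not code from the WebShield repository:

```python
import statistics

def zscore(history, current):
    """Standard score of the current value against the learned profile."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return 0.0
    return (current - mean) / stdev

def is_spike(history, current, threshold=3.0):
    """Step 2: trigger model search when a traffic characteristic
    (e.g. requests per second) deviates strongly from its profile."""
    return zscore(history, current) > threshold

def model_is_representative(attack_top, baseline_top, max_overlap=0.2):
    """Step 4: reject a candidate block list if too many of its clients
    also appear in the historical (pre-attack) top-N results, since that
    would mean blocking regular visitors."""
    overlap = len(set(attack_top) & set(baseline_top)) / len(attack_top)
    return overlap <= max_overlap
```

For example, with a learned profile of roughly 100 requests per second, a burst of 500 rps trips `is_spike`, while 104 rps does not; a candidate top-N list sharing most of its entries with the pre-attack baseline is discarded by `model_is_representative`.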

Comments

imiric•3mo ago
Thanks for sharing!

The heuristics you use are interesting, but this will likely only be a hindrance to lazy bot creators. TLS fingerprints can be spoofed relatively easily, and most bots rotate their IPs and signals to avoid detection. With ML tools becoming more accessible, it's only a matter of time until bots are able to mimic human traffic well enough, both on the protocol and application level. They probably exist already, even if the cost is prohibitively high for most attackers, but that will go down.

Theoretically, deploying ML-based defenses is the only viable path forward, but even that will become infeasible. As the amount of internet traffic generated by bots surpasses the current ~50%, you can't realistically block half the internet.

So, ultimately, I think allow lists are the only option if we want to have a usable internet for humans. We need a secure and user-friendly way to identify trusted clients, which, unfortunately, is ripe to be exploited by companies and governments. All proposed device attestation and identity services I've seen make me uneasy. This needs to be a standard built into the internet, based on modern open cryptography, and not controlled by a single company or government.

I suppose it already exists with TLS client authentication, but that is highly impractical to deploy. Is there an ACME protocol for clients? ... Huh, Let's Encrypt did support issuing client certs, but they dropped it[1].

[1]: https://news.ycombinator.com/item?id=44018400

krizhanovsky•3mo ago
This is quite insightful, thank you.

This particular project, WebShield, is simple and didn't take too long to develop. Basically, with this project we're trying to figure out what can be built with fingerprints and traffic characteristics in an analytic database. It seems easy to make PoCs with these features.

For now, if this tool can stop some dummy bots, we'll be happy. We definitely need more development and more sophisticated algorithms to fight some of the paid scraping proxies.

It's more or less simple to classify DDoS bots because they have a clear impact: system performance degrades. For some bots we can also define a target, both for the bots and for the protection system, e.g. the booked slots for visa appointments. For some scrapers this is harder.

Another opportunity is to dynamically generate classification features and verify the resulting models, build web page transition graphs and so on.

That's a good point about possibly blocking ~50% of the Internet. For DDoS we _mitigate_ an attack rather than _block_ it, so probably we should do the same for bots: just rate limit them instead of fully blocking.

Technically, we can implement verification of client-side certificates, but, yes, the main problem is adoption on the client side.

monster_truck•3mo ago
This is not realistically useful
gpi•3mo ago
Where can I learn more about JA5? John Althouse (the "JA" in the name) has not published anything about JA5 yet.
krizhanovsky•3mo ago
Hi,

thank you for the reply!

You can read about JA5 at https://tempesta-tech.com/knowledge-base/Traffic-Filtering-b... .

But the thing is that the hashes were just inspired by the work of John Althouse, and there is no actual relation.

Unfortunately, we didn't realize what "JA" stands for at the time we designed the feature. We will rename it: https://github.com/tempesta-tech/tempesta/issues/2533 .

Sorry for the confusion.

goodthink•3mo ago
My advice to little sites: Basic Auth stops bots dead in their tracks. The credentials have to be discoverable of course, which makes it useless for most sites, but it's not that inconvenient. Once entered, browsers offer to use the same creds on subsequent visits. I have several apps that use 3D assets that are large enough for it to become worrisome when hundreds of bots are requesting them day and night. Not any more, lol.
krizhanovsky•3mo ago
That's good advice, thank you.

In our approach we do our best not to affect the user experience. E.g. consider a company website with a blog. The company does its best to engage a larger audience with its blog, products, whatever. I guess a good part of that audience would be lost if the website required authentication on their first visit.

However, for returning, and especially regular, clients I think that is a really simple and good solution.