
Lumo by Proton Mail

https://lumo.proton.me/
1•doener•3m ago•0 comments

Cqdam Free – single-binary in-memory KV store (RESP subset), ~2.5M ops/sec

https://github.com/LaminarInstruments/Laminar-Flow-In-Memory-Key-Value-Store
1•LaminarBender•3m ago•1 comment

Human activity may be locking the Southwest into permanent drought

https://theconversation.com/climate-models-reveal-how-human-activity-may-be-locking-the-southwest...
2•PaulHoule•3m ago•0 comments

Trump calls video of bag being thrown from White House an 'AI-generated' fake

https://www.cnn.com/2025/09/02/politics/white-house-black-bag-video-mystery
3•frays•6m ago•2 comments

Single File No-Build Blog with Modern JavaScript

https://single-page-blog.ben-ca1.workers.dev
1•b_e_n_t_o_n•7m ago•1 comment

The World War Two bomber that cost more than the atomic bomb

https://www.bbc.com/future/article/20250829-the-bomber-that-became-ww2s-most-expensive-weapon
1•pseudolus•10m ago•0 comments

MUJI – Bucket

https://www.relvaokellermann.com/work/bucket/
1•mooreds•10m ago•0 comments

Electrical stimulation can reprogram immune system to heal the body faster

https://medicalxpress.com/news/2025-09-electrical-reprogram-immune-body-faster.html
2•birriel•10m ago•0 comments

Why I joined Mixpanel as CEO: A new era in analytics

https://mixpanel.com/blog/jen-taylor-ceo/
1•enahs-sf•11m ago•0 comments

Is the McDonald's ice cream machine broken?

https://mcbroken.com/
1•mooreds•11m ago•1 comment

Cherokee, Osage, and the Indigenous North American Type Collection

https://www.typotheque.com/blog/cherokee-osage-and-the-indigenous-north-american-type-collection
1•mooreds•11m ago•0 comments

Chinese cluster now top innovation hotspot: UN

https://www.yahoo.com/news/articles/chinese-cluster-now-worlds-top-070155491.html
1•Teever•12m ago•0 comments

How Europe's deforestation law could change the global coffee trade

https://theconversation.com/how-europes-deforestation-law-could-change-the-global-coffee-trade-26...
1•bikenaga•14m ago•0 comments

Summarize Hacker News with Hono and Cloudflare Tutorial

https://www.youtube.com/watch?v=Wuo7OOaSgmI
1•fallinditch•15m ago•0 comments

Lightcap: A Symbolic Mirror Forged in Algebra

https://lightcapai.medium.com/lightcap-a-symbolic-mirror-forged-in-algebra-80685044c345
1•WASDAai•17m ago•1 comment

Why Radiology AI Didn't Work and What Comes Next

https://www.outofpocket.health/p/why-radiology-ai-didnt-work-and-what-comes-next
1•randycupertino•25m ago•2 comments

Augmented Coding – A Pattern Language

https://gregorriegler.com/2025/07/12/augmented-coding-pattern-language.html
1•gregorriegler•33m ago•0 comments

Microsoft Tech Community Is Down

https://techcommunity.microsoft.com/
1•gpi•33m ago•0 comments

Are we living in a stupidogenic society?

https://substack.nomoremarking.com/p/are-we-living-in-a-stupidogenic-society
1•jger15•36m ago•1 comment

Japan Post Bank to issue yen deposit-backed digital currency in fiscal 2026

https://www.japantimes.co.jp/business/2025/09/02/companies/japan-post-bank-digital-currency/
4•mikhael•37m ago•0 comments

WhatsApp patches vulnerability exploited in zero-day attacks

https://www.bleepingcomputer.com/news/security/whatsapp-patches-vulnerability-exploited-in-zero-d...
2•akyuu•38m ago•0 comments

Prometheus just changed energy and fuels forever

https://prometheusfuels.com/news/prometheus-just-changed-energy-and-fuels-forever
1•modernerd•39m ago•0 comments

Process knowledge is crucial to economic development

https://www.programmablemutter.com/p/process-knowledge-is-crucial-to-economic
2•bookofjoe•41m ago•0 comments

Jevons' Paradox is good sometimes

https://andymasley.substack.com/p/jevons-paradox-isnt-always-bad
2•jger15•43m ago•0 comments

Making the Most of a Dumb Fax Switcher Box

http://rachelbythebay.com/w/2025/09/01/fax/
1•gjf•44m ago•0 comments

We send AI requests on every keystroke

https://cursor.com/en/security
2•sensahin•44m ago•1 comment

Stop Hosting Boring Tech Events

https://dx.tips/hosting
1•swyx•45m ago•0 comments

Sharks may be losing deadly teeth to ocean acidification

https://www.frontiersin.org/news/2025/08/27/ocean-acidity-sharks-tooth-damage
1•geox•48m ago•0 comments

YouTube now flagging accounts on family plans that aren't in the same household

https://www.androidpolice.com/youtubes-latest-crackdown-may-affect-your-family-plan/
2•josephcsible•48m ago•0 comments

File protection: anonymous, open source and fast

2•Gravyt1•49m ago•1 comment

AI web crawlers are destroying websites in their never-ending content hunger

https://www.theregister.com/2025/08/29/ai_web_crawlers_are_destroying/
163•CrankyBear•6h ago

Comments

k310•5h ago
> Cloud services company Fastly agrees. It reports that 80% of all AI bot traffic comes from AI data fetcher bots.

No kidding. An increasing number of sites are putting up CAPTCHAs.

Problem? CAPTCHAs are annoying, they're a 50-times-a-day eye exam, and

> Google's reCAPTCHA is not only useless, it's also basically spyware [0]

> reCAPTCHA v3's checkbox test doesn't stop bots and tracks user data

[0] https://www.techspot.com/news/106717-google-recaptcha-not-on...

ronsor•5h ago
I've just started clicking away from pages that are full of CAPTCHAs. Ironically this has resulted in me using AI more.
marginalia_nu•4h ago
Webmasters are really kinda stuck between a rock and a hard place with this one.

At least with what I'm doing, poorly configured or outright malicious bots consume about 5000x the resources of human visitors, so having no bot mitigation means I've basically given up and decided I should try to make it as a vegetable farmer instead of doing stuff online.

Bot mitigation in practice is a tradeoff between what's enough of an obstacle to keep most of the bots out, while at the same time not annoying the users so much they leave.

I think right now Anubis is one of the less bad options. Some users are annoyed by it (and it is annoying), but it's less annoying than clicking fire hydrants 35 times, and as long as you configure it right it seems to keep most of the bots out, or at least drives them to behave in a more identifiable manner.

Probably won't last forever, but I don't know what would, besides going full ancap and doing crypto microtransactions for each page request. That would unfortunately drive off not only the bots, but the human visitors as well.

timpera•4h ago
Anubis is extremely slow on low-end devices, it often takes >30 seconds to complete. Users deserve better, but I guess it's still a better experience than reCaptcha or Cloudflare.
danudey•1h ago
Well, >30 seconds to complete Anubis is still better than >30 seconds to complete every single page load because AI bots are overloading the servers.
superkuh•4h ago
And because companies like Fastly only measure things via javascript execution and assume everything that doesn't execute JS correctly is a bot, that 80% contains a whole bunch of human persons.
ccgreg•4h ago
The Fastly report[1] has a couple of great quotes that mention Common Crawl's CCBot:

> Our observations also highlight the vital role of open data initiatives like Common Crawl. Unlike commercial crawlers, Common Crawl makes its data freely available to the public, helping create a more inclusive ecosystem for AI research and development. With coverage across 63% of the unique websites crawled by AI bots, substantially higher than most commercial alternatives, it plays a pivotal role in democratizing access to large-scale web data. This open-access model empowers a broader community of researchers and developers to train and improve AI models, fostering more diverse and widespread innovation in the field.

...

> What’s notable is that the top four crawlers (Meta, Google, OpenAI and Claude) seem to prefer Commerce websites. Common Crawl’s CCBot, whose open data set is widely used, has a balanced preference for Commerce, Media & Entertainment and High Tech sectors. Its commercial equivalents Timpibot and Diffbot seem to have a high preference for Media & Entertainment, perhaps to complement what’s available through Common Crawl.

There's also one final number that isn't in the Fastly report but is in the El Reg article[2]:

> The Common Crawl Project, which slurps websites to include in a free public dataset designed to prevent duplication of effort and traffic multiplication at the heart of the crawler problem, was a surprisingly-low 0.21 percent.

1: https://learn.fastly.com/rs/025-XKO-469/images/Fastly-Threat...

2: https://www.theregister.com/2025/08/21/ai_crawler_traffic/

nextworddev•5h ago
“I drink your milkshake” type sh
onetokeoverthe•5h ago
captchas have eliminated 25 years of browsing speed progress.
sidewndr46•4h ago
And so the mantle has been passed from the Javascript developer, to the Turing test author.
throw_m239339•5h ago
My cousin manages a dozen mid-sized informational websites and communities; his former hosting provider kicked him out because he refused to pay the insane bills that resulted from AI bots literally DDoS-ing his sites...

He unfortunately had no choice but to put most of the content behind a login wall (you can only see parts of the articles/forum posts when logged out), and at this point he is strongly considering just hard-paywalling some content... We're talking about someone who in good faith provided partial data dumps of the content, freely available for these companies to download. But caching / ETags? None of these AI companies, hiring "the best and the brightest", have ever heard of that. Rate limiting? LOL, what is that?

This is nuts, these AI companies are ruining the web.

pluc•5h ago
People who didn't respect basic ethics, legal copyrights and common sense aren't gonna stop because they're a nuisance. They'll keep at it until they've ruined what birthed them so they may replace it. Fuck AI.
krunck•5h ago
I just block them by User Agent string[1]. The rest that fake the UA get clobbered by rate limiting[2] on the web server. Not perfect, but our site is not getting hammered any more.

[1] https://perishablepress.com/ultimate-ai-block-list/

[2] https://github.com/jzdziarski/mod_evasive
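
For anyone not on Apache, the same user-agent filtering is a few lines anywhere in the stack. A minimal sketch as Python WSGI middleware; the agent patterns are illustrative assumptions, not the contents of the linked block list:

    # A rough WSGI equivalent of user-agent blocking: refuse requests whose
    # User-Agent matches a blocklist. The patterns are illustrative only.
    import re

    BLOCKED_AGENTS = re.compile(
        r"GPTBot|ClaudeBot|CCBot|Bytespider|meta-externalagent|PerplexityBot",
        re.IGNORECASE,
    )

    class BlockAIBots:
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            ua = environ.get("HTTP_USER_AGENT", "")
            if BLOCKED_AGENTS.search(ua):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"Forbidden\n"]
            return self.app(environ, start_response)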

braden_e•5h ago
There is a very large-scale crawler that uses random valid user agents and a staggeringly large pool of IPs. I first noticed it because a lot of traffic was coming from Brazil and "HostRoyale" (ASN 203020). They send only a few requests a day from each IP, so rate limiting is not useful.

I run a honeypot that generates URLs tagged with the source IP, so I am pretty confident it is all one bot; in the past 48 hours I have had over 200,000 IPs hit the honeypot.

I am pretty sure this is ByteDance: they occasionally hit these tagged honeypot URLs with their normal user agent and their usual .sg datacenter.
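
The tagging trick described above is simple to reproduce. A minimal sketch of one way to do it; the path scheme, secret, and HMAC signing are assumptions, not necessarily how this particular honeypot works:

    # Emit crawler-trap links that encode (and sign) the visitor's IP, so later
    # hits on those links can be attributed to the crawler that first saw them.
    import base64, hashlib, hmac

    SECRET = b"rotate-me-regularly"

    def trap_url(ip):
        mac = hmac.new(SECRET, ip.encode(), hashlib.sha256).hexdigest()[:16]
        token = base64.urlsafe_b64encode(ip.encode()).decode().rstrip("=")
        return f"/archive/{token}-{mac}.html"   # only ever shown to this visitor

    def ip_from_trap(path):
        try:
            token, mac = path.removeprefix("/archive/").removesuffix(".html").rsplit("-", 1)
            ip = base64.urlsafe_b64decode(token + "=" * (-len(token) % 4)).decode()
        except Exception:
            return None
        expected = hmac.new(SECRET, ip.encode(), hashlib.sha256).hexdigest()[:16]
        return ip if hmac.compare_digest(mac, expected) else None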

kjkjadksj•4h ago
I wonder if you could implement a dummy rate limit? Half the time you are rate limited randomly. A real user will think nothing of it and refresh the page.
ronsor•4h ago
That will irritate real users half the time while the bots won't care.
candlemas•3h ago
My site has also recently been getting massively hit by Brazilian IPs. It lasts for a day or two, even if they are being blocked.
dizlexic•3h ago
I've written my own bots that do exactly this. My reason was mainly to avoid detection, so as part of that I also severely throttled my requests and hit the target at random intervals. In other words, I wasn't trying to abuse them. I just didn't want them to notice me.

TLDR it's trivial to send fake info when you're the one who controls the info.

lysace•5h ago
From my POV (SaaS, 1k domains): the most destructive/DDoS/idiotic brute-force crawling peaked around half a year ago.
giancarlostoro•5h ago
I'm not sure why they don't just cache the websites and avoid going back for at least 24 hours, especially in the case of most sites. I swear it's like we're re-learning software engineering basics with LLMs / AI and it kills me.
kpw94•4h ago
Yeah, the landscape when there were many more search engines must have been exactly the same...

I think the eng teams behind those were just more competent / more frugal with their processing.

And since there wasn't any AWS equivalent, they had to be better citizens, since banning their well-known IP ranges was trivial for the crawled websites.

ccgreg•4h ago
The blekko search engine index was only 1 billion pages, compared to Common Crawl Foundation's crawl of 3 billion webpages per month.
acdha•3h ago
Bandwidth cost more then, so the early search engines had an incentive not to massively increase their own costs if nothing else.
danudey•1h ago
It's worth noting that search engines back then (and now? except the AI ones) generally tended to follow robots.txt, which meant that if there were heavy areas of your site that you didn't want them to index you could filter them out and let them just follow static pages. You could block off all of /cgi-bin/ for example and then they would never be hitting your CGI scripts - useful if your guestbook software wrote out static files to be served, for example.

The search engines were also limited in resources, so they were judicious about what they fetched, when, and how often; optimizing their own crawlers saved them money, and in return it also saved the websites too. Even with a hundred crawlers actively indexing your site, they weren't going to index it more than, say, once a day, and 100 requests in a day isn't really that much even back then.

Now, companies are pumping billions of dollars into AI; budgets are infinite, limits are bypassed, and norms are ignored. If the company thinks it can benefit from indexing your site 30 times a minute, then it will; but even if it doesn't benefit, there's no reason for them to stop doing so, because it doesn't cost them anything. They cannot risk being anything other than up-to-date, because if users are coming to you asking about current events and why Space Force is moving to Alabama, and your AI doesn't know but someone else's does, then you're behind the times.

So in the interests of maximizing short-term profit above all else - which is the only thing AI companies are doing in any way shape or form - they may as well scrape every URL on your site once per second, because it doesn't cost them anything and they don't care if you go bankrupt and shut down.
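
For contrast, honoring robots.txt the way those older crawlers did takes only the standard library. A minimal sketch, with example.com, the agent name, and the 5-second fallback delay as placeholders:

    # What "following robots.txt" looks like with nothing but the stdlib.
    import time
    import urllib.robotparser
    from urllib.request import urlopen

    rp = urllib.robotparser.RobotFileParser("https://example.com/robots.txt")
    rp.read()

    def polite_fetch(url, agent="ExampleCrawler"):
        if not rp.can_fetch(agent, url):
            return None                          # disallowed; respect the site
        time.sleep(rp.crawl_delay(agent) or 5)   # honor Crawl-delay, default 5s
        with urlopen(url) as resp:
            return resp.read()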

add-sub-mul-div•4h ago
The people at the forefront of creating the shortcut machine are taking shortcuts. We're on a slow march towards the death of attention to detail.
giancarlostoro•4h ago
Slow march? It feels like we've been on that train a while, honestly. It's embarrassing. We don't even have fully native GUIs; they're all browser wrappers.
jsheard•4h ago
Once the crawler goes up, who cares what it brings down?

That's not my department! says Crawler von Braun

zwirbl•4h ago
That's gold. I just stumbled on the original a week ago.
robwwilliams•4h ago
This! Today I asked Claude Sonnet to read the Wikipedia article on “inference” and answer a few of my questions.

Sonnet responded: “Sorry, I have no access.” Then I asked it why and it was flummoxed and confused. I asked why Anthropic did not simply maintain mirrors of Wikipedia in XX different languages and run a cron job every week.

Still no cogent answer. Pathetic. Very much an Anthropic blind spot, to the point of being at least amoral and even immoral.

Do the big AI corporations that have profited greatly from the Wikimedia Foundation give anything back? Or are they just large internet bloodsuckers without ethics?

Dario and Sam et al.: Contribute to the welfare of your own blood donors.

lawlessone•4h ago
you can even torrent all of Wikipedia, and a whole bunch of other wikis.

Would be great if they did that and maybe seeded it too.

giancarlostoro•4h ago
> Sonnet responded: “Sorry, I have no access.” Then I asked it why and it was flummoxed and confused. I asked why Anthropic did not simply maintain mirrors of Wikipedia in XX different languages and run a cron job every week.

Even worse when you consider that you can download all of Wikipedia for offline use...

8organicbits•33m ago
> Then I asked it why

I'm still learning the landscape of LLMs, but do we expect an LLM to be able to answer that? I didn't think they had meta information about their own operation.

gowld•4h ago
Who says they don't?
numpad0•4h ago
imo when it kills somebody it justifies extreme means, such as feeding them fabricated truths like LLM-generated and artificially corrupted text /s
immibis•4h ago
It's because they don't give a shit whether the product works properly or not. By blocking AI scraping, sites are forcing AI companies to scrape faster before they're blocked. And faster means sloppier.
lovich•3h ago
There’s also the point that if the website is down after you scraped it, then that's one more site's worth of data you've scraped that your competition now can't.
verdverm•5h ago
At the same time, there's a lot of HN pushback against new solutions like Signed Agents by CF:

https://news.ycombinator.com/item?id=45066258

fruitworks•4h ago
Because it's a bad solution. The core problem is that the internet is vulnerable to DDoS attacks and the web has no native sybil resistance mechanism.

Cloudflare's solution to every problem is to allow them to control more of the internet. What happens when they have enough control to do whatever they want? They could charge any price they want.

add-sub-mul-div•4h ago
I'm more afraid of the orgs that are gaining enough control of knowledge, cognition, and creativity that they'll be able to charge any price for them once they've trained us out of practicing them ourselves.
fruitworks•4h ago
There Is No Moat
wongarsu•3h ago
Until crawling without being on people's whitelist becomes sufficiently difficult
marginalia_nu•4h ago
The idea itself has merit, even if the implementation is questionable.

Giving bots a cryptographic identity would allow good bots to meaningfully have skin in the game and crawl with their reputation at stake. It's not a complete solution, but could be part of one. Though you can likely get the good parts from HTTP request signing alone, Cloudflare's additions to that seem fairly extraneous.

I honestly don't know what is a good solution. The status quo is certainly completely untenable. If we keep going like we are now, there won't be a web left to protect in a few years. It's worth keeping in mind that there's an opportunity cost, and even a bad solution may be preferable to no solution at all.

... I say, while operating an independent web crawler.

fruitworks•4h ago
I think the solution is some sort of PoW gateway like people are setting up now. Or a micropayments system where each page request costs a fraction of a penny.

You could combine that with some sort of IPFS/BitTorrent-like system where you allow others to rehost your static content, indexed by the Merkle hash of the content. That would allow users to donate bandwidth.

I really don't like the idea that you can get out of this by surveilling user agents more or distinguishing between "good" and "bad" bots, which is a massive social problem.
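
The proof-of-work gateways mentioned here (Anubis and friends) boil down to a small challenge/verify loop. A minimal sketch; the difficulty and nonce format are assumptions, not any particular tool's parameters:

    # Server issues a random challenge, the client grinds nonces until
    # SHA-256(challenge:nonce) has N leading zero bits, the server verifies
    # with a single hash.
    import hashlib, itertools, os

    DIFFICULTY_BITS = 20

    def make_challenge():
        return os.urandom(16).hex()

    def leading_zero_bits(digest):
        bits = 0
        for byte in digest:
            if byte == 0:
                bits += 8
            else:
                bits += 8 - byte.bit_length()
                break
        return bits

    def solve(challenge):                        # client side: costs CPU
        for nonce in itertools.count():
            d = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
            if leading_zero_bits(d) >= DIFFICULTY_BITS:
                return nonce

    def verify(challenge, nonce):                # server side: one hash
        d = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        return leading_zero_bits(d) >= DIFFICULTY_BITS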

ChrisArchitect•5h ago
Blogspam just linking to a bunch of prior reports and posts from earlier in the year.

Some ongoing recent discussion:

Cloudflare Radar: AI Insights

https://news.ycombinator.com/item?id=45093090

The age of agents: cryptographically recognizing agent traffic

https://news.ycombinator.com/item?id=45055452

That Perplexity one:

Perplexity is using stealth, undeclared crawlers to evade no-crawl directives

https://news.ycombinator.com/item?id=44785636

AI crawlers, fetchers are blowing up websites; Meta, OpenAI are worst offenders

https://news.ycombinator.com/item?id=44971487

tmaly•4h ago
What if content providers reduced the 30k-word page for a recipe down to just the actual recipe? Would this reduce the amount of data these bots are pulling down?

I don't see this slowing down. If websites don't adapt to the AI deep search reality, the bot will just go somewhere else. People don't want to read these massive long form pages geared at outdated Google SEO techniques.

Fade_Dance•4h ago
You're painting this as a problem that is somehow related to overly long-form, text-based web pages. It isn't. If you host a local cleaning company site, or a game walkthrough site, or a roleplaying forum, the bots will flood the gates all the same.

You are right that it doesn't look like it is slowing down, but the developing result of this will not be people posting a shorter recipe, it will be a further contraction of the public facing, open internet.

hombre_fatal•4h ago
Kinda funny to see someone casually mention a roleplaying forum. That's what I run and it got 10x traffic overnight from AI bots.

Made it when I was a teenager and got stuck running it the rest of my life.

Of course, the bots go super deep into the site and bust your cache.

lawlessone•4h ago
you'd be within your rights to serve the bots fake gibberish data.

Maybe they'll crawl less when it starts damaging models.

Analemma_•3h ago
That's only a stopgap measure, eventually they'll realize what's happening and use distributed IPs and fake user agents to look like normal users. The Tencent and Bytedance scrapers are already doing this.
gowld•4h ago
Text content at the sub-page level is approximately 0% of web traffic. It's a non-issue.
bdefore•4h ago
I created and maintain ProtonDB, a popular Linux gaming resource. I don't do ads, just pay the bills from some Patreon donations.

It's a statically generated React site I deploy on Netlify. About ten days ago I started incurring 30GB of data per day from user agents indicating they're using Prerender. At this pace, almost all of that will push me past the 1TB allotted for my plan, so I'm looking at an extra ~$500 USD a month for bandwidth boosters.

I'm gonna try the robots.txt options, but I'm doubtful this will be effective in the long run. Many other options aren't available if I want to continue using a SaaS like Netlify.

My initial thoughts are to either move to Cloudflare Pages/Workers where bandwidth is unlimited, or make an edge function that parses the user agent and hope it's effective enough. That'd be about $60 in edge function invocations.

I've got so many better things to do than play whack-a-mole on user agents and, when failing, pay this scraping ransom.

Can I just say fuck all y'all AI harvesters? This is a popular free service that helps get people off of their Microsoft dependency and live their lives on a libre operating system. You wanna leech on that? Fine, download the data dumps I already offer on an ODbL license instead of making me wonder why I fucking bother.

gjsman-1000•4h ago
Your mistake is openly suggesting on HN that you're going to use Cloudflare, increasing the centralization of the internet and contributing to their attestation schemes, while society forces you to be a victim of the tragedy of the commons.
bdefore•4h ago
Please believe me that it is not a step I want to take.
gjsman-1000•4h ago
You don't need to apologize. HN needs to get its head out of the sand: not everything is a tragedy of the commons, there's a reason centralization exists, and the decentralized internet as it is now comes with serious drawbacks. We're never going to overcome the popularity of big tech if we can't be honest about the problems they solve.

Also, sue me, the cathedral has defeated the bazaar. This was predictable, as the bazaar is a bunch of stonecutters competing with each other to sell the best stone for building the cathedral with. We reinvented the farmer's market, and thought that if all the farmers united, they could take down Walmart. It's never happening.

hombre_fatal•2h ago
In this context, the farmers are trying to deal with rampant abuse that is inconceivable to handle on an individual level.

It's not clear to me what taking down Cloudflare/Walmart means in this context. Nor how banding together wouldn't just incur the very centralization that is presumably so bad it must be taken down.

azdle•3h ago
Another option that wouldn't contribute to more centralization might be Neocities. They give you 3 TB for $5/month. That seems to be _the_ limit though. The dude runs his own CDN just for Neocities, so it's not just reselling Cloudflare or something.

P.S. Thank you for ProtonDB, it has been so incredibly helpful for getting some older games running.

immibis•4h ago
$500 for exceeding 1TB? The problem here isn't the crawlers, it's your price-gouging, extortionate hosting plan. Pick your favourite $5/month VPS platform - I suggest Hetzner with its 20TB limit (if their KYC process lets you in) or Digital Ocean if not (with only 1TB but overage is only a few bucks extra). Even freaking AWS, known for extremely high prices, is cheaper than that (but still too expensive so don't use it).
bdefore•3h ago
To clarify the math: Netlify bills $50 for each 100GB over the Pro plan limit of 1TB. Which is the barrel I'm looking down just this month, before others get the same idea. So yes, I'm squeezed on both sides unless I put the work in to rehost.
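
For concreteness, a back-of-envelope check of that figure, assuming the ~30GB/day of crawler traffic mentioned above all lands beyond the plan's 1TB and that billing is per started 100GB block:

    # Overage estimate from the numbers in this thread.
    crawler_gb_per_day = 30
    overage_gb = crawler_gb_per_day * 30       # ~900 GB beyond the 1 TB plan
    blocks = -(-overage_gb // 100)             # 9 started 100 GB blocks
    print(blocks * 50)                         # -> 450 USD/month, close to the ~$500 cited
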
KyleTheDev•3h ago
> The problem here isn't the crawlers,

One of the worst takes I've seen. Yes, that's expensive, but the individuals doing insane amounts of unnecessary scraping are the problem. Let's not act like this isn't the case.

snerbles•3h ago
> The problem here isn't the crawlers, it's your price-gouging, extortionate hosting plan.

No, it's both.

The crawlers are lazy, apparently have no caching, and there is no immediately obvious way to instruct/force those crawlers to grab pages in a bandwidth-efficient manner. That being said, I would not be surprised if someone here will smugly contradict me with instructions on how to do just that.

In the near term, if I were hosting such a site I'd be looking into slimming down every byte I could manage, using fingerprinting to serve slim pages to the bots and exploring alternative hosting/CDN options.

alright2565•3h ago
I'm not sure what Netlify is doing, but the heaviest assets on your website are your JavaScript sources. Have you considered hosting those on GitHub Pages, which has a generous free tier?

The images are from steamcdn-a.akamaihd.net, which I assume is already being hosted by a third-party (Steam)

bdefore•3h ago
I'd rather not involve Microsoft but I recognize there are other options. It is additional work/complexity I'll probably have to take on.
beckthompson•3h ago
Proton DB is an amazing website that I use all the time. Thank you for maintaining it!
bdefore•3h ago
Thanks. Appreciate your support, and very glad it brings you value.
pbowyer•3h ago
Do you have the ability to block ASNs? I help sysadmin a DIY building forum, and we cut 80% of the load from our server by blocking all Alibaba IPs in ASN 45102. Singapore was sending the most bot traffic.
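
A minimal sketch of that kind of ASN-level block; the prefix file is an assumption, and in practice it would be exported from a BGP or whois source for AS45102 and refreshed regularly:

    # Reject requests whose source address falls inside prefixes announced by
    # a blocked ASN.
    import ipaddress

    def load_prefixes(path="as45102-prefixes.txt"):
        with open(path) as f:
            return [ipaddress.ip_network(line.strip()) for line in f if line.strip()]

    BLOCKED_PREFIXES = load_prefixes()

    def is_blocked(remote_addr):
        ip = ipaddress.ip_address(remote_addr)
        return any(ip in net for net in BLOCKED_PREFIXES)
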
dlcarrier•3h ago
Please use a default deny on the user agent. It can block a lot of accessibility tools and makes privacy difficult.
danudey•1h ago
Did you mean to say don't use a default deny?
delduca•2h ago
Go for Cloudflare Pages.
marvinblum•2h ago
Thank you for making ProtonDB! I use it a ton <3
thaumaturgy•3h ago
People outside of a really small sysadmin niche really don't grasp the scale of this problem.

I run a small-but-growing boutique hosting infrastructure for agency clients. The AI bot crawler problem recently got severe enough that I couldn't just ignore it anymore.

I'm stuck between, on one end, crawlers from companies that absolutely have the engineering talent and resources to do things right but still don't, and on the other end, resource-heavy WordPress installations where the client was told it was a build-it-and-forget-it kind of thing. I can't police their robots.txt files; meanwhile, each page load can take a full 1s round trip (most of that spent in MySQL), there are about 6 different pretty aggressive AI bots, and occasionally they'll get stuck on some site's product variants or categories pages and start hitting it at a 1r/s rate.

There's an invisible caching layer that does a pretty nice job with images and the like, so it's not really a bandwidth problem. The bots aren't even requesting images and other page resources very often; they're just doing tons and tons of page requests, and each of those is tying up a DB somewhere.

Cumulatively, it is close to having a site get Slashdotted every single day.

I finally started filtering out most bot and crawler traffic at nginx, before it gets passed off to a WP container. I spent a fair bit of time sampling traffic from logs, and at a rough guess, I'd say maybe 5% of web traffic is currently coming from actual humans. It's insane.

I've just wrapped up the first round of work for this problem, but that's just buying a little time. Now, I've gotta put together an IP intelligence system, because clearly these companies aren't gonna take "403" for an answer.

gjsman-1000•3h ago
I might write a blog post on this, but I seriously believe we collectively need to rethink The Cathedral and the Bazaar.

The Cathedral won. Full stop. Everyone, more or less, is just a stonecutter, competing to sell the best stone (i.e. content, libraries, source code, tooling) for building the cathedrals with. If the world is a farmer's market, we're shocked that the farmer's market is not defeating Walmart, and never will.

People want Cathedrals; not Bazaars. Being a Bazaar vendor is a race to the bottom. This is not the Cathedral exploiting a "tragedy of the commons," it's intrinsic to decentralization as a whole. The Bazaar feeds the Cathedral, just as the farmers feed Walmart, just as independent websites feed Claude, a food chain and not an aberration.

thaumaturgy•3h ago
The Cathedral and the Bazaar meets The Tragedy of the Commons.

Let's say there's two competing options in some market. One option is fully commercialized, the other option holds to open-source ideals (whatever those are).

The commercial option attracts investors, because investors like money. The money attracts engineers, because at some point "hacker" came to mean "comfortable lifestyle in a high COL area". The commercial option gets all the resources, it gets a marketing team, and it captures 75% of the market because most people will happily pay a few dollars for something they don't have to understand.

The open source option attracts a few enthusiasts (maybe; or, often, just one), who labor at it in whatever spare time they can scrape together. Because it's free, other commercial entities use and rely on the open source thing, as long as it continues to be maintained in something that, if you squint, resembles slave labor. The open source option is always a bit harder to use, with fewer features, but it appeals to the 25% of the market that cares about things like privacy or ownership or self-determination.

So, one conclusion is "people want Cathedrals", but another conclusion could be that all of our society's incentives are aligned towards Cathedrals.

It would be insane, after all, to not pursue wealth just because of some personal ideals.

gjsman-1000•3h ago
The answer is quite simply that where complexity exceeds the regular person's interest, there will be a cathedral.

It's not about capitalism or incentives. Humans have cognitive limits and technology is very low on the list for most. They want someone else to handle complexity so they can focus on their lives. Medieval guilds, religious hierarchies, tribal councils, your distribution's package repository, it's all cathedrals. Humans have always delegated complexity to trusted authorities.

The 25% who 'care about privacy or ownership' mostly just say they care. When actually faced with configuring their own email server or compiling their own kernel, 24% of that 25% immediately choose the cathedral. You know the type, the people who attend FOSDEM carrying MacBooks. The incentives don't create the demand for cathedrals, but respond to it. Even in a post-scarcity commune, someone would emerge to handle the complex stuff while everyone else gratefully lets them.

The bazaar doesn't lose because of capitalism. It loses because most humans, given the choice between understanding something complex or trusting someone else to handle it, will choose trust every time. Not just trust, but CYA (I'm not responsible for something I don't fully understand) every time. Why do you think AI is successful? I'd rather even trust a blathering robot than myself. It turns out, people like being told what to do on things they don't care about.

rurp•2h ago
This is pretty much a more eloquent version of what I was about to write. It's dangerous to take a completely results oriented view of a situation where the commercial incentives are so absurdly lopsided. The cathedral owners spend more than the GDP of most countries every year on various carrots and sticks to maintain something like the current ecosystem. I think the current world is far from ideal for most people, but it's hard to compete against the coordinated efforts of the richest and most powerful entities in the world.
AnthonyMouse•1h ago
> The Bazaar feeds the Cathedral

Isn't this the licensing problem? Berkeley releases BSD so that everyone can use it, people do years of work to make it passable, Apple takes it to make macOS and iOS because the license allows them to, and then they have both the community's work and their own work, so everyone uses that.

The Linux kernel is GPLv2, not GPLv3, so vendors distribute binary blob drivers/firmware with their hardware and then the hardware becomes unusable as soon as they stop publishing new versions because then to use the hardware you're stuck with an old kernel with known security vulnerabilities, or they lock the boot loader because v2 lacks the anti-Tivoization clause in v3.

If you use a license that lets the cathedral close off the community's work then you lose, but what if you don't do that?

jazzyjackson•3h ago
Couldn't it be addressed in front of the application with a fail2ban rule, some kind of 429 Too Many Requests quota on a per session basis? Or are the crawlers anonymizing themselves / coming from different IP addresses?
sc68cal•3h ago
They are spreading themselves across lots of different IP blocks
thaumaturgy•3h ago
Yeah, that's where IP intelligence comes in. They're using pretty big IP pools, so, either you're manually adding individual IPs to a list all day (and updating that list as ASNs get continuously shuffled around), or you've got a process in the background that essentially does whois lookups (and caches them, so you aren't also being abusive), parses the metadata returned, and decides whether that request is "okay" or not.

The classic 80/20 rule applies. You can catch about 80% of lazy crawler activity pretty easily with something like this, but the remaining 20% will require a lot more effort. You start encountering edge cases, like crawlers that use AWS for their crawling activity, but also one of your customers somewhere is syncing their WooCommerce orders to their in-house ERP system via a process that also runs on AWS.
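
A minimal sketch of that background lookup, with caching so the check itself isn't abusive; the whois server, keywords, and fail-open policy are assumptions for illustration:

    # Cached whois check that gates requests on the owning org.
    import socket
    from functools import lru_cache

    BAD_ORG_KEYWORDS = ("alibaba", "tencent", "bytedance", "hostroyale")

    @lru_cache(maxsize=65536)              # cache lookups so we aren't abusive ourselves
    def whois(ip, server="whois.arin.net"):
        with socket.create_connection((server, 43), timeout=5) as sock:
            sock.sendall((ip + "\r\n").encode())
            chunks = []
            while data := sock.recv(4096):
                chunks.append(data)
        return b"".join(chunks).decode(errors="replace").lower()

    def request_ok(ip):
        try:
            record = whois(ip)
        except OSError:
            return True                    # fail open if the lookup itself breaks
        return not any(word in record for word in BAD_ORG_KEYWORDS)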

loloquwowndueo•3h ago
It's called Anubis.
dylan604•2h ago
Isn't that the one that shows anime characters? Or is Anubis the "professional" version that doesn't show anime chars?
greazy•1h ago
Yes, that's Anubis. And yes, you pay to not show the anime cat girl.
jay_kyburz•2h ago
This is probably a dumb question, but at what point do we put a simple CAPTCHA in front of every new user that arrives at a site, then give them a cookie and start tracking requests per second from that user?

I guess it's a kind of soft login required for every session?

update: you could bake it into the cookie approval dialog (joke!)
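
A minimal sketch of that soft-login idea: hand out a signed cookie after the challenge, then count requests per cookie rather than per IP. The secret, window, and limits are assumptions:

    import hashlib, hmac, os, time
    from collections import defaultdict

    SECRET = os.urandom(32)
    WINDOW_SECONDS = 10
    MAX_REQUESTS_PER_WINDOW = 30
    _hits = defaultdict(list)

    def issue_session():                    # call after the CAPTCHA succeeds
        sid = os.urandom(16).hex()
        sig = hmac.new(SECRET, sid.encode(), hashlib.sha256).hexdigest()
        return f"{sid}.{sig}"               # value for the session cookie

    def check_request(cookie):              # True = serve, False = 429 or re-challenge
        try:
            sid, sig = cookie.split(".", 1)
        except (AttributeError, ValueError):
            return False
        expected = hmac.new(SECRET, sid.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return False
        now = time.time()
        recent = [t for t in _hits[sid] if now - t < WINDOW_SECONDS]
        recent.append(now)
        _hits[sid] = recent
        return len(recent) <= MAX_REQUESTS_PER_WINDOW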

thaumaturgy•1h ago
The post-AI web is already a huge mess. I'd prefer solutions that don't make it worse.

I myself browse with cookies off, sort of, most of the time, and the number of times per day that I have to click a Cloudflare checkbox or help Google classify objects from its datasets is nuts.

dragonwriter•1h ago
> The post-AI web is already a huge mess.

You mean the peri-AI web? Or is AI already done and over and no longer exerting an influence?

AnthonyMouse•1h ago
> meanwhile, each page load can take a full 1s round trip (most of that spent in MySQL)

Can't these responses still be cached by a reverse proxy as long as the user isn't logged in, which the bots presumably aren't?

thaumaturgy•23m ago
That would be nice! This doesn't work reliably enough for WP sites. Whether it's devs making changes and testing them in prod, or dynamic content loaded in identical URLs, my past attempts to cache html have caused questions and complaints. The current caching strategy hits a nice balance and hasn't bothered anyone, with the significant downside that it's vulnerable to bot traffic.

(If you choose to read this as, "WordPress is awful, don't use WordPress", I won't argue with you.)

everforward•20m ago
They're presumably not crawling the same page repeatedly, and caching the pages long enough to persist between crawls would require careful thinking and consultation with clients (e.g. if they want their blog posts to show up quickly, or an "on sale" banner or etc).

It'd probably be easier to come at it from the other side and throw more resources at the DB or clean it up. I can't imagine what's going on that it's spending a full second on DB queries, but I also don't really use WP.
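
The "cache unless logged in" rule AnthonyMouse suggests amounts to a small decision function at the proxy. A sketch, keying on the wordpress_logged_in_* cookies WordPress sets; exactly which other cookies should bypass the cache is a per-site judgment call:

    # Cache anonymous GET traffic only; bypass when WP login/session cookies appear.
    BYPASS_PREFIXES = ("wordpress_logged_in", "wp-postpass", "comment_author")

    def cacheable(method, cookie_names):
        if method != "GET":
            return False
        return not any(name.startswith(BYPASS_PREFIXES) for name in cookie_names)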

dizlexic•3h ago
I was sysadmining a virtual art gallery with thousands of "exhibits" including sound, video, and images.

We had never had any issues before, and suddenly we got taken down three times in as many days. When I investigated, it was all Claude.

They were just pounding every route regardless of timeouts with no throttle. It was nasty.

They give web scrapers a bad rep.

dylan604•1h ago
Web scrapers earned their bad rep all on their own, thank you very much. This is nothing new. Scrapers have no concern about whether a site is mostly static with stale text or constantly updated. Most sites are not FB/Twitter, er, X/etc. Even retail sites that aren't Amazon don't have new products being listed every minute. But noticing that would require someone on the scraper's side to pay attention; instead they just let the computer run, even if it is reading the same data every time.

Even if sites offered their content in a single downloadable file for bots, the bot creators would not trust that it isn't stale and out of date, so they'd still continue to scrape, ignoring the easy method.

RebeccaTheDev•3h ago
I'll add my voice to others here that this is a huge problem especially for small hobbyist websites.

I help administer a somewhat popular railroading forum. We've had some of these AI crawlers hammering the site to the point that it became unusable to actual human beings. You design your architecture around certain assumptions, and one of those was definitely not "traffic quintuples."

We've ended up blocking lots of them, but it's a neverending game of whack-a-mole.

0cf8612b2e1e•3h ago
This has been widely reported for months now. Anthropic just reported another $13B in funding. Clearly, the companies just do not care to invest any effort in improving their behavior.
idle_zealot•3h ago
This is something I have a hard time understanding. What is the point of this aggressive crawling? Gathering training data? Don't we already have massive repos of scraped web data being used for search indexing? Is this a coordination issue, each little AI startup having to scrape its own data because nobody is willing to share their stuff as regular dumps? For Wikipedia we have the official offline downloads, for books we have books3, but there's not an equivalent for the rest of the web? Could this be solved by some system where website operators submit text copies of their sites to a big database? Then in robots.txt or similar add a line that points to that database with a deep link to their site's mirrored content?

The obvious issues are: a) who would pay to host that database. b) Sites not participating because they don't want their content accessible by LLMs for training (so scraping will still provide an advantage over using the database). c) The people implementing these scrapers are unscrupulous and just won't bother respecting sites that direct them to an existing dumped version of their content. d) Strong opponents to AI will try poisoning the database with fake submissions...

Or does this proposed database basically already exist between Cloudflare and the Internet Archive, and we already know that the scrapers are some combination of dumb and belligerent and refuse to use anything but the live site?

drozycki•2h ago
I asked Google AI Mode “does Google ai mode make tens of site requests for a single prompt” and it showed “Looking at 69 sites” before giving a response about query fan-out.

Cloudflare has some large part of the web cached, IA takes too long to respond and couldn’t handle the load. Google/OpenAI and co could cache these pages but apparently don’t do it aggressively enough or at all

ccgreg•1h ago
I don't think you're correct about Google. Caching webpages is bread-and-butter for search engines, that's how they show snippets.
danudey•1h ago
They might cache it, but what if it changed in the last 30 seconds and now their information is out of date? Better make another request just in case.
ccgreg•53m ago
That's not how search engines work. They have a good idea of which pages might be frequently updated. That's how "news search" works, and even small startup search engines like blekko had news search.
watwut•1h ago
I suspect they simply do not care. Owners of these companies are exactly the sort of people who are genuinely puzzled and offended when someone wants them to think about anything but themselves.

The attitude is visible in everything around AI, why would crawling be different?

dlcarrier•3h ago
It's really bad for anyone using anything other than Chrome to browse the web, or any accessibility tools or privacy software, because a bunch of sites will now block you, assuming you're a web crawler.
mat_b•1h ago
Is this data being collected for training sets? That seems problematic. I can't be the only one who's noticed that the web is quickly filling up with AI generated clickbait (which has made using a search engine more difficult).
1vuio0pswjnm7•1h ago
"It used to be when search indexing crawler, Googlebot, came calling, I could always hope that some story on my site would land on the magical first page of someone's search results so they'd visit me, they'd read the story, and two or three times out of a hundred visits, they'd click on an ad, and I'd get a few pennies of income."

Perhaps the AI crawlers can "click on some ads"

neoromantique•1h ago
I think we are just reaping the delayed storm of the insanely inefficient web we have created over the past decades.

There is absolutely no need for the vast majority of websites to use databases and SSR; most of the web could be statically rendered and cost peanuts to host, but alas, WP is the most popular "framework".

badgersnake•1h ago
Hope we’re including this in the energy usage for LLMs.