
AI crawlers, fetchers are blowing up websites; Meta, OpenAI are worst offenders

https://www.theregister.com/2025/08/21/ai_crawler_traffic/
112•rntn•4h ago

Comments

breakyerself•3h ago
There's so much bullshit on the internet; how do they make sure they're not training on nonsense?
bgwalter•3h ago
Much of it is not training. The LLMs fetch webpages for answering current questions, summarize or translate a page at the user's request etc.

Any bot that answers daily political questions like Grok has many web accesses per prompt.

8organicbits•2h ago
Is an AI chatbot fetching a web page to answer a prompt a 'web scraping bot'? If there is a user actively prompting the LLM, isn't it more of a user agent? My mental model, even before LLMs, was that a human being present changes a bot into a user agent. I'm curious if others agree.
bgwalter•2h ago
The Register calls them "fetchers". They still reproduce the content of the original website without the website gaining anything but additional high load.

I'm not sure how many websites are searched and discarded per query. Since it's the remote, proprietary LLM that initiates the search I would hesitate to call them agents. Maybe "fetcher" is the best term.

danaris•1h ago
But they're (generally speaking) not being asked for the contents of one specific webpage, fetching that, and summarizing it for the user.

They're going out and scraping everything, so that when they're asked a question, they can pull a plausible answer from their dataset and summarize the page they found it on.

Even the ones that actively go out and search/scrape in response to queries aren't just scraping a single site. At best, they're scraping some subset of the entire internet that they have tagged as being somehow related to the query. So even if what they present to the user is a summary of a single webpage, that is rarely going to be the product of a single request to that single webpage. That request is going to be just one of many, most of which are entirely fruitless for that specific query: purely extra load for their servers, with no gain whatsoever.

snowwrestler•2h ago
While it’s true that chatbots fetch information from websites in response to requests, the load from those requests is tiny compared to the volume of requests indexing content to build training corpuses.

The reason is that user requests are similar to other web traffic because they reflect user interest. So those requests will mostly hit content that is already popular, and therefore well-cached.

Corpus-building crawlers do not reflect current user interest and try to hit every URL available. As a result these hit URLs that are mostly uncached. That is a much heavier load.
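The cache effect described above can be sketched with a toy simulation (standard-library Python only; the Zipf-like user distribution, URL count, and cache size are illustrative assumptions, not measurements):

```python
import random
from collections import OrderedDict

def hit_rate(requests, cache_size):
    """Replay a request stream through an LRU cache and report the hit rate."""
    cache, hits = OrderedDict(), 0
    for url in requests:
        if url in cache:
            hits += 1
            cache.move_to_end(url)
        else:
            cache[url] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict the least recently used URL
    return hits / len(requests)

random.seed(0)
n_urls, cache_size, n_requests = 100_000, 1_000, 50_000

# User traffic: popularity-skewed (rough Zipf-like tail), so most requests
# land on a small set of popular, already-cached pages.
users = [int(random.paretovariate(1.2)) % n_urls for _ in range(n_requests)]

# Corpus-building crawler: tries to hit every URL regardless of popularity.
crawler = [random.randrange(n_urls) for _ in range(n_requests)]

print(f"user-like traffic hit rate:    {hit_rate(users, cache_size):.0%}")
print(f"crawler-like traffic hit rate: {hit_rate(crawler, cache_size):.0%}")
```

With these numbers the user-like stream is mostly cache hits while the crawler stream is almost all misses, i.e. almost every crawler request reaches the origin server.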

shikon7•1h ago
But surely there aren't thousands of new corpuses built every minute.
bgwalter•1h ago
Why would the Register point out Meta and OpenAI as the worst offenders? I'm sure they do not continuously build new corpuses every day. It is probably the search function, as mentioned in the top comments.
prasadjoglekar•3h ago
By paying a pretty penny for non-bullshit data (Scale AI). That and Nvidia are the shovels in this gold rush.
danaris•2h ago
I mean...they don't. That's part of the problem with "AI answers" and such.
shinycode•3h ago
At the same time, it's so practical to ask a question and have it open 25 pages to search and summarize the answer. Before, that's more or less what I was trying to do by hand. Maybe not 25 websites, because thanks to crap SEO the top 10 results contain BS content, so I curated the list, but the idea is the same, no?
pm215•2h ago
Sure, but if the fetcher is generating "39,000 requests per minute" then surely something has gone wrong somewhere ?
miohtama•2h ago
Even if it is generating 39k req/minute, I would expect most of the pages to already be cached locally by Meta, or served statically by their respective hosts. We have been working hard on caching websites, and it has been a solved problem for the last decade or so.
ndriscoll•1h ago
Could be serving no-cache headers? Seems like yet another problem stemming from every website being designed as if it were some dynamic application when nearly all of them are static documents. nginx doing 39k req/min to cacheable pages on an n100 is what you might call "98% idle", not "unsustainable load on web servers".

The data transfer, on the other hand, could be substantial and costly. Is it known whether these crawlers respect caching at all? Do they provide If-Modified-Since/If-None-Match or anything like that?
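The conditional-request behavior asked about here is cheap to implement on the client side. A minimal sketch of a fetcher that revalidates with If-None-Match instead of re-downloading unchanged pages (standard library only; the user-agent string and the in-memory ETag store are illustrative assumptions):

```python
import urllib.request
import urllib.error

etag_store = {}  # url -> last ETag the server sent us

def build_request(url):
    """Build a GET that asks for a body only if the page has changed."""
    req = urllib.request.Request(url, headers={"User-Agent": "example-fetcher/0.1"})
    if url in etag_store:
        # Tell the server which version we already have.
        req.add_header("If-None-Match", etag_store[url])
    return req

def polite_fetch(url):
    """Fetch a page, remembering its ETag; returns None on 304 Not Modified."""
    try:
        with urllib.request.urlopen(build_request(url)) as resp:
            etag = resp.headers.get("ETag")
            if etag:
                etag_store[url] = etag
            return resp.read()
    except urllib.error.HTTPError as err:
        if err.code == 304:  # unchanged since our last fetch; reuse cached copy
            return None
        raise
```

A 304 response carries no body, so for static pages this turns most of the transfer cost into a few header bytes.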

mrweasel•24m ago
Many AI crawlers seem to go to great lengths to avoid caches, not sure why.
andai•2h ago
They're not very good at web queries, if you expand the thinking box to see what they're searching for, like half of it is nonsense.

e.g. they'll take an entire sentence the user said and put it in quotes for no reason.

Thankfully search engines started ignoring quotes years ago, so it balances out...

rco8786•2h ago
My personal experience is that OpenAI's crawler was hitting a very, very low traffic website I manage tens of thousands of times a minute, non-stop. I had to block it via Cloudflare.
danaris•2h ago
Same here.

I run a very small browser game (~120 weekly users currently), and until I put its Wiki (utterly uninteresting to anyone who doesn't already play the game) behind a login-wall, the bots were causing massive amounts of spurious traffic. Due to some of the Wiki's data coming live from the game through external data feeds, the deluge of bots actually managed to crash the game several times, necessitating a restart of the MariaDB process.

mrweasel•26m ago
Wikis seem to attract AI bots like crazy, especially the bad kind that will attempt any type of cache invalidation available to them.
Leynos•1h ago
Where is caching breaking so badly that this is happening? Are OpenAI failing to use etags or honour cache validity?
Analemma_•1h ago
Their crawler is vibe-coded.
internet_points•2h ago
They mention Anubis, Cloudflare, robots.txt – does anyone have experience with how much any of them actually helps?
nromiun•2h ago
CDNs like Cloudflare are the best. Anubis is a rate limiter for small websites where you can't or won't use CDNs like Cloudflare. I have used Cloudflare on several medium-sized websites and it works really well.

Anubis's creator says the same thing:

> In most cases, you should not need this and can probably get by using Cloudflare to protect a given origin. However, for circumstances where you can't or won't use Cloudflare, Anubis is there for you.

Source: https://github.com/TecharoHQ/anubis

bakugo•1h ago
robots.txt is obviously only effective against well-behaved bots. OpenAI etc are usually well behaved, but there's at least one large network of rogue scraping bots that ignores robots.txt, fakes the user-agent (usually to some old Chrome version) and cycles through millions of different residential proxy IPs. On my own sites, this network is by far the worst offender and the "well-behaved" bots like OpenAI are barely noticeable.

To stop malicious bots like this, Cloudflare is a great solution if you don't mind using it (you can enable a basic browser check for all users and all pages, or write custom rules to only serve a check to certain users or on certain pages). If you're not a fan of Cloudflare, Anubis works well enough for now if you don't mind the branding.

Here's the cloudflare rule I currently use (vast majority of bot traffic originates from these countries):

  ip.src.continent in {"AF" "SA"} or
  ip.src.country in {"CN" "HK" "SG"} or
  ip.src.country in {"AE" "AO" "AR" "AZ" "BD" "BR" "CL" "CO" "DZ" "EC" "EG" "ET" "ID" "IL" "IN" "IQ" "JM" "JO" "KE" "KZ" "LB" "MA" "MX" "NP" "OM" "PE" "PK" "PS" "PY" "SA" "TN" "TR" "TT" "UA" "UY" "UZ" "VE" "VN" "ZA"} or
  ip.src.asnum in {28573 45899 55836}
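For the well-behaved bots, a robots.txt along these lines covers the major AI crawlers. These user-agent tokens (GPTBot, ClaudeBot, CCBot, meta-externalagent) are the ones the vendors document, but verify against their current docs; and as noted above, it does nothing against the rogue network that ignores robots.txt:

```
# Only honored by well-behaved crawlers.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: meta-externalagent
Disallow: /
```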
hombre_fatal•1h ago
CloudFlare's Super Bot Fight Mode completely killed the surge in bot traffic for my large forum.
ajsnigrutin•44m ago
And added captchas to every user with an adblock or sensible privacy settings.
rco8786•2h ago
OpenAI straight up DoSed a site I manage for my in-laws a few months ago.
muzani•1h ago
What is it about? I'm curious what kinds of things people ask that floods sites.
average_r_user•1h ago
I suppose they just keep referring to the website in their chats, and they probably have the search function selected, so before every reply the crawler hits the website.
hereme888•2h ago
I'm absolutely pro AI-crawlers. The internet is so polluted with garbage, compliments of marketing. My AI agent should find and give me concise and precise answers.
lionkor•1h ago
The second I get hit with bot traffic that makes my server heat up, I'll just slam some aggressive anti-bot stuff in front of it. Then you, my friend, are getting nothing with your fancy AI agent.
mediumsmart•1h ago
so the fancy AI agent will have to get really fancy and mimic human traffic and all is good until the server heats up from all those separate human trafficionados - then what?
hereme888•1h ago
I've never run any public-facing servers, so maybe I'm missing the experience behind your frustration. But mine, as a "consumer", is wanting clean answers, like what you'd expect when asking your own employee for information.
mrweasel•17m ago
They just don't need to hammer sites into the ground to do it. This wouldn't be an issue if the AI companies were a bit more respectful of their data sources, but they are not; they don't care.

All this attempting to block AI scrapers wouldn't be necessary if they respected rate limits, knew how to back off when a server starts responding too slowly, or cached frequently visited sites. Instead some of these companies will do everything, including using residential ISPs, to ensure that they can just piledrive the website of some poor dude who's just really into lawnmowers, or the git repo of some open source developer who just wants to share their work.

Very few people would actually be against AI crawlers if they showed just the tiniest amount of respect, but they don't. I think Drew DeVault said it best: "Please stop externalizing your costs directly into my face"

exasperaited•1h ago
Xe Iaso is my spirit animal.

> "I don't know what this actually gives people, but our industry takes great pride in doing this"

> "unsleeping automatons that never get sick, go on vacation, or need to be paid health insurance that can produce output that superficially resembles the output of human employees"

> "This is a regulatory issue. The thing that needs to happen is that governments need to step in and give these AI companies that are destroying the digital common good existentially threatening fines and make them pay reparations to the communities they are harming."

<3 <3

delfinom•1h ago
I run a symbol server, as in, PDB debug symbol server. Amazon's crawler and a few others love requesting the ever loving shit out of it for no obvious reason. Especially since the files are binaries.

I just set a rate-limit in cloudflare because no legitimate symbol server user will ever be excessive.
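For anyone wanting the same without Cloudflare, the equivalent idea as an nginx sketch (zone name, rate, burst, and upstream address are placeholders, not a recommendation for any particular values):

```nginx
# Allow ~10 req/s per client IP with a small burst; everything beyond gets a 429.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 80;
    location / {
        limit_req zone=perip burst=20 nodelay;
        limit_req_status 429;
        proxy_pass http://127.0.0.1:8080;  # the actual symbol server
    }
}
```

Per-IP limits won't stop the rogue bots that rotate residential proxies, but they do blunt the single-crawler hammering described in this thread.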

ack_complete•40m ago
I have a simple website consisting solely of static webpages pointing to a bunch of .zip binaries. Nothing dynamic, all highly cacheable. The bots are re-downloading the binaries over and over. I can see Bingbot downloading a .zip file in the logs, and then an hour later another Bingbot instance from a different IP in the same IP range downloading the same .zip file in full. These are files that were uploaded years ago and have never retroactively changed, and don't contain crawlable contents within them (executable code).

Web crawlers have been around for years, but many of the current ones are more indiscriminate and less well behaved.

xrd•1h ago
Isn't there a class action lawsuit coming from all this? I see a bunch of people here indicating these scrapers are costing real money to people who host even small niche sites.

Is the reason these large companies don't care because they are large enough to hide behind a bunch of lawyers?

outside1234•45m ago
Yes. There is one set of rules for us and another set for anything with more than a billion dollars.
EgregiousCube•27m ago
Under what law? It's interesting because these are sites that host content for the purpose of providing it to anonymous network users. eBay won a case against a scraper back in 2000 by claiming that the server load was harming them, but that reasoning was later overturned because it's difficult to show that server load is actual harm. eBay was in the same condition before and after a scrape.

Maybe some civil lawsuit about terms of service? You'd have to prove that the scraper agreed to the terms of service. Perhaps in the future all CAPTCHAs come with a TOS click-through agreement? Or perhaps every free site will have a login wall?

timsh•1h ago
A bit off-topic but wtf is this preview image of a spider in the eye? It’s even worse than the clickbait title of this post. I think this should be considered bad practice.
lostmsu•1h ago
This article and the "report" look like a submarine ad for Fastly services. At no point does it mention the human/bot/AI bot ratio, making it useless for any real insights.
jasoncartwright•56m ago
I recently, for pretty much the first time ever in 30 years of running websites, had to blanket ban crawlers. I now whitelist a few, but the rest (and all other non-UK visitors) have to pass a Cloudflare challenge [1].

AI crawlers were downloading whole pages and executing all the javascript tens of millions of times a day - hurting performance, filling logs, skewing analytics and costing too much money in Google Maps loads.

Really disappointing.

[1] https://developers.cloudflare.com/cloudflare-challenges/

tehwebguy•25m ago
This is a feature! If half the internet is nuked and the other half put up fences there is less readily available training data for competitors.
bwb•15m ago
My book discovery website shepherd.com is getting hammered every day by AI crawlers (and crashing often)... my security lists in CloudFlare are ridiculous and the bots are getting smarter.

I wish there were a better way to solve this.

sct202•11m ago
I wonder how much of the rapid expansion of datacenters is from trying to support bot traffic.
pjc50•47s ago
Place alongside https://news.ycombinator.com/item?id=44962529 "Why are anime catgirls blocking my access to the Linux kernel?"

AI is going to damage society not in fancy sci-fi ways but by centralizing profit made at the expense of everyone else on the internet, who is then forced to erect boundaries to protect themselves, worsening the experience for the rest of the public. Who also have to pay higher electricity bills, because keeping humans warm is not as profitable as a machine which directly converts electricity into stock price rises.

AWS CEO says using AI to replace junior staff is 'Dumbest thing I've ever heard'

https://www.theregister.com/2025/08/21/aws_ceo_entry_level_jobs_opinion/
633•JustExAWS•2h ago•230 comments

Apple Watch wearable foundation model

https://arxiv.org/abs/2507.00191
31•brandonb•1h ago•4 comments

How Well Does the Money Laundering Control System Work?

https://www.journals.uchicago.edu/doi/10.1086/735665
92•PaulHoule•2h ago•65 comments

Weaponizing image scaling against production AI systems

https://blog.trailofbits.com/2025/08/21/weaponizing-image-scaling-against-production-ai-systems/
128•tatersolid•3h ago•31 comments

Using Podman, Compose and BuildKit

https://emersion.fr/blog/2025/using-podman-compose-and-buildkit/
149•LaSombra•4h ago•31 comments

Launch HN: Skope (YC S25) – Outcome-based pricing for software products

8•benjsm•33m ago•1 comments

D4d4

https://www.nmichaels.org/musings/d4d4/d4d4/
343•csense•4d ago•41 comments

Show HN: ChartDB Cloud – Visualize and Share Database Diagrams

https://app.chartdb.io
37•Jonathanfishner•2h ago•7 comments

Show HN: OS X Mavericks Forever

https://mavericksforever.com/
157•Wowfunhappy•2d ago•46 comments

Mark Zuckerberg freezes AI hiring amid bubble fears

https://www.telegraph.co.uk/business/2025/08/21/zuckerberg-freezes-ai-hiring-amid-bubble-fears/
345•pera•4h ago•354 comments

Show HN: Using Common Lisp from Inside the Browser

https://turtleware.eu/posts/Using-Common-Lisp-from-inside-the-Browser.html
42•jackdaniel•3h ago•7 comments

You Should Add Debug Views to Your DB

https://chrispenner.ca/posts/views-for-debugging
19•ezekg•3d ago•8 comments

Activeloop (YC S18) Is Hiring Member of Technical Staff – Back End Engineering

https://careers.activeloop.ai/
1•davidbuniat•3h ago

Margin debt surges to record high

https://www.advisorperspectives.com/dshort/updates/2025/07/23/margin-debt-surges-record-high-june-2025
121•pera•4h ago•142 comments

Sütterlin

https://en.wikipedia.org/wiki/S%C3%BCtterlin
44•anonu•3d ago•33 comments

Why is D3 so Verbose?

https://theheasman.com/short_stories/why-is-d3-code-so-long-and-complicated-or-why-is-it-so-verbose/
38•TheHeasman•5h ago•25 comments

In a first, Google has released data on how much energy an AI prompt uses

https://www.technologyreview.com/2025/08/21/1122288/google-gemini-ai-energy/
65•jeffbee•1h ago•59 comments

Unification (2018)

https://eli.thegreenplace.net/2018/unification/
62•asplake•3d ago•13 comments

AI crawlers, fetchers are blowing up websites; Meta, OpenAI are worst offenders

https://www.theregister.com/2025/08/21/ai_crawler_traffic/
112•rntn•4h ago•47 comments

Why are anime catgirls blocking my access to the Linux kernel?

https://lock.cmpxchg8b.com/anubis.html
706•taviso•1d ago•753 comments

Show HN: I replaced vector databases with Git for AI memory (PoC)

https://github.com/Growth-Kinetics/DiffMem
152•alexmrv•9h ago•36 comments

Show HN: I was curious about spherical helix, ended up making this visualization

https://visualrambling.space/moving-objects-in-3d/
822•damarberlari•1d ago•132 comments

A Conceptual Model for Storage Unification

https://jack-vanlightly.com/blog/2025/8/21/a-conceptual-model-for-storage-unification
10•avinassh•2h ago•0 comments

Home Depot sued for 'secretly' using facial recognition at self-checkouts

https://petapixel.com/2025/08/20/home-depot-sued-for-secretly-using-facial-recognition-technology-on-self-checkout-cameras/
277•mikece•1d ago•368 comments

Sixteen bottles of wine riddle

https://chriskw.xyz/2025/08/11/Wine/
38•chriskw•3d ago•11 comments

A statistical analysis of Rotten Tomatoes

https://www.statsignificant.com/p/is-rotten-tomatoes-still-reliable
201•m463•15h ago•114 comments

To Infinity but Not Beyond

https://meyerweb.com/eric/thoughts/2025/08/20/to-infinity-but-not-beyond/
36•roosgit•6h ago•2 comments

Epson MX-80 Fonts

https://mw.rat.bz/MX-80/
143•m_walden•4d ago•56 comments

Code review can be better

https://tigerbeetle.com/blog/2025-08-04-code-review-can-be-better/
345•sealeck•16h ago•203 comments

Python f-string cheat sheets (2022)

https://fstring.help/cheat/
120•shlomo_z•10h ago•25 comments