Anna's Archive: An Update from the Team

https://annas-archive.org/blog/an-update-from-the-team.html
522•jerheinze•3h ago•185 comments

Show HN: We started building an AI dev tool but it turned into a Sims-style game

https://www.youtube.com/watch?v=sRPnX_f2V_c
30•max-raven•43m ago•13 comments

My Retro TVs

https://www.myretrotvs.com/
78•the-mitr•2h ago•15 comments

How much do electric car batteries degrade?

https://www.sustainabilitybynumbers.com/p/electric-car-battery-degradation
41•xnx•1h ago•35 comments

FFmpeg Assembly Language Lessons

https://github.com/FFmpeg/asm-lessons
236•flykespice•5h ago•68 comments

Show HN: Whispering – Open-source, local-first dictation you can trust

https://github.com/epicenter-so/epicenter/tree/main/apps/whispering
63•braden-w•2h ago•17 comments

Show HN: I built an app to block Shorts and Reels

https://scrollguard.app/
350•adrianhacar•2d ago•133 comments

The Cutaway Illustrations of Fred Freeman

https://5wgraphicsblog.com/2016/10/24/the-cutaway-illustrations-of-fred-freeman/
37•Michelangelo11•2d ago•3 comments

TREAD: Token Routing for Efficient Architecture-Agnostic Diffusion Training

https://arxiv.org/abs/2501.04765
26•fzliu•2h ago•3 comments

The Weight of a Cell

https://www.asimov.press/p/cell-weight
57•arbesman•4h ago•20 comments

Launch HN: Reality Defender (YC W22) – API for Deepfake and GenAI Detection

https://www.realitydefender.com/platform/api
45•bpcrd•4h ago•22 comments

Web apps in a single, portable, self-updating, vanilla HTML file

https://hyperclay.com/
541•pil0u•12h ago•192 comments

Who Invented Backpropagation?

https://people.idsia.ch/~juergen/who-invented-backpropagation.html
126•nothrowaways•3h ago•63 comments

Typechecker Zoo

https://sdiehl.github.io/typechecker-zoo/
97•todsacerdoti•3d ago•17 comments

Electromechanical reshaping, an alternative to laser eye surgery

https://medicalxpress.com/news/2025-08-alternative-lasik-lasers.html
191•Gaishan•9h ago•83 comments

Finding a Successor to the FHS

https://lwn.net/SubscriberLink/1032947/67e23ce1a3f9f129/
14•firexcy•12h ago•6 comments

Turning an iPad Pro into the Ultimate Classic Macintosh (2021)

https://blog.gingerbeardman.com/2021/04/17/turning-an-ipad-pro-into-the-ultimate-classic-macintosh/
56•rcarmo•2h ago•7 comments

A gigantic jet caught on camera: A spritacular moment for NASA astronaut

https://science.nasa.gov/science-research/heliophysics/a-gigantic-jet-caught-on-camera-a-spritacular-moment-for-nasa-astronaut-nicole-ayers/
364•acossta•3d ago•87 comments

Image Fulgurator (2011)

https://juliusvonbismarck.com/bank/index.php/projects/image-fulgurator/2/
33•Liftyee•2d ago•2 comments

Vibe coding tips and tricks

https://github.com/awslabs/mcp/blob/main/VIBE_CODING_TIPS_TRICKS.md
146•mooreds•6h ago•70 comments

The lottery ticket hypothesis: why neural networks work

https://nearlyright.com/how-ai-researchers-accidentally-discovered-that-everything-they-thought-about-learning-was-wrong/
8•076ae80a-3c97-4•2h ago•0 comments

T-Mobile claimed selling location data without consent is legal–judges disagree

https://arstechnica.com/tech-policy/2025/08/t-mobile-claimed-selling-location-data-without-consent-is-legal-judges-disagree/
6•Bender•10m ago•0 comments

SystemD Service Hardening

https://roguesecurity.dev/blog/systemd-hardening
218•todsacerdoti•14h ago•80 comments

Countrywide natural experiment links built environment to physical activity

https://www.nature.com/articles/s41586-025-09321-3
29•Anon84•2d ago•17 comments

Sky Calendar

https://abramsplanetarium.org/SkyCalendar/index.html
51•NaOH•3d ago•3 comments

The Lives and Loves of James Baldwin

https://www.newyorker.com/magazine/2025/08/18/baldwin-a-love-story-nicholas-boggs-book-review
79•Caiero•20h ago•11 comments

AWS pricing for Kiro dev tool dubbed 'a wallet-wrecking tragedy'

https://www.theregister.com/2025/08/18/aws_updated_kiro_pricing/
69•rntn•2h ago•47 comments

8x19 Text Mode Font Origins

https://www.os2museum.com/wp/8x19-text-mode-font-origins/
62•userbinator•2d ago•21 comments

Walkie-Textie Wireless Communicator

http://www.technoblogy.com/show?2AON
117•chrisjj•2d ago•79 comments

Class-action suit claims Otter AI records private work conversations

https://www.npr.org/2025/08/15/g-s1-83087/otter-ai-transcription-class-action-lawsuit
129•nsedlet•5h ago•30 comments

Robots.txt is a suicide note (2011)

https://wiki.archiveteam.org/index.php/Robots.txt
70•rafram•2h ago

Comments

rafram•2h ago
I think this is kind of misguided - it ignores the main reason sites use robots.txt, which is to exclude irrelevant/old/non-human-readable pages that nevertheless need to remain online from being indexed by search engines - but it's an interesting look at Archive Team's rationale.
xp84•1h ago
Yes, and I'd add dynamically generated URLs of infinite variability, which give automated traffic two separate but equally important reasons to stay away:

1. You (bot) are wasting your bandwidth, CPU, storage on a literally unbounded set of pages

2. This may or may not cause resource problems for the site's owner (e.g., suppose they use Algolia to power search and you search for 10,000,000 different search terms... and Algolia charges them by volume of searches.)

The author of this angry rant seems specifically ticked off at some perceived 'bad actor' who is using robots.txt as an attempt to "block people from getting at stuff," but it's super misguided in that it ignores an entire purpose of robots.txt that isn't even necessarily adversarial to the "robot."

This whole thing could have been a single sentence: "Robots.txt has a few competing vague interpretations and is voluntary; not all bots obey it, so if you're fully relying on it to prevent a site from being archived, that won't work."

paulddraper•1h ago
Correct.

That has been one of the biggest uses -- improving SEO by preventing web crawlers from getting lost or confused in a maze of irrelevant content.

rolph•1h ago
The Archive Team statements in the article are sure to win special attention. I think this could be footgunning, and .IF archiveteam .THEN script.exe pleasantries.
bonaldi•1h ago
Not sure the emotive language is warranted. Message appears to be “if you use robots.txt AND archive sites honor it AND you are dumb enough to delete your data without a backup THEN you won’t have a way to recover and you’ll be sorry”.

It also presumes that dealing with automated traffic is a solved problem, which, with the volume of LLM scraping going on, is simply not true for more hobbyist setups.

bigbuppo•1h ago
Or major web properties for that matter.
paulddraper•1h ago
> volumes of LLM scraping

FWIW I have not seen a reputable report on how much web scraping has actually increased in the past 3 years.

(Wikipedia being a notable exception...but I would guess Wikipedia to see a far larger increase than anything else.)

esseph•42m ago
It's hard because of attribution, but it absolutely is happening at very high volume. I actually got an alert from our monitoring tools this morning when I woke up that some external sites were being scraped. It happens multiple times a day.

A lot of it is coming through compromised residential endpoint botnets.

tolmasky•8m ago
Wikipedia says their traffic increased roughly 50% [1] from AI bots, which is a lot, sure, but nowhere near the amount where you'd have to rearchitect your site or something. And this checks out: if it were actually debilitating, you'd notice Wikipedia's performance degrade. It hasn't. You'd see them taking some additional steps to combat this. They haven't. Their CDN handles it just fine. They don't even bother telling AI bots to just download the tarballs they specifically make available for this exact use case.

More importantly, Wikipedia almost certainly represents the ceiling of traffic increase. But luckily, we don't have to work with such coarse estimation, because according to Cloudflare, the total increase from combined search and AI bots in the last year (May 2024 - May 2025) has just been... 18% [2].

The way you hear people talk about it, though, you'd think that servers are now receiving DDoS-levels of traffic or something. For the life of me I have not been able to find a single verifiable case of this. Which, if you think about it, makes sense... It's hard to generate that sort of traffic; that's one of the reasons people pay for botnets. You don't bring a site to its knees merely by accidentally "not making your scraper efficient". So the only other possible explanation would be such a large number of scrapers simultaneously but independently hitting sites. But this also doesn't check out. There aren't thousands of different AI scrapers out there that in aggregate are resulting in huge traffic spikes [2]. Again, the total combined increase is 18%.

The more you look into this accepted idea that we are in some sort of AI scraping traffic apocalypse, the less anything makes sense. You then look at this Anubis "AI scraping mitigator" and... I dunno. The author contends that one of its tricks is that it not only uses JavaScript, but "modern JavaScript like ES6 modules," and that this is one of the ways it detects/prevents AI scrapers [3]. No one is rolling their own JS engine for a scraper such that they are blocked by their inability to keep up with the latest ECMAScript spec. You just use an existing JS engine, all of which support all these features. It would actually be a challenge to find an old JS engine these days.

The entire thing seems to be built on the misconception that the "common" way to build a scraper is doing something curl-esque. This idea is entirely based on the Google scraper, which itself doesn't even work that way anymore, and only ever did because it was written in the 90s. Everyone who rolls their own scraper these days just uses Puppeteer. It is completely unrealistic to make a scraper that doesn't run JavaScript and wait for the page to "settle down", because so many pages, even blogs, are just entirely client-side rendered SPAs. If I were to write a quick and dirty scraper today I would trivially bypass Anubis' protections... by doing literally nothing and without even realizing Anubis exists. Just using standard scraping practices with Puppeteer. Meanwhile, Anubis is absolutely blocking plenty of real humans, with the author, for example, telling people to turn on cookies so that Anubis can do its job [4]. I don't think Anubis is blocking anything other than humans and Message's link preview generator.

I'm investigating further, but I think this entire thing may have started due to some confusion, but want to see if I can actually confirm this before speculating further.

1. https://www.techspot.com/news/107407-wikipedia-servers-strug... (notice the clickbait title vs. the actual contents)

2. https://blog.cloudflare.com/from-googlebot-to-gptbot-whos-cr...

3. https://codeberg.org/forgejo/discussions/issues/319#issuecom...

4. https://github.com/TecharoHQ/anubis/issues/964#issuecomment-...
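For illustration, here is a minimal sketch of the kind of "quick and dirty" Puppeteer scraper described above; the URL is a placeholder. Because Puppeteer drives a real headless Chromium, modern JavaScript (ES6 modules included) executes exactly as it would for a human visitor, so a feature-based JS check never comes into play.

    import puppeteer from "puppeteer";

    // Minimal sketch: fetch a page the way a typical homemade scraper does today.
    async function scrape(url: string): Promise<string> {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      // Wait for the network to settle so client-side rendered SPAs finish painting.
      await page.goto(url, { waitUntil: "networkidle2" });
      const html = await page.content(); // fully rendered DOM, scripts already executed
      await browser.close();
      return html;
    }

    scrape("https://example.com/").then((html) => console.log(html.length));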

xena•4m ago
Hi, main author of Anubis here. How am I meant to store state like "user passed a check" without cookies? Please advise.
QuercusMax•40m ago
I just plain don't understand what they mean by "suicide note" in this case, and it doesn't seem to be explained in the text.

A better analogy would be "Robots.txt is a note saying your backdoor might be unlocked".

chao-•22m ago
I also cannot figure out from context what part of this is "suicide".

I don't even think it's a note saying your back door is unlocked? As I and others shared in a sibling comment thread, we have worked at places that implemented robots.txt in order to prevent bots from getting into nearly-infinite tarpits of links that lead to nearly-identical pages.

SCdF•1h ago
This wiki page was created in 2011, in case you're wondering how long they've held this position
tracerbulletx•1h ago
This is a screed that does not address a single point of the actual philosophical issue.

The issue is a debate over what the expectations are for content posted on the public internet. There is the viewpoint that it should be totally machine-operable and programmatic, that if you want it to be private you should gate it behind authentication, and that the semantic web is an important concept whose violation is a breach of protocol. There's also the argument that it's your content, no one has a right to it, and you should be able to license its use any way you want. There is a trade-off between the implications of the two.

procaryote•1h ago
Not having things archived because you explicitly opted out of crawling is a feature, not a bug

Otherwise you can whitelist a specific crawler in robots.txt
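A minimal sketch of what that whitelisting could look like; the archiver's user-agent token is a made-up example, and an empty Disallow means "allow everything":

    # Keep general crawlers out...
    User-agent: *
    Disallow: /

    # ...but let one named archiver through (example token, not a real bot name).
    User-agent: example-archive-bot
    Disallow: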

rzzzt•1h ago
(I understand it is a different entity.) archive.org at one point started to honor the robots.txt settings of a website's current owner, hiding archived copies you could browse in the past. I don't know whether they still do this.
hosh•1h ago
I absolutely will use a robots.txt on my personal sites, which will include a tarpit.

This has nothing to do with keeping my webserver from crashing, and has more to do with crawlers using content to train AI.

Anything I actually want to keep as a legacy, I’ll store with permanent.org

soiltype•1h ago
I have more complaints about this shitty article than it is worth. At least it's clearly a human screed, not LLM generated.

Just say you won't honor it and move on.

_Algernon_•1h ago
I mean the main reason is that robots.txt is pointless these days.

When it was introduced, the web was largely a collaborative project within the academic realm. A system based on the honor system worked for the most part.

These days the web is adversarial through and through. A robots.txt file seems like an anachronistic, almost quaint museum piece, reminding us of what once was, while we plunge headfirst into tech feudalism.

RajT88•1h ago
In fact, the problem of the "never-ending September" has evolved into "the never-ending barrage of Septemberbots and AI vacuum bots".

The horrors of the 1990s internet are quaint by comparison to the society-level problems we now have.

rolph•1h ago
It's not a request anymore; it's often a warning not to go any farther, lest ye be zip-bombed or tarpitted into wasting bandwidth and time.
jawns•1h ago
Is a person not allowed to put up a "no trespassing" sign on their land unless they have a reason that makes sense to would-be trespassers?

I know that ignoring a robots.txt file doesn't carry the same legal consequences as trespassing on physical land, but it's still going against the expressed wishes of the site owner.

Sure, you can argue that the site owner should restrict access using other gates, just as you might argue a land owner should put up a fence.

But isn't this a weird version of Chesterton's Fence, where a person decides that they will trespass beyond the fenced area because they can see no reason why the area should be fenced?

layer8•1h ago
(2011)
rafram•1h ago
Thanks, added to title.
Sanzig•1h ago
Ugh. Yeah, this misses the point: not everyone wants their content archived. Of course, there are no feasible technical means to prevent this from happening, so robots.txt is a friendly way of saying "hey, don't save this stuff." Just because there's no technical reason you can't archive doesn't mean that you shouldn't respect someone's wishes.

It's a bit like going to a clothing optional beach with a big camera and taking a bunch of photos. Is what you're doing legal? In most countries, yes. Are you an asshole for doing it? Also yes.

xg15•1h ago
Set up a tarpit, put it in the robots.txt as a global exclusion, watch hilarity ensue for all the crawlers that ignore the exclusion.
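For reference, a sketch of that global exclusion, with a hypothetical /tarpit/ path; anything that then shows up inside /tarpit/ in the logs has, by definition, ignored the file:

    User-agent: *
    Disallow: /tarpit/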
madamelic•1h ago
robots.txt is the digital equivalent of "one piece per person" on an unwatched Halloween bowl.

The people who wouldn't take more don't need the sign, and the people who want to will do it anyway.

If you don't want crawling, there are other ways to prevent / slow down crawling than asking nicely.

blipvert•1h ago
Alternatively, it’s the equivalent of having a sign saying “Caution, Tarpit” and having a tarpit.

You’re welcome to ride if you obey the rules of carriage.

Don’t make me tap the sign.

notatoad•36m ago
>Alternatively, it’s the equivalent of having a sign saying “Caution, Tarpit”.

Yeah, the fact that it is actually useful for blocking crawlers is kind of misleading. It's called "robots.txt"; it's there to help the robots, not to block them. You use it to help a robot crawl your site more efficiently, and to tell it what not to bother looking at so it doesn't waste its time.

People seem to have forgotten really quickly that making your website as accessible as possible to crawlers was actually considered a good thing, and there was a whole industry around optimizing websites for search engine crawlers.

knome•1h ago
Given this is from a group determined to copy and archive your data with or without your permission, their opinions on the usefulness of ROBOTS.TXT seem kind of irrelevant. Of course they aren't going to respect it. They see themselves as 'rogue digital archivists', and being edgy and legally rather grey is part of their self-image. They're going to back it up, regardless of who says they can't.

For the rest of the net, ROBOTS.TXT is still often used for limiting the blast radius of search engines, setting bot crawl delays, and other "we know you're going to download this, please respect these provisions" situations, as a sort of gentlemen's agreement. The site operator won't blackhole your net-ranges if you abide by their terms. That's a reasonably useful thing to have.

kazinator•1h ago
If you don't obey someone's robots.txt, your bot will end up in their honeypot: be prepared for zip bombs, generated infinite recursions, and whatnot. You had better have good counter-countermeasures.

robots.txt is helping you identify which parts of the website the author believes are of interest for search indexing or AI training or whatever.

Fetching robots.txt and behaving in a conforming manner can open doors for you. If I spot a bot like that in my logs, I might whitelist it and feed it a different robots.txt.
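A minimal sketch of that last idea, assuming a small Node/Express front end; the bot-name patterns in the allow-list are hypothetical:

    import express from "express";

    const app = express();

    // Hypothetical allow-list of crawlers that have behaved well in the logs.
    const trustedBots = [/friendly-archive-bot/i, /well-behaved-crawler/i];

    app.get("/robots.txt", (req, res) => {
      const ua = req.get("User-Agent") ?? "";
      const trusted = trustedBots.some((pattern) => pattern.test(ua));
      res.type("text/plain");
      // Trusted bots get a relaxed policy; everyone else gets the default one.
      res.send(
        trusted
          ? "User-agent: *\nDisallow: /admin/\nCrawl-delay: 5\n"
          : "User-agent: *\nDisallow: /\n"
      );
    });

    app.listen(8080);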

paulddraper•1h ago
tbf most bots do that nowadays.
giancarlostoro•1h ago
It's mostly for search engines to figure out how to crawl your website. Use it sparingly.
dang•1h ago
Related:

ROBOTS.TXT is a suicide note - https://news.ycombinator.com/item?id=13376870 - Jan 2017 (30 comments)

Robots.txt is a suicide note - https://news.ycombinator.com/item?id=2531219 - May 2011 (91 comments)

rglover•1h ago
I see old stuff like this and it starts to become clear why the web is in tatters today. It may not be respected, but unless you have a really silly config (I'm hard-pressed to even guess what you could do short of a weird redirect loop), it won't be doing any harm.

> What this situation does, in fact, is cause many more problems than it solves - catastrophic failures on a website are ensured total destruction with the addition of ROBOTS.TXT.

Of course an archival pedant [1] will tell you it's a bad idea (because it makes their archival process less effective)—but this is one of those "maybe you should think for yourself and not just implement what some rando says on the internet" moments.

If you're using version control, running backups, and not treating your production env like a home computer (i.e., you're aware of the ephemeral nature of a disk on a VPS), you're fine.

[1] Archivists are great (and should be supported), but when you turn it into a crusade, you get foolish, generalized takes like this wiki.

bigstrat2003•1h ago
I really lost a lot of respect for the team when I read this page. No matter how good their intentions are, by deliberately ignoring robots.txt they are behaving just as badly as the various AI companies (and other similar entities) that scrape data against the wishes of the site owner. They are, in other words, directly contributing to the destruction of the commons by abusing trust and ensuring that everyone has to treat each other as a potential bad actor. Dick move, Archive Team.
akk0•55m ago
Mind, you're reading a 14-year-old page. I honestly don't see any value in this being posted on HN.
hyperpape•1h ago
Regarding silly configurations: https://danluu.com/googlebot-monopoly/.
snowwrestler•1h ago
Copying my comment from a previous discussion of ignoring robots.txt, below. I actually don’t care if someone ignores my robots.txt, as long as their crawler is well run. But the smug attitude is annoying when so many crawlers are not.

————

We have a faceted search that creates billions of unique URLs by combinations of the facets. As such, we block all crawlers from it in robots.txt, which saves us AND them from a bunch of pointless indexing load. But a stealth bot has been crawling all these URLs for weeks. Thus wasting a shitload of our resources AND a shitload of their resources too. Whoever it is, they thought they were being so clever by ignoring our robots.txt. Instead they have been wasting money for weeks. Our block was there for a reason.
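A hedged sketch of what such a block might look like; the paths and parameter names here are invented, and the * wildcard is an extension honored by the major search engines rather than part of the original 1994 convention:

    User-agent: *
    # Keep crawlers out of the effectively infinite facet permutations.
    Disallow: /search
    Disallow: /*?*color=
    Disallow: /*?*price=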

paulddraper•1h ago
Yes, this has been the traditional reason for robots.txt -- protects the bot as much as it does the site.
nonethewiser•58m ago
Wasting your money too right?

I guess another angle on this is putting trust in people to comply with ROBOTS.txt. There is no guarantee so we should probably design with the assumption that our sites will be crawled however people want.

Also, I'm curious about your use case.

>We have a faceted search that creates billions of unique URLs by combinations of the facets.

Are we talking about a search that has filters (to use e-commerce as an example) like brand, price range, color, etc., where all these combinations make up a URL (hence billions)? How does a crawler discover these? Are they just designed to detect all these filters and try all combinations? That doesn't really jibe with my understanding of crawlers, but otherwise I don't know how it would be generating billions of unique URLs. I guess maybe they could also be included in sitemaps, but I doubt that.

freedomben•34m ago
I don't know anything about your specific use case, so take this with a grain of salt, but I've experienced this as well, and after digging in it is usually vulnerability scanning.
chao-•29m ago
I have experienced the same situation with facet-like pages in the past. Links leading to (for example) the same product with a different color pre-selected on page load. Or, within a listing of a category's products, a link might change the sort from price descending to price ascending. All else equal, even the crawlers don't want to re-index the same page in eighty different ways if they can avoid it. They simply don't know better, and have decided to ignore our attempt to teach them (robots.txt).

In the past, we've used this behavior as a signal to identify and block bad bots. These days, they will try again from 2000 separate residential IPs before they give up. But there was a long time when egregious duplicate page views of these faceted pages (against the advice of robots.txt) made detecting certain bad bots much easier.

spaceport•1h ago
Renaming my robots.txt to reeeebots.txt and writing a justification line by line on why XYZ shouldn't be archived is now on my todo. Along with adding a tarpit.
gmuslera•1h ago
robots.txt assumes well-meaning major players that respect the site's intentions: crawlers that try to index or mirror sites while avoiding overwhelming them and accessing only what is supposed to be freely accessible. Using a visible user agent, scanning from a clearly defined IP block, and following a predictable method of scanning all go in the same direction of cooperating with the site owner: the site gets visibility while its functionality isn't affected (much).

But that doesn't mean there aren't bad players that ignore robots.txt, send random user-agent strings, or connect from IPs all over the world to avoid being blocked.

LLMs have changed the landscape a bit, mostly because far more players want to grab everything, or have automated tools searching your information for specific requests. But that doesn't mean well-behaved players no longer exist.

btilly•45m ago
Whatever we think of archive.org's position, modern AI companies have clearly taken the same basic position. And are willing to devote a lot more resources to vacuuming up the internet than crawlers did back in 2011.

See https://news.ycombinator.com/item?id=43476337 for a random example of a discussion about this.

My personal position is that robots.txt is useless when faced with companies who have no sense of shame about abusing the resources of others. And if it is useless, there isn't much of a point in having it. Just make sure that nothing public facing is going to be too expensive for your server. But that's like saying that the solution to thieves is to not carry money around. Yes, it is a reasonable precaution. But it doesn't leave me feeling any better about the thieves.

Bender•32m ago
Any time I think about robots.txt, I think about a quote from Pirates of the Caribbean. [1] "The only rules that really matter are these: what a man can do and what a man can't do." except that I replace man with bot. Everything should be designed to handle pirates, given the hostile nature of the internet.

To me, robots.txt is a friendly way to say, "Hey bots, this is what I allow. Stay in these lanes, including crawl-delay, and I won't block you." Step outside and I can put you on an exercise wheel. I know very few bots support crawl-delay, but that is not my problem. Blocking bots or making them waste a lot of cycles or get dummy data or wildly reordering packets or adding random packet loss or slowing them to 2KB/s is more fun for me than playing Doom.

[1] - https://www.youtube.com/watch?v=B4zwh26kP8o [video][2 mins]
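As a sketch of those "lanes" (the path is an example; Crawl-delay is non-standard, honored by Bing and Yandex but ignored by Google):

    User-agent: *
    Crawl-delay: 10
    Disallow: /private/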

karaterobot•26m ago
> Precisely one reason comes to mind to have ROBOTS.TXT, and it is, incidentally, stupid - to prevent robots from triggering processes on the website that should not be run automatically

Counter-point: I have a blog that I don't want to appear on search engines because it has private stuff on it. 25 years ago I added two lines to its robots.txt file, and I've never seen it show up on any search engine since.

I'm not pretending nobody has indexed my blog and kept a copy of the results. I'm just saying the blog I started in college doesn't show up when you search for my name on Google, which is all I care about.
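Those two lines were presumably the classic blanket exclusion:

    User-agent: *
    Disallow: /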