Front page

Wttr: Console-oriented weather forecast service

https://github.com/chubin/wttr.in
86•saikatsg•3h ago•36 comments

“Reading Rainbow” was created to combat summer reading slumps

https://www.smithsonianmag.com/smithsonian-institution/to-combat-summer-reading-slumps-this-timeless-childrens-television-show-tried-to-bridge-the-literacy-gap-with-the-magic-of-stories-180986984/
179•arbesman•9h ago•69 comments

ESA's Moonlight programme: Pioneering the path for lunar exploration

https://www.esa.int/Applications/Connectivity_and_Secure_Communications/ESA_s_Moonlight_programme_Pioneering_the_path_for_lunar_exploration
8•nullhole•2d ago•0 comments

Ex-Waymo engineers launch Bedrock Robotics to automate construction

https://techcrunch.com/2025/07/16/ex-waymo-engineers-launch-bedrock-robotics-with-80m-to-automate-construction/
335•boulos•16h ago•251 comments

Code Execution Through Email: How I Used Claude to Hack Itself

https://www.pynt.io/blog/llm-security-blogs/code-execution-through-email-how-i-used-claude-mcp-to-hack-itself
40•nonvibecoding•3h ago•18 comments

I want an iPhone Mini-sized Android phone (2022)

https://smallandroidphone.com/
262•asimops•12h ago•358 comments

Original Xbox Hacks: The A20 CPU Gate (2021)

https://connortumbleson.com/2021/07/19/the-xbox-and-a20-line/
59•mattweinberg•6h ago•9 comments

Altermagnets: The first new type of magnet in nearly a century

https://www.newscientist.com/article/2487013-weve-discovered-a-new-kind-of-magnetism-what-can-we-do-with-it/
345•Brajeshwar•18h ago•89 comments

I was wrong about robots.txt

https://evgeniipendragon.com/posts/i-was-wrong-about-robots-txt/
91•EPendragon•9h ago•78 comments

Metaflow: Build, Manage and Deploy AI/ML Systems

https://github.com/Netflix/metaflow
36•plokker•13h ago•2 comments

Inside the box: Everything I did with an Arduino starter kit

https://lopespm.com/hardware/2025/07/15/arduino.html
77•lopespm•2d ago•6 comments

Show HN: A 'Choose Your Own Adventure' written in Emacs Org Mode

https://tendollaradventure.com/sample/
120•dskhatri•11h ago•15 comments

Pgactive: Postgres active-active replication extension

https://github.com/aws/pgactive
304•ForHackernews•1d ago•76 comments

Intel's retreat is unlike anything it's done before in Oregon

https://www.oregonlive.com/silicon-forest/2025/07/intels-retreat-is-unlike-anything-its-done-before-in-oregon.html
157•cbzbc•14h ago•242 comments

Mistakes Microsoft made in the Xbox security system (2005)

https://xboxdevwiki.net/17_Mistakes_Microsoft_Made_in_the_Xbox_Security_System
60•davikr•9h ago•27 comments

Artisanal handcrafted Git repositories

https://drew.silcock.dev/blog/artisanal-git/
168•drewsberry•14h ago•43 comments

A 1960s schools experiment that created a new alphabet

https://www.theguardian.com/education/2025/jul/06/1960s-schools-experiment-created-new-alphabet-thousands-children-unable-to-spell
51•Hooke•1d ago•50 comments

A bionic knee integrated into tissue can restore natural movement

https://news.mit.edu/2025/bionic-knee-integrated-into-tissue-can-restore-natural-movement-0710
34•gmays•2d ago•1 comment

Open, free, and ignored: the afterlife of Symbian

https://www.theregister.com/2025/07/17/symbian_forgotten_foss_phone_os/
20•mdp2021•1h ago•5 comments

Show HN: Improving search ranking with chess Elo scores

https://www.zeroentropy.dev/blog/improving-rag-with-elo-scores
156•ghita_•19h ago•52 comments

How and where will agents ship software?

https://www.instantdb.com/essays/agents
128•stopachka•16h ago•59 comments

A Rust shaped hole

https://mnvr.in/rust
94•vishnumohandas•1d ago•221 comments

Roman dodecahedron: 12-sided object has baffled archaeologists for centuries

https://www.livescience.com/archaeology/romans/roman-dodecahedron-a-mysterious-12-sided-object-that-has-baffled-archaeologists-for-centuries
67•bookofjoe•2d ago•106 comments

Blue Pencil no. 18–Some history about Arial

https://www.paulshawletterdesign.com/2011/09/blue-pencil-no-18%e2%80%94some-history-about-arial/
35•Bluestein•2d ago•9 comments

Scanned piano rolls database

http://www.pianorollmusic.org/rolldatabase.php
56•bookofjoe•4d ago•14 comments

Show HN: Linux CLI tool to provide mutex locks for long running bash ops

https://github.com/bigattichouse/waitlock
30•bigattichouse•4h ago•13 comments

Show HN: 0xDEAD//TYPE – A fast-paced typing shooter with retro vibes

https://0xdeadtype.theden.sh/
90•theden•4d ago•23 comments

Chain of thought monitorability: A new and fragile opportunity for AI safety

https://arxiv.org/abs/2507.11473
119•mfiguiere•19h ago•55 comments

Signs of autism could be encoded in the way you walk

https://www.sciencealert.com/signs-of-autism-could-be-encoded-in-the-way-you-walk
132•amichail•15h ago•172 comments

Task Runner Census 2025

https://aleyan.com/blog/2025-task-runners-census/
11•aleyan•2d ago•2 comments

I was wrong about robots.txt

https://evgeniipendragon.com/posts/i-was-wrong-about-robots-txt/
91•EPendragon•9h ago

Comments

orionblastar•7h ago
This is a mistake that many websites make: they try to block all robots, and then the robots that serve their blog posts to users can't function anymore.
xnx•7h ago
Agree. If you don't want it out there, put it in your journal or require a login.
reaperducer•7h ago
Not every web site is a blog. Not every web site can be legally put behind a login.
dylan604•6h ago
What kind of information legally cannot be put behind a login?
therein•6h ago
Maybe he is talking about stuff you're required by law to disclose but you don't really want to be seen too much. Like code of conduct, terms of service, retractions or public apologies.
PeterStuer•4h ago
Worst offenders I come across: official government information that needs to be public, placed behind Cloudflare, preventing even their M2M feeds (RSS, Atom, ...) from being accessed.
bryanhogan•5h ago
Yes, there's often not much reason to block bots that abide by the rules. It just makes your site not show up in other search indexes and introduces problems for users. Malicious bots won't care about your robots.txt anyway.
happymellon•4h ago
Most bots don't serve the blog post to users.
PaulKeeble•7h ago
The problem isn't the robots that do follow robots.txt; it's all the bots that don't. robots.txt is largely irrelevant now: the compliant bots don't represent most of the traffic problem. They certainly don't represent the bots that are going to hammer your site without any regard; those bots don't follow robots.txt.
anonnon•6h ago
Not sure why you were downvoted. I have zero confidence that OpenAI, Anthropic, and the rest respect robots.txt, however much they insist they do. It's also clear that they're laundering their traffic through residential ISP IP addresses to make detection harder. There are plenty of third parties advertising the service, and farming it out affords the AI companies some degree of plausible deniability.
chneu•1h ago
Nobody has any confidence that AI crawlers won't DDoS sites. That's why there have been dozens of posts about how bandwidth is becoming an issue for many websites as bots continuously crawl their sites for new info.

Wikipedia has backups for this reason. AI companies ignore the readily available backups and instead crawl every page hundreds of times a day.

I think debian also recently spoke up about it.

zarzavat•4h ago
That's what honeypots are for.

Disallow: /honeypot in your robots.txt

Add <a href="/honeypot" style="display:none" aria-hidden="true">ban me</a> to your index.html

If an IP accesses that path, ban it.
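
A minimal sketch of that last step in Python (not the commenter's code; the log path is hypothetical, and it assumes a combined-format nginx/Apache access log with bans applied via iptables, which requires root):

import re
import subprocess

LOG_PATH = "/var/log/nginx/access.log"  # hypothetical location
HONEYPOT = "/honeypot"

# Combined log format: the client IP is the first field; the request
# path follows the HTTP method inside the quoted request line.
line_re = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST|HEAD) (\S+)')

banned = set()
with open(LOG_PATH) as log:
    for line in log:
        match = line_re.match(line)
        if not match:
            continue
        ip, path = match.groups()
        if path.startswith(HONEYPOT) and ip not in banned:
            banned.add(ip)
            # Drop all further traffic from this IP.
            subprocess.run(
                ["iptables", "-A", "INPUT", "-s", ip, "-j", "DROP"],
                check=False,
            )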

kwar13•2h ago
I like this. Adding now. Thanks!
jpc0•1h ago
> <a href="/honeypot" style="display:none" aria-hidden="true">ban me</a>

Unrelated meta question, but is the aria attribute necessary, since display: none; should be removing the content from the flow?

s-mon•7h ago
Having worked on bot detection in the past: some really simple, old-fashioned attacks happened by doing the opposite of what the robots.txt file says.

While I doubt it does much today, that file really only matters to those who want to play by the rules, which, on the free web, isn't much of the web anymore, I'm afraid.

dumbfounder•7h ago
I created a search engine that crawled the web way back in 2003. I used a proper user agent that included my email address. I got SO many angry emails about my crawler, which played as nice as I was able to make it play. Which was pretty nice, I believe. If it wasn't Google, people didn't want it. That's a good way to prevent anyone from ever competing with Google. It isn't just about that preview for LinkedIn; it's about making sure the web is accessible by everyone and everything that is trying to make its way. Sure, block the malicious ones. But don't just assume that every bot is malicious by default.
tomrod•7h ago
> But don’t just assume that every bot is malicious by default.

I'll bite. It seems like a poor strategy to trust by default.

ronsor•6h ago
I'll bite harder. That's how the public Internet works. If you don't trust clients at all, serve them a login page instead of content.
__loam•6h ago
It sucks that we're living in a landscape where bad actors take advantage of that way of doing things.
sltkr•6h ago
The really bad actors are going to ignore robots.txt entirely. You might as well be nice to the crawlers that respect robots.txt.
PeterStuer•4h ago
Even if you want to play nice, robots.txt is a catch-22, as accessing it is taken as a signal that you are a 'bot' by misconfigured anti-bot 'solutions'.
chasebank•5h ago
Bad actors will always exploit whatever systems are available to them. Always have, always will.
KTibow•5h ago
It sucks more that Cloudflare/similar have responded to this with "if your handshake fingerprints more like curl than like Chrome/Firefox, no access for you".
edoceo•5h ago
Or getting a CAPTCHA in Chrome when visiting a site you've been to dozens of times (Stack Overflow). Now I just skip that content; it's probably in my LLM already anyway.
realusername•4h ago
It's the same thing as the anti-piracy ads: you only annoy legitimate customers. This aggressive CAPTCHA campaign just makes Stack Overflow decline even faster than it normally would by making it lower quality.
codingminds•4h ago
Keep in mind that those LLMs are one of the bigger reasons we see more and more anti-bot behaviour on sites like SO.

The aggressive crawling to train them on everything is insane.

Perz1val•1h ago
Because if they play by the rules, they won't be bad actors
tickettotranai•5h ago
In fairness this appears to be the direction we are headed anyway
TylerE•6h ago
That's easy to say when it's your bot, but I've been on the other side enough to know that the problem isn't your bot, it's the 9,000 other ones just like it, none of which will deliver traffic anywhere close to the resources consumed by scraping.
kijin•4h ago
True. Major search engines and bots from social networks have a clear value proposition: in exchange for consuming my resources, they help drive human traffic to my site. GPTBot et al. will probably do the same, as more people use AI to replace search.

A random scraper, on the other hand, just racks up my AWS bill and contributes nothing in return. You'd have to be very, very convincing in your bot description (yes, I do check out the link in the user-agent string to see what the bot claims to be for) in order to justify using other people's resources on a large scale and not giving anything back.

An open web that is accessible to all sounds great, but that ideal only holds between consenting adults. Not parasites.

NackerHughes•3h ago
> GPTBot et al. will probably do the same, as more people use AI to replace search.

It really won’t. It will steal your website’s content and regurgitate it back out in a mangled form to any lazy prompt that gets prodded into it. GPT bots are a perfect example of the parasites you speak of that have destroyed any possibility of an open web.

komali2•2h ago
I'm confused about why scraping is so resource-intensive: it hits every URL your site serves? For an individual e-commerce site, that's maybe 10,000 hits?
xnorswap•1h ago
And it's the thousands of other bots also hitting those URLs; together that's far more than the legitimate traffic for many sites.
TylerE•48m ago
Yeah, there were times, even running a fairly busy site, that the bots would outnumber user traffic 10:1 or more, and the bots loved to endlessly trawl through things like archive indexes that could be computationally (db) expensive. At one point it got so bad that I got permission to just blackhole all of .cn and .ru, since of course none of those bots even thought of half obeying robots.txt. That literally cut CPU load on the database server by more than half.
Jach•4h ago
I guess back in 2003 people would expect an email to actually go somewhere; these days I would expect it to either go nowhere or just be part of a campaign to collect server-admin emails for marketing/phishing purposes. Angry emails are always a bit much, but I wonder if they aren't sent as much anymore in general, or if people just stopped posting them to point and laugh at while wondering what goes through people's minds to get so upset as to send such emails.

My somewhat silly take on seeing a bunch of information like emails in a user agent string is that I don't want to know about your stupid bot. Just crawl my site with a normal user agent, and if there's a problem I'll block you based on that problem. It's usually not a permanent block, and it's also usually set up with something like fail2ban, so it's not usually an instant request drop. If you want to identify yourself as a bot, fine, but take a hint from Googlebot and keep the user agent short, with just your identifier and an optional short URL. Lots of bots respect this convention.

But I'm just now reminded of some "Palo Alto Networks" company that started dumping their garbage junk in my logs. They have the audacity to include messages in the user agent like "If you would like to be excluded from our scans, please send IP addresses/domains to: scaninfo@paloaltonetworks.com" or "find out more about our scans in [link]". I put a rule in fail2ban to see if they'd take a hint (how about your dumb bot detects that it's blocked and stops/slows of its own accord?) but I forgot about it until now; it seems they're still active. We'll see if they stop after being served nothing but zip bombs for a while before I just drop every request with that UA. It's not that I mind the scans, I'd just prefer to not even know they exist.
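
(For reference, the convention mentioned above: Googlebot identifies itself with a user agent along the lines of "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)", just a short identifier plus a URL.)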

mytailorisrich•2h ago
It's just that people are suspicious of unknown crawlers, and rightly so.

Since it is impossible to know a priori which crawlers are malicious, and many are, it is reasonable to default to treating anything unknown as malicious.

knorker•59m ago
The most annoying thing about being a good bot owner, in my experience, is when you get complaints about it misbehaving, only to find that it was actually somebody malicious who wrote their own abusive bot, but is using your bot's user agent.
Falkon1313•6h ago
This is kinda amusing.

robots.txt's main purpose back in the day was curtailing search-engine penalties when you got stuck maintaining a badly built dynamic site that had tons of dynamic links and effectively got penalized for duplicate content. It was basically a way of saying "Hey search engines, these are the canonical URLs, ignore all the other ones with query parameters or whatever that give almost the same result."

It could also help keep 'nice' crawlers from getting stuck crawling an infinite number of pages on those sites.

Of course it never did anything for the 'bad' crawlers that would hammer your site! (And there were a lot of them, even back then.) That's what IP bans and such were for. You certainly wouldn't base it on something like User-Agent, which the user agent itself controlled! And you wouldn't expect the bad bots to play nicely just because you asked them.

That's about as naive as the Do-Not-Track header, which was basically kindly asking companies whose entire business is tracking people to just not do that thing that they got paid for.

Or the Evil Bit proposal, to suggest that malware should identify itself in the headers. "The Request for Comments recommended that the last remaining unused bit, the "Reserved Bit" in the IPv4 packet header, be used to indicate whether a packet had been sent with malicious intent, thus making computer security engineering an easy problem – simply ignore any messages with the evil bit set and trust the rest."

pi_22by7•5h ago
So it did the same work that a sitemap does? Interesting.

Or maybe more like the opposite: robots.txt told bots what not to touch, while sitemaps point them to what should be indexed. I didn’t realize its original purpose was to manage duplicate content penalties though. That adds a lot of historical context to how we think about SEO controls today.

JimDabell•4h ago
> I didn’t realize its original purpose was to manage duplicate content penalties though.

That wasn’t its original purpose. It’s true that you didn’t want crawlers to read duplicate content, but it wasn’t because search engines penalised you for it – WWW search engines had only just been invented and they didn’t penalise duplicate content. It was mostly about stopping crawlers from unnecessarily consuming server resources. This is what the RFC from 1994 says:

> In 1993 and 1994 there have been occasions where robots have visited WWW servers where they weren't welcome for various reasons. Sometimes these reasons were robot specific, e.g. certain robots swamped servers with rapid-fire requests, or retrieved the same files repeatedly. In other situations robots traversed parts of WWW servers that weren't suitable, e.g. very deep virtual trees, duplicated information, temporary information, or cgi-scripts with side-effects (such as voting).

— https://www.robotstxt.org/orig.html

Quarrel•2h ago
> It was mostly about stopping crawlers from unnecessarily consuming server resources.

Very much so.

Computation was still expensive, and HTTP servers were bad at running CGI scripts (particularly compared to the streamlined, amazing things they can be today).

SEO considerations came way way later.

They were also used, and still are, by sites that have good reasons to not want results in search engines. Lots of court files and transcripts, for instance, are hidden behind robots.txt.

MiddleMan5•5h ago
It should be noted here that the Evil Bit proposal was an April Fools' RFC: https://datatracker.ietf.org/doc/html/rfc3514
tbrownaw•4h ago
> And you wouldn't expect the bad bots to play nicely just because you asked them.

Well, yes, the point is to tell the bots what you've decided to consider "bad" and will ban them for. So that they can avoid doing that.

Which of course only works to the degree that they're basically honest about who they are or at least incompetent at disguising themselves.

gbalduzzi•3h ago
I think it depends on the definition of bad.

I always consider "good" a bot that doesn't disguise itself and follows the robots.txt rules. I may not consider the bot's ultimate intent, or the company behind it, good, but the crawler's behaviour is fundamentally good.

Especially considering that it is super easy to disguise a crawler and ignore the robots conventions.

atoav•2h ago
Well, you, as the person running a website, can unilaterally define what you consider good and bad. You may want bots to crawl everything, nothing, or (most likely) something in between. Then you judge bots against those guidelines. Like a solicitor who rings a doorbell right below a sign saying "No solicitors": certain assumptions can be made about those who ignore it.
pjmlp•1h ago
Some people just believe that because someone says so, everyone will nicely obey and follow the rules. I don't know, maybe it is a cultural thing.
vintagedave•1h ago
Or a positive belief in human nature.

I admit I'm one of those people. After decades that should perhaps have made me a bit more cynical, from time to time I am still shocked or saddened when I see people do things that benefit themselves over others.

But I kinda like having this attitude and expectation. Makes me feel healthier.

tuyiown•1h ago
I deeply agree with you, and I'd like to add:

Trust by default, but also, by default, never ignore suspicious signals.

Trusting is not being naïve; I find the conflation of the two very worrying.

Sammi•21m ago
You don't have to go as far as straight-up "trust by default". You can instead "give a chance" by default, which is the middle path.

Actually, Veritasium has a great video about this. It's been shown to be the most effective strategy in Monte Carlo simulations.

EDIT: This one: https://youtu.be/mScpHTIi-kM

0xEF•1h ago
It's easy to believe, though, and most of us do it every day. For example, our commute to work is marked by the trust that other drivers will cooperate, following the rules, so that we all get to where we are going.

There are varying degrees of this through our lives, where the trust lies not in the fact that people will just follow the rules because they are rules, but because the rules set expectations, allowing everyone to (more or less) know what's going on and decide accordingly. This also makes it easier to single out the people who do not think the rules apply to them so we can avoid trusting them (and, probably, avoid them in general).

pjmlp•53m ago
In Southern Europe, and in countries with similar cultures, we don't obey rules just because someone says so; we obey them when we see that it is actually reasonable to do so. Hence my remark regarding culture, as I have also experienced living in countries where everyone mostly follows the rules blindly, even when they happen to be nonsense.

Naturally I am talking about cultures where that decision has not been taken away from their citizens.

pmezard•9m ago
As a Frenchman, being passed on the right by Italian drivers on the highway really makes me feel the superiority of Southern European judgment over my puny habit of blindly following rules. Or does it?

But yes, I do the same. I just do not come here to pretend it is a virtue.

pjmlp•5m ago
Not sure if it is a virtue, but standing in an empty street at 3 AM waiting for a traffic light to turn green doesn't make much sense either.

It should be a matter of judgement, not of following rules just because.

nullc•1h ago
> That's about as naive as the Do-Not-Track header, which was basically kindly asking companies whose entire business is tracking people to just not do that thing that they got paid for.

It's usually a bad default to assume incompetence on the part of others, especially when many experienced and knowledgeable people have to be involved to make a thing happen.

The idea behind the DNT header was to back it up with legislation. Sure, you can't catch and prosecute all tracking, but there are limits to the scale of criminal move-fast-and-break-things before someone rats you out. :P

franga2000•26m ago
I still see the value in robots.txt and DNT as a clear, standardised way of posting a "don't do this" sign that companies could be forced to respect through legal means.

The GDPR requires consent for tracking. DNT is a very clear "I do not consent" statement. It's a very widely known standard in the industry. It would therefore make sense for a court to eventually find that companies not respecting it are in breach of the GDPR.

That was a theory at least...

ceautery•5h ago
LinkedIn is by far the worst offender in post previews. The doctype tag must be all lowercase. The HTML document must be well-formed (the meta tags must be in an explicit <head> block, for example). You must have OG meta tags for url, title, type, and image. The url meta tag gets visited, even if it's the same address the inspector is already looking at.

Fortunately, the post inspector helps you suss out what's missing in some cases, but c'mon, man, how much effort should I spend helping a social media site figure out how to render a preview? Once you get it right, and to quote my 13-year-old: "We have arrived, father... but at what cost?"
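
For reference, a minimal head that satisfies the requirements listed above might look like this (a sketch; the URLs and titles are placeholders):

<!doctype html>
<html>
<head>
  <!-- OG tags the inspector expects: url, title, type, image -->
  <meta property="og:url" content="https://example.com/post" />
  <meta property="og:title" content="Post title" />
  <meta property="og:type" content="article" />
  <meta property="og:image" content="https://example.com/preview.jpg" />
</head>
<body>...</body>
</html>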

babuloseo•5h ago
Isn't LinkedIn dead?
kookamamie•4h ago
One can dream.
nxpnsv•4h ago
It can still be more dead
PeterStuer•4h ago
I feel it is morphing into Twitter/Facebook/Instagram more each day.

It used to be this ultra-fake eternal job interview site, but people now seem uninhibited about going on wild political rants even there.

yodon•5h ago
What astounds me is there are no readily available libraries crawler authors can reach for to parse robots.txt and meta robots tags, to decide what is allowed, and to work through the arcane and poorly documented priorities between the two robots lists, including what to do when they disagree, which they often do.

Yes, there's an ancient Google reference parser in C++11 (which is undoubtedly handy for that one guy who is writing crawlers in C++), but not a lot for the much more prevalent Python and JavaScript crawler writers who just want to check whether a path is OK or not.

Even if bot writers WANT to be good, it's much harder than it should be, particularly when lots of the robots info isn't even in the robots.txt files, it's in the index.html meta tags.

JimDabell•4h ago
robots.txt support is built into the Python stdlib as urllib.robotparser: https://docs.python.org/3/library/urllib.robotparser.html
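
For example (a quick sketch using only the stdlib; the user agent string and URLs are placeholders, and this covers robots.txt only, not meta robots tags):

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetches and parses the file

# Check whether a given user agent may fetch a path.
if rp.can_fetch("MyCrawler/1.0", "https://example.com/some/page"):
    print("allowed")
else:
    print("disallowed")

print(rp.crawl_delay("MyCrawler/1.0"))  # Crawl-delay, or None if unset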

rel=nofollow is a bad name. It doesn’t actually forbid following the link and doesn’t serve the same purpose as robots.txt.

The problem it was trying to solve was that spammers would add links to their site anywhere that they could, and this would be treated by Google as the page the links were on endorsing the page they linked to as relevant content. rel=nofollow basically means “we do not endorse this link”. The specification makes this more clear:

> By adding rel="nofollow" to a hyperlink, a page indicates that the destination of that hyperlink should not be afforded any additional weight or ranking by user agents which perform link analysis upon web pages (e.g. search engines).

> nofollow is a bad name […] does not mean the same as robots exclusion standards

— https://microformats.org/wiki/rel-nofollow
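
In markup, that looks like (the URL is a placeholder): <a href="https://example.com/user-submitted-link" rel="nofollow">a link the page does not endorse</a>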

yodon•3h ago
Thanks for this!
codingminds•3h ago
I don't see a reason why a good bot operator couldn't build a parser lib in a different language and put it in a public repo.

Shouldn't be that hard, if someone WANTS to be good.

elric•3h ago
Sure, but it's always easier to use a tool that's been tried and tested.
donohoe•5h ago
I try to stay away from negative takes here, so I’ll keep this as constructive as I can:

It’s surprising to see the author frame what seems like a basic consequence of their actions as some kind of profound realization. I get that personal growth stories can be valuable, but this one reads more like a confession of obliviousness than a reflection with insight.

And then they posted it here for attention.

zem•4h ago
it's mostly that they didn't think of the page preview fetcher as a "crawler", and did not intend for their robots.txt to block it. it may not be profound, but it's at the least not a completely trivial realisation. and heck, an actual human-written blog post can only improve the average quality of the web.
archievillain•25m ago
The bots are called "crawlers" and "spiders", which to me evokes the image of tiny little things moving rapidly and mechanically from one place to another, leaving no niche unexplored. Spiders exploring a vast web.

Objectively, "I give you one (1) URL and you traverse the link to it so you can get some metadata" still counts as crawling, but I think that's not how most people conceptualize the term.

It'd be like telling someone "I spent part of the last year travelling", and when they ask you where you went, you tell them you commuted to and from your workplace five times a week. That's technically travelling, although the other person would naturally expect you to talk about a vacation or a work trip or something to that effect.

spookie•4h ago
They posted it here because they wouldn't appear on Google otherwise (:
coolgoose•53m ago
I agree, and I am also confused about how this got on the front page of all things. It's like reading a news article saying 'water is wet'.

You block things -> of course good actors will respect that and avoid you -> of course bad actors will just ignore it, since it's a "please do not do this" notice, not a firewall actually blocking things.

kookamamie•4h ago
You shouldn't worry about LinkedIn, the cancer of the internet.
acosmism•4h ago
If you are hosting a house party that invites the entire world, robots.txt is a neon sign guiding guests to where the beers are, who's cooking what kind of burgers and on what grill, the rules of the house, etc. You'll still have to lock your gold chains and laptop in a safe somewhere, or decide whether to even keep them in the same house yourself.
dwaite•4h ago
This doesn't seem like a new discovery at all: it's what news publications have been dealing with ever since they went online.

You aren't going to get advertising without also providing value, be that money or information. Google has over $2 trillion in capitalization based primarily on the idea of charging people for additional exposure, beyond what the information on their site would otherwise get.

NackerHughes•3h ago
This article could have been two lines. It takes some serious stretching of school-essay-writing muscles to inflate it to this many pages of waffle.
jarofgreen•3h ago
Hey OP,

1)

You consider this for the LinkedIn site but don't stop to think about other social networks, and this is true of basically all of them. You may not post on Facebook, Bluesky, etc., but other people may like your links and post them there.

I recently ran into this as it turns out the Facebook entries in https://github.com/ai-robots-txt/ai.robots.txt also block the crawler FB uses for link previews.

2)

From your first post,

> But what about being closer to the top of the Google search results - you might ask? One, search engines crawling websites directly is only one variable in getting a higher search engine ranking. References from other websites will also factor into that.

Kinda... It's technically true that you can rank in Google even if you block them in robots.txt, but it's going to take a lot more work. Also, your listing will look worse (last time I saw this there was no site description, but that was a few years back). If you care about Google SEO traffic, you probably want to let them onto your site.

alexey-salmin•1h ago
This reminds me of an old friend of mine who wrote long revelation posts about how he started using the "private" keyword in C++ after the compiler helped him find why a class member was changing unexpectedly, and how he no longer drives a car with the clutch half-pressed because it burns the clutch.
jjcob•1h ago
I really think that most people should not use robots.txt.

If you don't want people to crawl your content, don't put it online.

There are so many consequences of disallowing robots. What about the Internet Archive, for example?

franga2000•19m ago
The problem with robots.txt is its reliance on the identity of bots rather than their purpose.

The author had blocked all bots because they wanted to get rid of AI scrapers. Then they wanted to unblock bots scraping for OpenGraph embeds, so they unblocked... LinkedIn specifically. What if I post a link to their post on Twitter, or on any of the many Mastodon instances? Now they'd have to manually unblock all of those user agents, which they obviously won't, so this creates an even bigger power advantage for the big companies.

What we need is an ability to block "AI training" but allow "search indexing, opengraph, archival".

And of course, we'd need a legal framework to actually enforce this, but that's an entirely different can of worms.
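
For what it's worth, the closest robots.txt gets today is still identity-based: per-agent rules, as in this sketch (GPTBot is OpenAI's training crawler; every other training bot would need its own entry):

User-agent: GPTBot
Disallow: /

User-agent: *
Disallow:

An empty Disallow value permits everything, so search, preview, and archival bots that honor robots.txt remain unaffected.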