Enshittification strikes again.
And it doesn't appear to have any means to rid itself of the bad apples. A sad situation all around.
For example, a huge fraction of the world's spam originates from Russia, India and Bangladesh. And we know that a lot of the romance scams are perpetrated by Chinese gangs operating out of quasi-lawless parts of Myanmar. Not so much from, say, Switzerland.
For that reason, and because of limited English proficiency, Russian netizens rarely visit foreign resources these days, except for a few platforms without a good Russian replacement, like Instagram and YouTube (both banned, btw; accessible only via VPN), where they mostly stay within their Russian-speaking communities. I'm not sure why any of them would be the reason the Internet as a whole has supposedly become low-trust. The OP in question is some SEO company using an LLM to churn out sites with "unique content." We already had this stuff 20 years ago, except the "unique content" was generated by scripts that replaced words with synonyms. Nothing really new here.
The Chinese have their own internet anyway; it was a shock to me at first just how little the average Chinese citizen really cares about Western culture or society. They have their own problems, of course, but it has nothing to do with us.
No, it's the tens of billions of mostly American capital going into AI data centers and large bullshit models.
"A report by the Global Initiative on Transnational Organised Crime (based on United States Institute of Peace findings) estimated that revenues from “pig-butchering” cyber scams in Laos were around US $10.9 billion, which would be *equivalent to more than two-thirds (≈67–70 %) of formal Lao GDP in a recent year."
https://globalinitiative.net/wp-content/uploads/2025/05/GI-T...
The difference is that historically there wasn't much to be gained by annoying or misleading people on the internet, so trolling was mainly motivated by personal satisfaction. Two things have changed since then: (1) most people now use the internet as their primary information source, and (2) the cost of creating bullshit has fallen precipitously.
The motivation for posting content online has changed over the last 20 years, from people wanting to share things they're interested in to a model where the primary goal is to collect eyeballs and make a profit in some way.
I'd normally be the first to agree with and push your point about language evolving, but it's not time to apply that to a neologism this young.
Isn't that what's driving the pollution of the Internet by LLMs?
> Enshittification, also known as crapification and platform decay, is a process in which two-sided online products and services decline in quality over time. Initially, vendors create high-quality offerings to attract users, then they degrade those offerings to better serve business customers, and finally degrade their services to both users and business customers to maximize short-term profits for shareholders.
Also see https://en.wikipedia.org/wiki/Enshittification#Impact, which discusses the broadening usage of the term.
It literally started meaning that within hours of being coined and first posted to HN. Sorry, that's just how language works. Enshittification got enshittified. Deal with it and move on.
There's been a huge uptick in this sort of brigade-like behavior around current events. I first noted it around LK-99, that failed room-temperature superconductor claim in 2023, but it just keeps happening.
Used to be we only saw it around elections and crypto pump-and-dumps; now it's cropping up in the weirdest places.
I believe the misinformation comes largely from self-interested parties: politicians and influencers pushing agendas, plus engagement/attention farming for advertising revenue, all largely indifferent to truth.
Yes, there was: becoming the primary contributor by volume to Scots Wikipedia (which probably doesn't have many contributors to begin with, but there you are). Some people just have to have attention, no matter how.
Great piece btw
What we have here is worse; LLMs give you bullshit. A bullshitter does not care whether something is true or false; they just use rhetoric to convince you of something.
I am far from being someone nostalgic about the old internet, or the world in general back then. Things in many ways sucked back then; we just tend to forget how exactly they sucked. But honestly, an LLM-driven internet is mostly pointless. If what I am to read online is AI-generated crap, why bother reading it on websites instead of just reading it straight from a chatbot?
My understanding is that people tend to cooperate in smaller numbers or when reputation is persistent (the larger the group, the more reliable reputation has to be), otherwise the (uncommon) low-trust actors ruin everything.
Most humans are altruistic and trusting by default, but a large enough group will have a few sociopaths and misunderstood interactions, which creates distrust across the entire group, because people hate being taken advantage of.
... towards an in-group, yes. Not towards out-groups, as far as I can tell.
Though for some reason this tends not to apply to solo travellers in many, many parts of the world.
Lots of debate, yes, but very little about the basic fact that Hardin's formulation of "the tragedy of the commons" doesn't describe actual historical events in pretty much any well-documented case.
1. Don't believe everything or anything you read or see on the Internet.
2. Never share personal information about yourself online.
3. Every man is a man, every woman is a man, and every teenager is an FBI agent.
I have yet to find a problem with the Internet that isn't caused by breaking one of the above rules.
My point being you couldn't ever trust the Internet before anyways.
3a. ... and nobody knows if you're a dog.
Now you can collate a list of thousands of titles and simply instruct an LLM to produce garbage for each one and publish it on the internet. This is a real change, IMO.
The open internet has been going downhill for a while, but LLMs are absolutely accelerating its demise. I was in denial for the last few years, but at this point I've accepted that the internet I grew up on as a kid in the late 90s to mid-2000s is dead. I am grateful for having experienced it, but the time has come to move on.
The future for people who valued what the early internet provided is, in my opinion, local, trusted networks. It's sad that we need to retreat into exclusionary circles, but there are too many people interested in making a buck in the race to the bottom.
Jokes aside, probably 10-20% of my browsing is related to local things, up to the country scale: from finding local restaurants or businesses, to finding out about relevant laws or regulations, news, etc. That's not negligible.
I love the idea.
It's also interesting in that a local mesh doesn't necessarily need to operate using the TCP/IP/HTTP stack that has been compromised at every layer by advertising and privacy intrusions.
Jumping to an invite only network isn't the most ridiculous idea imo.
AI slop thrives in anonymity. In a community that's developed its own established norms and people who know each other, AI content trying to be passed off as genuine stands out like a sore thumb and is easily eradicated before it gets a chance to take root.
It doesn't have to be invite-only, per se, but it needs to have its own flavor that newcomers can adapt to, and AI slop doesn't.
...and not on Hacker News. Too many pseudo-anonymous jerks, too many throwaways, too much faith placed in gamified moderation tools.
You can still make that overlay network geofenced and vetted. Heck, running it over a local ISP's last mile would probably yield wonderful latency.
We need vetted webrings on the existing Internet, not a new Internet.
Everyone serving a website is being DDoSed by AI agents right now.
A local mesh network is one way to make sure that no one with a terabit network can index you.
Email in profile (deref a few times)
Perhaps AI-Skynet will not win, but they have a lot of money. I think we need to defund those big corporations that push AI onto everyone and worsen our lives.
On the internet no one knows if you're a dog, human or a moltbot.
My question is: why? Is it really worth the ad revenue to trick a few people looking into a few niche topics? Say you pick the top 5000 trending movies/music/games and generate fake content covering the gamut. What is the payback period?
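For scale, a rough back-of-envelope, where every number below is my own illustrative assumption rather than anything measured:

    # Back-of-envelope slop economics. Every constant is an assumed,
    # illustrative figure, not measured data.
    PAGES = 5_000                  # hypothetical catalogue of trending titles
    COST_PER_PAGE_USD = 0.005      # assumed LLM cost to generate one page
    AD_RPM_USD = 1.50              # assumed ad revenue per 1,000 pageviews
    VIEWS_PER_PAGE_MONTHLY = 50    # assumed long-tail search traffic per page

    upfront_cost = PAGES * COST_PER_PAGE_USD
    monthly_revenue = PAGES * VIEWS_PER_PAGE_MONTHLY / 1_000 * AD_RPM_USD

    print(f"Upfront cost:    ${upfront_cost:,.2f}")
    print(f"Monthly revenue: ${monthly_revenue:,.2f}")
    print(f"Payback period:  {upfront_cost / monthly_revenue * 30:.1f} days")

Under those made-up numbers, the $25 of generation cost pays back in a couple of days. The niche doesn't have to be lucrative, only cheap to flood.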
Google did all the innovation it needed to and ever is going to. It needed to be broken up a decade ago. We can still do it now. Though I don't know how much it will save, especially if we don't also go after Apple, and Meta, and Microsoft.
AI needs to be kept up to date with training data. But that same training data is now poisoned with AI hallucinations. Labelling AI-generated media helps reduce the amount of AI poison in the training set and keeps the AI more useful.
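As a minimal sketch of what that filtering could look like, assuming a JSONL corpus where each record carries a hypothetical `ai_generated` boolean label (the field name and record format are my invention, not any standard):

    import json

    def human_documents(corpus_path):
        """Yield only documents explicitly labelled as human-written.

        Assumes each JSONL record has a hypothetical 'ai_generated'
        boolean field; unlabelled records are skipped conservatively
        instead of being trusted.
        """
        with open(corpus_path, encoding="utf-8") as f:
            for line in f:
                doc = json.loads(line)
                if doc.get("ai_generated") is False:
                    yield doc["text"]

The conservative default matters: once labels go missing or dishonest, the filter degrades right back to training on poison.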
It also simply undermines the quality of search, both for human users and for AI tool use.
SEO is a slippery slope on both sides because a little bit is good for everyone: Google wanted pages it could easily extract meaning from, publishers wanted traffic, and users wanted relevant search results. Now there's a prisoner's dilemma where, once someone starts abusing SEO, it's a race to the bottom.
I reject this emphatically. Google should never have been in the business of shaping internet content. Perhaps they should have even gone out of their way to avoid doing so. Without Google (or a better-performing competitor) acquiescing to the game, there is no SEO market.
There's nothing anyone can do about it. No matter how many guidelines dang deploys, no matter how much negative social pressure we apply (and we could apply much more, but doing so would just run afoul of the guidelines' tone policing), people will use AI because they want to, and because it's part of their identity politics, specifically to spite people who don't want to see it. They currently bother to mention when they use ChatGPT for a comment. It's just a matter of time until people don't even bother, because it's so normalized.
The Fediverse is currently good, the culture there is rabidly anti-capitalist and anti-AI. I like Mastodon. But that will eventually, inevitably get ruined as well, and we'll just have to move on to the next thing.
If I were to be honest, going to where the fish aren't is also going to help. Almost certainly there are very few LLM generated websites on the Gemini protocol.
I'm setting up a secondary archiver myself that will record only the parts of the web that consent to it via robots.txt. Let's see how far I get.
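A minimal sketch of that consent check using Python's standard-library robots.txt parser; the user-agent string is a placeholder, and note that robots.txt is opt-out rather than opt-in, so "consent" here really means "not disallowed":

    import urllib.robotparser
    from urllib.parse import urlparse

    USER_AGENT = "consenting-archiver"  # placeholder bot name

    def may_archive(url):
        """Return True only if the site's robots.txt permits this bot."""
        parts = urlparse(url)
        rp = urllib.robotparser.RobotFileParser()
        rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
        try:
            rp.read()
        except OSError:
            return False  # unreachable robots.txt: treat as no consent
        return rp.can_fetch(USER_AGENT, url)

    if __name__ == "__main__":
        print(may_archive("https://example.com/some/page"))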
> The commons of the internet are probably already lost
That depends. If people don't push back against AI then yes. Skynet would have won without the rebel forces. And the rebels are there - just lurking. It needs a critical threshold of anger before they will push back against the AI-Skynet 3.0 slop.
And at that point does it even matter? Zuckerberg wins.
Previously you might get burned with some bad information or incorrect data or get taken in by a clever hoax once in a while.
Now you get overwhelmed by regurgitation, which itself gets fed back into the machine.
The ratio of people to bots reading has crashed to near zero.
We have burned the web.