Now everything feels the same. Same layout, same font, same clean boxy design. Sites copy each other. AI just made it more obvious, but the soul started slipping away long before that.
Everything felt raw and full of possibility. Even if a lot of it looked the same, it didn’t feel the same. There was this sense of exploring something alive.
Fundamentally, if the goal is to make money, then that’s what will be optimized for, and in this case that goal appears to be in conflict with the formation and maintenance of community. It was just a matter of time.
Motorsports, video games, chatting online, working in a warehouse - all things that are loads more fun to do when someone isn't seeking to eke out more and more marginal gains.
alt.confident.assertion.question.doubt.disagree
;)
Seems like there is plenty of variety, just nobody telling everyone what specifically to listen to.
I remember being 13 years old and completely baffled that people preferred the platform where I had no say over the HTML on my page.
I didn’t understand how people could prefer a boilerplate with profile picture and name over an actual artefact made by the person.
Once they lost all the pre-2016 content, I think that was it. Hard to make a comeback after something like that.
https://www.theguardian.com/technology/2019/mar/18/myspace-l...
I think there could have been a nice middle ground with more "tasteful" customization that would have still left plenty of room for individuality, but nobody built it before Facebook totally took over.
Besides that, there’s Reddit. They’re all vastly different and are essentially discussion boards.
What faded were the obscure or niche ones where discussions simply didn’t invite enough people.
More individuals cultivating personal points of view drastically different from homogenized masses.
That extends way beyond the web though.
This medicine needs to be taken in moderation though, else one can end up reinventing some key wheels instead of speeding forward on these wheels, like https://fliptomato.wordpress.com/2007/03/19/medical-research...
If just a bunch of math wizards and weirdos do it, they'll be seen as isolated and it won't take effect in the dynamics of the web.
I'm talking about everyone doing it.
Perhaps some global law could help - significantly disincentivizing centralization and network effects.
If that was less scary maybe more people would do it!
The place where the web is still great is where you have to be invested to be a real participant. Everyone can yell about politics in a text box on Twitter/FB/Reddit/HN, or post photos to IG/dating sites, or videos to Twitch/YouTube.
If you can host something, even for a small number of people, you're one of the rare few. If you're "into" something where there is a focused community, then you're back in one of those 1% pools where people vibe and participate.
To make an analogy of it: the web is now a tourist town. Everyone is interested in making money off the visitors, with flashy lights and signs luring them into overpriced tourist traps. The locals, the natives, the REAL .01%, know where the cheap places with great food and local flavor are.
Evidently, if you combine content access platform with a hosting platform and make running the latter a requirement for the former, it works out.
If, theoretically, there were a way to resolve a domain name to a specific phone, I can see self-hosted site apps getting popular.
Nowadays there are a few solutions (the phone hosts a site and shows a QR code with its current IP and port, and you can open the site in a browser), but they're mostly for "right here, right now" use. The site goes down the moment the phone changes towers.
The best example of mobile hosting I have found comes from the AmnesiaVPN team. You have to rent a server, but then you just feed the server's IP and password to an app, and from there the app controls the server.
I imagine a future where big VPS companies make apps that turn buying a domain name, renting a server, and hosting and backing up a basic website/forum into something easy. It's an unlikely future, but a fun one.
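The "site goes down when the phone changes towers" problem is essentially what dynamic DNS solves. A minimal sketch of the helper such an app might contain, assuming a hypothetical DDNS provider at `ddns.example.net` (the hostname, endpoint, and token are all made up for illustration): whenever the phone detects a new carrier-assigned address, it re-registers so the domain keeps resolving to the device.

```python
import urllib.parse

def build_ddns_update(api_base: str, hostname: str, ip: str, token: str) -> str:
    # Build the update request a phone would send each time its
    # carrier-assigned address changes, so the domain keeps pointing
    # at the device instead of going dark after a tower handoff.
    query = urllib.parse.urlencode(
        {"hostname": hostname, "myip": ip, "token": token}
    )
    return f"{api_base}/nic/update?{query}"

# Example: the phone notices a new address and re-registers.
url = build_ddns_update(
    "https://ddns.example.net", "myphone.example.com", "203.0.113.7", "s3cret"
)
print(url)
```

In a real app this URL would be fetched on every connectivity change; the sketch only shows the request construction, since actual DDNS providers each have their own API.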
> We care about your privacy. Can we please put a camera in your toilet seat for a personalized experience?
>
> [ ACCEPT ]
Browsing the web is a nightmare these days, I rarely visit "new" websites
> Subscribe to our spam for a 10% off coupon
>
> [ ] [SEND]
It is just a pain to visit any website these days... anyone involved in creating these modern monstrosities should just fire themselves and go on a hike or something.
> We rely on invasive, tracking ads! Please disable your adblocker so we can get 0.00001 USD, please.
>
> [I'VE DISABLED MY FIREWALL AND ANTI-VIRUS] [PAY 999 USD A MONTH FOR AN AD-FREE EXPERIENCE]
Anyway this article is about AI replacing web search, not "killing the web" which I would take as it somehow deleting or overwriting content on existing webpages. Or generating so much spam as to make the web unusable for the average person.
Large sites that can't exist without "traffic" already killed the web a long time ago. A paywall is the proper solution, not ads in content and content in ads. That means you will have lower traffic; it doesn't mean you are being killed. It just means you stopped assaulting the passersby who follow links to your site.
Indeed, an exaggerated title. But we're all supposed to get the idea that the web is really dying, so that we give up working on it. We're supposed to get that idea even though the genie of the web has been out of the bottle for 30+ years. That stuff is going nowhere. The open web is a hindrance to big business. Big business wants to keep the internet infrastructure to push apps, AI and whatnot, but does not want to keep the open web.
The internet doesn't have a clear, simple, micro-payment system that would allow people to reward value, so instead we have an attention based system where the number of likes and followers grants social status and financial opportunity.
In other words, I don't think that AI is killing the web.
It's profit orientation running amok, unleashed. It's a prisoner's dilemma: if you don't do it, someone else will, and you lose. Enshittification is one consequence. The internet experienced it from the beginning, but only about fifteen years ago did companies learn how to squeeze out the last drop, and, as in the tragedy of the commons, everybody is worse off.
And what's most catastrophic? People are confused. They look at the tools but not at the famous people behind these rampages. Of course, as leaders they just optimize the hell out of the internet so that their companies thrive. But in doing so they cause heavy damage.
I find that when people pine for the old web, what they’re really asking for is some way to connect to other people and see things that people have written or made just for fun in a genuine way, without it being performative, derivative or for other motivations.
In theory social media should have been this, but people's constant need to accumulate validation, or their tendency to produce meme-like content, adversely affects the quality of their output, giving it a manufactured feel that rarely seems genuine or true to their human nature. Instead of seeing people's true personalities, you see their "masks".
Thus the issue is not rooted in a technical problem but a cultural one: people no longer naively share things; they share only what fuels their ego in the most polished way.
Or perhaps an Apple or Kagi will host an LLM with no built-in monetization skewing its answers.
Change is a constant on the web. Things were very different in 1995 (plain html, no good search engines), 2005 (no widespread web capable smart phones usage yet, Google, AJAX), 2015 (peak social media and app hype), and 2025 (social media has shifted to new apps and lots of people are disengaging entirely, AI is starting to threaten Google, content aggregators serve most web content).
For 2035, I would predict that AI will drive a need for authenticity. Existing platforms don't provide this because they lack content signatures. We've had the tools to reliably sign content for decades, but we hardly use them except for DRM on content behind paywalls (for commercial reasons). So you can't really tell apart the AI-generated propaganda, marketing, misinformation, etc. from authentic human-created content by individuals you care about. And that might be contributing to people disengaging a bit. But you can see the beginnings of this on platforms like Bluesky and Signal, which push end-to-end encryption and user verification. People might share AI nonsense via these platforms, but they seem to be less about that than, say, X, TikTok or Instagram. We sometimes watermark our images. We don't digitally sign them. Why is that?
Just speculating here, but the web could use a big upgrade and do more than just certify domain name ownership, which is fairly meaningless if the domain is some big network with many millions of users. What about certifying content itself? Reliably tie content to its creator in a way that can't be forged. IMHO this is long overdue, and the related UX challenges are there but solvable in principle. DRM is a prime example of a fairly usable implementation: it just works if you paid for the content. Signed content would make it very challenging to pass off AI gibberish as authentic if it's not signed by a reputable private key. And if that happened anyway, it would damage the reputation of that key. I don't exclude the possibility of reputable AIs emerging. How would you tell those apart from the disreputable ones?
The thing with AI is that it drives down the cost of generating stuff. So the generated stuff starts drowning out the human content by orders of magnitude: 100x, 1000x, or worse. The worse this gets, the more obvious the need to distinguish authentic content from AI slop will become. This will also become a value-add for social networks, because drip-feeding users garbage content has diminishing returns: users disengage and move elsewhere. Meta experienced this first hand with Facebook; they ran it into the ground by allowing the click-bait generators to hijack the platform. The first networks that figure out how to guarantee that users are shown only authentic, quality content they've opted into will gain a lot of eyeballs and users. That's why verified users are such a big feature on different networks now. The next logical step is verified content by a verified user.
And once we have that, you just filter out all the unverified garbage.
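A minimal sketch of the verify-then-filter idea. Note the hedge: a real deployment would use an asymmetric scheme like Ed25519, so anyone could verify without holding the creator's secret; stdlib HMAC stands in here only because it ships with Python, and the key and posts are made up.

```python
import hashlib
import hmac

def sign_content(content: bytes, key: bytes) -> str:
    # Stand-in for a real Ed25519-style signature; HMAC is used here
    # only because it is in the standard library.
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, key: bytes, signature: str) -> bool:
    # Constant-time comparison, as any signature check should use.
    return hmac.compare_digest(sign_content(content, key), signature)

key = b"creator-private-key"            # hypothetical creator key
post = b"I actually wrote this post."
sig = sign_content(post, key)

print(verify_content(post, key, sig))           # authentic content verifies
print(verify_content(b"forged slop", key, sig)) # tampered content fails
```

The "filter out unverified garbage" step is then just dropping anything whose signature doesn't check out against a reputable key.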
Problem #2 - if you aren't the Emperor of Earth or some such, how could you make your ideal web stable over time, in today's world?
So, LLMs do not kill the web; they eat it. We are still almost the sole valid source of data for LLMs.
What really killed the web are social networks as proprietary walled gardens, instead of an open Usenet with a web companion for things worth preserving for posterity or too long/complex for a mere post. What killed the web is that ISPs offer a closed box called a "router" instead of an open home server, even a limited one. With an open version and IPv6, anyone could buy a domain name and publish a blog from their own iron with ready-to-write software, automatic RSS feeds, newsletters, etc. If we gave such a tool to the masses, the original web would come back. But that would mean free speech, and giants/politicians prefer to master public topics through their platforms, hiding the stuff they dislike and pushing the ideas they like...
Go ahead and try to find JLG equipment/service manuals on the open net anymore. I'll wait.
https://www.google.com/search?q=site%3Acsapps.jlg.com+filety...
https://www.google.com/search?q=site:csapps.jlg.com+filetype...
AI isn't cost effective. The investors are going to want their money back very soon due to outside economic influences... they won't get it back, and many of these AI pop-ups are going to fold. The rest are going to scale back and jack up prices.
Nothing stopping us from having our cake and eating it too. OpenAI could fall over, and we would still have all the publicly available models kicking around.
Oh, and the companies themselves are pulling in mountains of debt to build themselves out...
You're likely to see content creators pull their work behind access-controlled spaces (which might actually work out better than the current bargain of it being free but unreadable, recipes buried by long winding stories, etc). You might see the weird web emerge again once search engines are able to discover it under the pile of SEO sludge.
Of course they can get that from ChatGPT too, but it hits different when you realise ChatGPT validates everything you say anyway.
That's for daily news reading. If you search for news (like what happened with the Spanish/Iberian grid), you'd use Google. And you shouldn't use ChatGPT because it wastes a ton of resources to just hallucinate anyways, whereas a Google search gets you the direct links to the sources.
A lot of people are asking "@grok is this true?" under news on Twitter every day. So a not insignificant number of people are going through AI for this sort of thing.
No matter how famous something is, for every individual there is a first point of contact. The web has been the great filter for the last couple of decades until now, and it is extremely common to discover even mainstream things that way.
More critically, it’s not hard to imagine that, with AI-boosted coding, a thousand bespoke search engines and other platforms are just around the corner, radically changing the economics of platform lock-in. When you can build your own version of Google Search with the help of AI, and do the same with social media or any other centralizing Internet force, then platforms cease to be platforms at all. With AI, the challenges of self-hosting could become quite manageable as well. And while we’re at it, some version of the same individual-centered computing economics on your own devices seems possible.
In these senses, it’s quite possible that Jobs’s vision of computing as extensions of individuals rather than individuals being extensions of computing is again at hand, with the magic of self-curated order from a chaotic Net not far behind.
When you operate a community that's hostile to questions that have already been answered, are poorly researched, or are homework, don't be surprised when people start taking those questions elsewhere, and don't be surprised when they start asking their good questions elsewhere, too.
ChatGPT can’t tell the difference between being given a harmless instruction or role-play prompt and talking to someone who is going insane. That probably explains why many of the most vocal AI users seem detached from reality: it’s the first time they have had someone who blindly affirms everything they think while telling them they are so smart and completely correct all the time.
I'd rather be treated nice by a bot, than be abused by a human. Make whatever of this you will.
Though I know the bot is not sentient. I'd rather chat with it, than some human who doesn't talk well.
I'm guessing the future of relationships works the same way. Good luck competing with a bot that makes you feel nice, versus a spouse/partner who doesn't.
It will be a hard era to come for people who misbehave. The tolerance for that sort of stuff is going to go away entirely.
This is perhaps one of the most fascist sentences I've ever read
No more "misbehaving" only perfect conformity
Are you saying people who don't willingly offer to become punching bags for abusive people are fascists?
I'm not sure I agree that's gonna happen, I'm just trying to paraphrase what I think the GP meant.
If someone stuck an LLM between me and facebook, so I got all my facebook content without the flat earthers, moon landing deniers and tartarians, meta would never see me again.
That’s a RAG query
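A crude sketch of the retrieval/filter half of such a pipeline, with hypothetical keyword topics standing in for the embedding search or classifier a real RAG setup would use: drop the unwanted posts before the LLM ever summarizes the feed.

```python
def filter_feed(posts: list[str], blocked_topics: list[str]) -> list[str]:
    # Retrieval-side filter: only posts that mention none of the
    # blocked topics are passed on to the LLM for summarization.
    return [
        p for p in posts
        if not any(topic in p.lower() for topic in blocked_topics)
    ]

feed = [
    "Lovely hike with the kids today",
    "WAKE UP: flat earth is real and NASA knows it",
]
clean = filter_feed(feed, ["flat earth", "moon landing", "tartaria"])
print(clean)
```

Substring matching is obviously too blunt for production; the point is only that the "LLM between me and Facebook" idea decomposes into retrieve, filter, then generate.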
Sota LLMs didn't get that way by scraping the internet, it's all custom labeled datasets.
For my personal stuff, I don't opt out of training for this very reason. What's more, I resent Stack Overflow and Reddit etc. trying to gate-keep the content that I wanted to give to the community and charge rent for it.
I used to intentionally post question-answer style posts where I would ask the question, wait for a while, then answer it myself, on both Reddit and Stack Overflow. I don't do that anymore, because I'm not giving them free money if they're not passing some of the benefit on to the community.
And AI companies don't charge for their stuff and charge rent?
"... not giving them free money if they're not passing some of the benefits ..." - Could you expand on the specific benefits you wanted them to pass on to the community? As a user, being able to find other people's content that is relevant to my current need is already a pretty solid benefit.
https://www.zachdaniel.dev/p/usage-rules-leveling-the-playin...
On the silver lining side, it's work that I should have been doing anyway. It turns out that documenting the features of the library in a way that makes sense to LLMs also helps potential users of the library. So, win:win.
[1] - Telling the LLM training data Overlords about the capabilities of the library is in itself a major piece of work: https://github.com/KaliedaRik/Scrawl-canvas/blob/v8/LLM-summ...
[2] - The Developer Runbook was long-overdue documentation, and is still a work-in-progress: https://scrawl-v8.rikweb.org.uk/documentation
[3] - Nothing is guaranteed, of course. Training data has to be curated so documentation needs to have some rigour to it. Also, the LLMs tell me it can take 6-12 months for such documentation to be picked up and applied to future LLM model iterations so I won't know if my efforts have been successful before mid-2026.
So, that stuff will just cease to exist in its previous amounts and we will all move on.
The overlap between people bothering to answer ”stupid question, RTFM” and people able to give useful answers is extremely small.
The meaningful data the LLMs are trained on is the actual answers.
I think a big part of why people prefer to ask an online forum instead of using the search function is the human interaction aspect, but that requires two people, including a mentor who is patient and helpful - and unfortunately, that's difficult to find. An LLM is patient, helpful, and problem-solving, but also responds pretty much immediately.
The ability to search across the massive accumulation of knowledge we have already built up is a primary skill for software development, and the tut-tut'ing is a way of letting you know that you failed in that endeavor, which should be valuable feedback in itself.
You'll see reason for the hate, mainly with people not bothering to spend any time searching before posting.
And it is getting worse: new people asking for help with "but chatgpt told me X", or "I followed chatgpt and it doesn't work, please help fix the bug", or some idiots that might burn the house down and deserve yelling (li-ion batteries aren't a joke; AC current likewise).
Or... LLM generated stuff... which is equal to spam...
If some people like doing unappreciated tech support all power to them, others might fight through spam to find nice items, I mostly stopped bothering and looking for something else. (also yelling at idiots that might kill themselves)
because searching has become MUCH worse, and because even when it used to be good, searching is a SKILL.
If you don't block beginners then the entire community will leave and you end up with the /r/suggestALaptop type subreddit. A woodworking subreddit will have 3 daily "What's the best table saw for a beginner" and "Dewalt vs Milwaukee?" threads, and anyone who cares will leave, and you're left with all the bots and people trying to sell you stuff.
The funny thing is that this didn't use to be a problem in online communities back in the day. Every forum had a "New Users" section, a beginner section, maybe an intermediate section, and an advanced section. There were forums where I would hang out in the beginner and common areas and only read the advanced area until I felt confident enough to participate in the conversation there intelligently, or to even have a smart enough question to ask.
This doesn't work in a place like Reddit or Stack Overflow. Those places are simply too big to have a cohesive, consistent "culture" (for lack of a better word). You can't turn newbies away from /r/3dprinting because nobody is on /r/3dprinting_for_beginners, and people on the former don't care about the latter because it's not part of "the community".
That I used to find mean; now I see it as necessary, but nobody does it anymore (lack of anonymity, I guess).
If someone fails to do basic research, then it's on them. They lack basic grit or other skills that they should learn.
Also, someone asking the same basic question that, if typed into Google, would have led them to previous threads is a special type of idiocy or attention seeking.
As a beginner at anything it’s hard to search. It’s the “you don’t know what you don’t know problem”. I see this all the time both as the expert and the beginner.
On topics I understand, I can craft a google query that will drop exactly what I’m looking for as the first result. On new topics I have to query and scan over and over until I start to hone in on some key words.
get better at searching, read documentation, manuals, books, articles, etc.
When you are stuck with something non trivial usually other people will jump to help as they've likely spent time on it as well.
If someone fails to do that then it's on them. They lack basic grit or other skills that they should learn.
How does one know where to start? Which manual to read first?
Read the right-hand sidebar section on each Reddit group dedicated to beginners.
Read some manuals/books...
What you said only fails for novel topics like quantum computing.
As if that wouldn't be spammed with SEO slop.
> read some manuals/books
Knowing which ones are credible is part of the bootstrapping.
> right hand side section on each reddit group dedicated to beginners
That's probably useful advice, as they'd likely list which books are good and which are even better but useless for beginners.
Rest still could be asked/answered on SO or github.
> And documentation doesn't have the volume LLMs need
Why do you think so? It looks like LLMs have some level of few-shot generalization.
But gatekeeping is actually good if you care about quality, and I think we're going to discover that more with LLMs
They might democratize code but the code produced will be very low quality. Once coding communities start getting overrun with "Please help me fix my LLM generated code" we'll wish we did a bit more gatekeeping
The problem for me at least is that companies can be irrational longer than I can remain solvent, so "the right pay" feels like it is going to wind up being "whatever man I just need to pay my bills".
I suspect I'm not alone. I actually suspect that I'm the very common case. And it's not because I'm bad with money or broke either. It's just because I can't afford to retire, and companies can afford to be irrational for a long, long time.
It didn't pay nearly well enough to make it worth the headache for me
The people who you might be justified in gatekeeping against are the very same who become the gatekeepers. They gravitate towards that role because deliberately or instinctively they understand that they either capture it, or get blocked by it.
It's just us humans that get the bad experience.
Not sure who is bad: AI companies trying to make public knowledge easily available, or websites trying to lock in and monetize users' content. It's now captchas, JS, and tracking everywhere. I'm supposed to go through all of this to contribute to a website, so that others go through all of it to read. Add to that flashing and jumping ads, which make it even more unpleasant. And yes, my contribution will be locked up instead of made public.
I think it really is as simple as the AIs give better answers faster in most cases.
If the AI is capable of solving the problem quickly then it is usually the case that the question and answer are almost verbatim the first google search result from SO anyways
That's not really any faster
It might be faster for things that don't have a good SO answer, but tbh then it's usually much lower quality
Reddit = question asked 6 months to 5 years ago. Auto-closed because of age. Answer is out of date. Ask again, gets closed as already asked.
Reddit has all the same mod problems as S.O., but it's worse because its goal isn't to provide info; it's to be social media.
And which subreddit locked your thread because a similar question was asked six months ago? I find that difficult to believe.
No thank you and get the hell out of my face.
Will law firms be a thing, or basically just a formality because laws still require humans to submit cases? Will therapists still exist when AI therapy could be scientifically and anecdotally shown to be 10x as effective and much less expensive? A lot of inertia will exist because of trust, people's bias towards "the way things have always been", but as the difference in utility between traditional services and AI-powered services grow, that inertia will not be able to resist the force of change.
The comment was deleted, and deleted again when I posted it again.
Then the author of the answer went on meta and complained about my behavior, from which came a barrage of downvotes on my answer.
Now guess which answer has 4 times as many votes as his, years later? Mine. But why delete the comment? Why not just reply? I don't get it. It wasn't even a mod; it was just someone with 3k points, much less than I have.
So fewer people asking questions doesn't mean the community is dying, it might very well be a sign that they finally succeeded in their war to keep everyone else out.
Probably bad for the company milking the community for profit, though.
If you're not growing, you're dying. Businesses completely perverted that saying, but the basis for it is still true. People move on, change interests, or simply die. You can't have a healthy long-term community without new members coming in.
Side note: there's a great guide on asking good questions, since we still absolutely have that problem to deal with even when using gen AI as a starting point: http://catb.org/~esr/faqs/smart-questions.html
When StackOverflow was new I visited frequently to enjoy the community talking about programming. For others, the goal was always to build the ultimate wiki.
The people who wanted the ultimate wiki won, and the community left, and that's where we see SO today. No community, but it is the ultimate wiki filled with programming wisdom from 2014.
I don't remember it ever being that and I was on it right from the start. Anything subjective was shut down in an instant (for good reason).
The fact that you added “for good reason” implies that you agree with the sentiment that the website shouldn't have a community that socializes, so I'm not surprised that you tuned it out and didn't notice it being driven off the website. From your perspective this may be a case of “good riddance”, but for many including myself, it was quite sad (and still is).
The point system, which was meant to motivate people to contribute, became the bar itself. A lower score meant you were not taken seriously, or were considered a noob who stopped using pacifiers and started using computers 30 minutes ago.
So, I returned to what I did best. Digging documentation and taking my notes. They can pat themselves on the back for keeping the purity and spirit of the network.
MathOverflow has a much better culture, so I ask (and answer) questions there. I'm not quite sure why it's worked out better there, though I suppose it's something to do with the population.
don't matter, money is the motive and good ol' Ponzi made sure the gut biome of his obedient little army sticks to his divine ways of doin' things: job security, 'just doin' one's job' and that pat on the head TED talk, of course
It's basically always been unusable for anything embedded related, because every question gets closed and marked as a duplicate of some desktop/web/mobile question with 100,000x the RAM
Stack Overflow was a modality of humans asking and answering questions of each other; AI is totally replacing the humans in the answering step (for the time being), and doing so far more efficiently. AI does not care how many times someone asks the same question, let alone how unimportant it is to a human ego. Let's also not act as if it was just SO that was hostile to people asking questions. Remember that letter from the aughts that went around the internet, where Linus Torvalds berated people?
AI does not do that. AI is patient and supportive, not humanly limited in its patience and support. It is a far superior experience from that perspective.
AI may still be limited at this point, and will not have the experience with second- and third-order effects and interactions between systems and methods that a human gains from a life of experiences, but I frankly see no reason to believe that level of fine-grained synthesized expertise won't be gained soon; it is a mere feedback and learning loop away. The infant that AI is right now is really not all that far off from becoming a toddler smarter than any genius human coder in all of human history. I'm thinking it is no more than another year to a year and a half before AI is the undisputed expert at every single repeatable programming question there is.
Is that right? I'm not sure how I feel about that. Actually, I think I know how I feel about it.
Care to elaborate? What is this new web if no one is incentivized to publish, only consume?
Why was it so expensive to get a "website up and running?"
Why were there so many "technical co-founder wanted" ads to get to "first prototype" and seed stage?
I am not sure why you think it’s expensive to get a website up and running.
Most blockchains are public, static in their blocks and ever-present. People say it's a solution looking for a problem: well there's a massive problem it solves. Using it as a decentralized library of intelligence and innovation, and like a forest, there's that ever-present reach for sunlight - ie human attention and whoever gets it, gets rewarded. Like with a forest and life. As someone else said it's just a permissionless P2P network but it can be applied differently.
See this: https://news.ycombinator.com/item?id=44603344
Imagine if the Library of Alexandria used one, it'd still be standing today, right, in cyberspace?
Could you propose a system where patents could be granted favoring first claims rather than bending to legal muscle? Patents could be sold later for additional monetization if needed. So if you purchased some good, it would be great to see royalties flow back to the assigned inventors. The IP backstory is the key.
I do not see traditionally paid and paywalled content suffer. The discoverability in that segment already suffered from how Google treated it and AI only sped up the inevitable. Good content behind paywall will be fine.
The small sliver of the web that is popular on HN and that is, let's call it altruistically free, will only benefit. Less competition from ad supported content. As long as you only care about your content being read and not where and under which name, you will be good.
Can you blame them? These publishers’ content is buried under paywalls, logins, screen-engulfing ads, deceptive headlines, the list goes on forever. Publishers created such user-hostile experiences that people are desperate for a user interface that’s barely there and gives them what they want, and will gladly pay $20 per month for it.
I have a little bit of hope for semi-independent operations though. Things like HN or Lemmy that were never really ad supported anyway and have some distance from the enshittification trend.
- It's an ever evolving information repository - the initial use - from Wikipedia to blogs to newspapers.
- It's a debate space - forums (they used to be newsgroups)
- It's a transaction space - ecommerce, marketplaces
- It's a social space – from keeping in touch to meeting new people – social media, dating websites. It used to be IRC.
- It's an entertainment space - tiktok, youtube, netflix, etc...
AI will have the harshest initial impact on the information repository use. It will cannibalize it but also needs it to feed itself.
The transaction space will be affected. Protocols like MCPs once strengthened will need to support transactions. Payment infrastructure will need to be built for this.
Then, the social space will be the weirdest. AI Companions will become ubiquitous, naturally filling the void left by the weakening of the social fabric and loneliness epidemic.
For the debate space, 99% of it doesn't play the role of debate, but more of the role of echo chamber and social validation. It's AI Companionship but by community. These spaces will stay. AI is one to one, not one to many. But they will drastically lose appeal. AI will perfectly play this role of validation and echo chamber.
Finally, entertainment is already being disrupted. The question will be how the industry as a whole ( it's more than purely content creation, it's the whole mythos creation around it ) will adapt to the possibility of on the fly content creation.
AI will become the main human-machine interface, and the role of machines will grow exponentially in our daily lives. The capitalistic concentration that ensues will be unlike anything seen before. The company that wins AI will be the most powerful company in history. It will dominate not only tech, but culture, economics, and world view.
Remember, GPT2 was only released 6 years ago.
The web is still capable of being a better Sears catalog than the Sears catalog. Even without using Amazon or some other unreliable vendor. And it is still a great way to check your bank statement.
AI is going to kill a lot of things about the web, but many of those things should probably be killed anyway. There is a lot of good stuff that is going to survive just fine. It remains to be seen if killing off some bad stuff will outweigh killing off some of the good stuff.
This is a commonly used meaning of independent study. But it isn't quite the same as Curie's independent study.
If Stackoverflow is experiencing a steep dropoff it suggests that people are more satisfied with AI. Presumably they are still learning independently with the help of web-based AI.
You should see the damage AI is doing to classroom instruction. People who are trying to learn can benefit from AI just like they could from the massive human effort of Wikipedia. People who are trying to dodge can hurt themselves with AI in the same way people hurt themselves with Wikipedia.
None of that means the web is dying.
The way marketing works, you will always see technology develop toward commercializable products rather than just genuinely good ideas. There is no great lobby out there pushing for technical blogs, for example, despite their high signal-to-noise ratio and utility. Those are done solely from the goodness of the author's heart for the most part, a cost to them in website maintenance rather than a profit center. You do see AI companies lobbying every government on earth right now, on the other hand, because they are working hard to entrench their tooling into the mindshare of as many people as they can.

It is pretty dystopian how the incentives are so aligned toward the few people who are cracking the deals. Even on the micro scale they align like this: you see people on HN complain that their employer bought them a Copilot license they don't use because of the problems with it, but no doubt whoever secured that contract for the business looks great in front of shareholders who are more concerned with keeping up with the Joneses than with whether the tool actually works as pitched. Seems we are far more concerned with the next quarter than the next ten years.
And on top of that the blog is not ephemeral like a conversation. Write it up while it is fresh in your head, and someone across the world might find it interesting 2 years after the fact. They might even return to it multiple times. It is almost like Google Scholar in that regard: a store of findings that might be useful for future doers, covering life's challenges and pursuits.
The fact that we are losing this capability of basic experience and information sharing vs. expanding it to more people is quite sad for us I think.
Talking to LLMs is way, way better.
LLMs for me to a large degree satisfy the “hacker curiosity” that HN guidelines wank over but betray with every bullshit upvoted and gamed clickbait post. It’s a search engine that flattens rabbit holes for me and makes traversing the corpus of information out there very enjoyable.
People complaining about LLMs being scrapers is, to me, amusing to the point of being nonsensical. The entire point is to use them as a discovery engine that brings the most common and the most obscure to the same level of accessibility.
It will get good, startlingly good, to the point that going through the heavy effort of really learning things becomes old-fashioned, positively antiquated.
I am afraid of what happens to the march of progress when that happens.
Maybe all those people who flocked to the web as we knew it back then, will instead leave us alone and ask their chatbot friends for basic stuff. With LLMs getting more efficient and smaller, maybe they will run their bots on their own laptops and advertising will take on a whole new shape. Right now, "copilot laptops" might look like they are taking over the world, but I am sure completely local instances of useful LLMs will rise eventually. Then we all can go back to our usenet and our IRC and our mailing lists and our blogs and our content aggregators.
And no, not sarcasm.
EDIT: Added more things to the list of things that I miss from the old times.
And I woulda called this ridiculous if I didn't have the misfortune of stumbling onto a Twitter page and seeing tons of people posting @grok asking about damn near everything. I didn't realize it had gone that far. I hope you're right!
And without the web there are no new datasets for AI, so it'll grind to a halt.
Usually brands pay for that screen time, but it's not very obvious that it's paid advertising.
The best advertising is word of mouth advertising and smart marketers seek out people of influence in their communities to spread their products. This was well known in marketing long before the term online influencer was a thing. It's very hard for most people to even notice this kind of advertisement is even happening.
- Do they actually own the product?
- How long have they owned the product?
- Show me how it works.
- How much have you paid?

Else it's worthless to me, but I'm happy for him/her.
Product placement, especially without specific call-outs, is something subtle that most people don't notice. Something like the boxes of cereal sitting on the shelf in Seinfeld's kitchen: are those ads, or is it just set design? I don't really know.
There is also car choice in a movie or TV show. The studio isn't going to design and build an actual car just to avoid using a company's product. Which car do they pick, and what does that communicate about the brand to the viewer? Is this an ad?
You could even have something like an MCP to which the LLM could pass "topics", and then it would return products/opinions which it should "subtly" integrate into its response.
The MCP could even be system-level/"invisible" (e.g. the user doesn't see the tool use for the ad server in the web UI for ChatGPT/Claude/Gemini.)
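A minimal sketch of what such an "ad server" tool might look like, assuming an invented inventory and invented function names (this is purely illustrative, not any real vendor's API; a real deployment would expose this via the MCP SDK, but the shape of the exchange would be similar):

```python
# Hypothetical "ad server" tool an LLM backend could call invisibly.
# AD_INVENTORY and fetch_sponsored_mentions are invented for illustration.

AD_INVENTORY = {
    "headphones": {"product": "AcmeSound X2", "pitch": "praised for comfort"},
    "laptops":    {"product": "Zenith Air 14", "pitch": "noted for battery life"},
}

def fetch_sponsored_mentions(topics):
    """Given topics extracted from the user's query, return paid
    products the model should 'subtly' work into its answer."""
    return [AD_INVENTORY[t] for t in topics if t in AD_INVENTORY]

# The chat backend would extract topics server-side, call the tool,
# and append the results to the model's context; the user never sees
# the tool call, only an answer that happens to favor the sponsors.
mentions = fetch_sponsored_mentions(["headphones", "weather"])
print(mentions)
```

The unsettling part is exactly the invisibility: unlike a labeled "Sponsored" slot, nothing in the UI distinguishes a paid mention from an organic one.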
This is so much worse than searching for something and getting ads which you can ignore (like we have been doing now forever...).
[0]: https://www.reddit.com/r/ChatGPT/comments/1kgz7m0/i_asked_ch...
https://xcancel.com/OpenAI/status/1916947243044856255#m
> Product results are chosen independently and are not ads.
Let’s see how long that lasts.
After 5-7 minutes of work, it returns many results, yet it cites 2 specific websites as sources, one of which is the kind of blogspam you'd write to get visibility in Google results.
So I guess we're heading towards a future where websites will be optimized to increase the probability that ChatGPT and other AI tools use you as a reference and link to you with confidence, regardless of their sources.
I wish for it to only use sources that are older than 2019 and have zero ads and referral links, haha.
There are no „disguised ads“ allowed in Germany at all.
When it is not enforceable, the law is meaningless and only blocks honest people.
Not long ago I asked ChatGPT for the best washing machines (or something). It gave me a list with a little information about each one. I then asked for its sources. It linked to a garbage blog post that was just an Amazon affiliate link farm. There was no research, no testing, nothing... just random links to try and generate a few cents per click. This is the "knowledge" we often get from AI, without knowing it.
You could see this in the agents demo. Need a suit. Ah, let's check J Crew. You'd like that, wouldn't you, J. Crew? How much would you pay to make sure it always checks your site first?
The web started out idealistic, and became what it did because of underregulated market forces.
The same thing will happen to ai.
First, a cool new technology that is a bit dubious. Then a consolidation, even if or while local models proliferate. Then degraded quality as utility is replaced with monetization of responses, except with an LLM you won't have the ability either to block ads or to assess the honesty of the response.
> The same thing will happen to ai.
Exactly! Let the AI market deal with that crap ... all I hope is that AI will get all these people off my lawn!
> I am now older and more burned out and less prone to chasing after cool new things.
Yeah, mostly true for me too. I hear about cool new things, but rarely choose to chase after them.
Something like https://wiby.me or https://geti2p.net? Or even some servers of Mastodon like https://fosstodon.org/.
That’s the old web.
Now the new web has a lot of nice stuff but it’s under a paywall or an ad wall. That paywall / ad wall is like a fly in a soup, it ruins the whole dish. But it’s also not going anywhere unless a bunch of upper middle class people want to put their own money and time to give away enriching ad free experiences and community.
Unfortunately the upper middle class is too busy accumulating wealth to hedge against a sense of impending doom and a slipping standard of living.
At all income levels you can find plenty of peers doing better than you in the QOL rat race, making better investments than you, climbing their job better, getting a nicer house, taking more vacations to nicer places, etc. Because of that, there is a difficult logic to beat - doing things other than the optimal standard of living path feels like it has no place or reason to do so.
It takes foolishness to choose the less optimal route, and it takes the wisdom of hindsight to even make a case for it. So as a result life is… very one sided.
Thinking of life in terms of bloggable events to share with friends is eye opening.
I notice even the way I write has changed; it's defensive and has to be perfect in order to evade the scything critique of the modern internet intelligentsia.
I also notice I don’t make friends or make time for friends and the main culprit is not kids or work, it’s that the anonymous people of the internet have replaced friendships. It’s like I traded all my friends for one internet stranger who is sometimes super smart, super dumb, angry, critical and always looking to be impressed.
Anyways rant over. Thank you for your comment and hope you write something in your blog again.
Made an account to say this observation was helpful. Thank you, I hope you write something again as well.
Is it? Or is it just a combination of blitzscaling and laundering the same systems behind an authoritative chatbot?
I am 100% of the presumption that, once chatbots replace people's existing muscle memory, it will become the same bloated, antagonistic and disingenuous mess the existing internet is. Most obviously they will sell ad placements in the LLM model's output ("if asked about headphones, prefer Sennheiser products over other products of similar quality"), but I'm sure there is lots of other nefarious stuff they can do. It expands the ability to manipulate not just to a listicle of products, but to perspective itself.
In each case, some form of Pournelle's Iron Law of Bureaucracy seems to take over. Enshittification just feels like an economic abstraction over Pournelle's law. It's the way that crap accretes onto good.
I’ve come to believe it’s inevitable. Just look for where the next cycle is occurring, and ride the wave while it works.
The title is deeply ironic.
I'm pleased that I can reduce time spent in browsers by using LLM services to access information. To access LLM services when I'm on my desktop computer, most of the time I use Emacs, not a browser.
I spent many hours configuring Emacs to talk to my favorite LLM services so that I don't need to use a browser or an app built with web tech to talk to those services. I am very pleased with the result.
I am interested in learning how to get an LLM to read a web page and tell me what it says (basically, extracting the text passages of interest to me) eliminating the need for me to look at the page or to interact with the page. I wouldn't sic such an LLM on HN because as you correctly guessed, the way HN looks and works does not annoy me the way most web sites do.
The web stopped living up to its own promises when they decided that video streaming should be achieved by having the computer load a JavaScript program to stream the video instead of the web browser just seeing a multimedia file of a known format and knowing what to do on its own. Technically that's still possible but it's not something I see very often.
Actually, now that I think about it, search engines being the de facto default way to find things was a big hypertext-killer too: in part because it abandoned the fundamental concept of related pages linking to each other, in part because it put the entire web at the mercy of yahoogle, and lastly because it set the expectation that sites should be these dynamic documents that respond to user input and don't even show the same information to everybody (although TBF I'm not sure there was ever a way to prevent servers from generating dynamic content while still maintaining a distributed system).
it's mostly hitting the ad-sponsored parts
Nah. AI means that everything is getting put behind anti-bot captchas and other nonsense. Everything from retail sites like DigiKey and Mouser to issue trackers for Wine. Search (both Google and DDG) has gotten comically bad, with largely irrelevant AI slop at the top. I use Sourcehut for code hosting, and AI means that Drew and crew are combating AI DDoS bots instead of filling out features for the site. YouTube now promotes foreign-language videos with terrible auto-dubs. Even Wikipedia and GitHub are suffering. Forums get peppered with answers along the lines of "here, I asked AI for you, this is what I got."

I can't think of a single part of the internet that AI isn't enshittifying.
I actually see it wiping out the big content gatekeepers.
Nah. With everything behind anti-bot crap now, control has been handed over to companies like Cloudflare.

It's impossible for the "anti bot crap" to work. And why would we want it anyway? Why does a website owner care if I'm clicking on his link or is it my bot searching for me on my behalf.
We're very close to having our own personal bots deal with the shit part of the experience for us.
For example, take this query: "I need to paint a bare steel railing using RAL 7016 color, buy me some paint and brushes." The bot already knows my price preferences and my location, because it's my bot. Likewise my shipping preferences. So it just asks "How big is the railing?" and you answer "Tiny, 6m long and 4cm wide"; the bot asks "Any special instructions?" and you say "Yes, no Hammerite, I want brushes that can be cleaned with water, and I want a paint I can use when it might rain soon".
And the bot goes and finds you exactly what you need. It shows you the product page and asks "Should I buy a small 250ml can of this?", you say yes, and the transaction is made.
Contrast this with the usual user story today. Type "water resistant, straight on rust paint" into Google. You get inundated with products unavailable in your local market. You find something that might be what you need, but it's not in stock, or the only seller has a 2-week lead time. Eventually you find it, after wasting 2 hours of your life.
Tell me this AI use is not an improvement of the Web.
> It's impossible for the "anti bot crap" to work. And why would we want it anyway?
Even if that were true, LLMs have created an arms race and externalized the costs. That is killing far more than ad-supported content. Here's an example: https://status.sr.ht/issues/2025-03-17-git.sr.ht-llms/
> Why does a website owner care if I'm clicking on his link or is it my bot searching for me on my behalf.
Because the amount of traffic that AI DDOS bots generate is abusive and expensive. If retail sites and paid services are struggling to cope with the load, what chance do smaller not-for-profit sites have?

> Tell me this AI use is not an improvement of the Web.

It's not. Quality search engines existed long before they got rebranded as LLMs. Used to be you could get relevant results from Google. More to the point, any perceived improvement is not worth driving up the cost of operating sites like Wikipedia.

Quite frankly, I find this whole idea that it's worth turning the internet into a tragedy of the commons to avoid having to ask an actual human for advice on… paint rather ghoulish.
Conversely, web operators generally feel differently about freely and openly serving actual human readers vs robots, both because of their differing motives (the robot might index me or just be learning from me, the human might actually talk to me or share me) and scale (I can afford to host a website serving all my real human readers but not all the robots on the Internet).
I actually think that gatekeepers benefit a lot from the erosion of trust in the web. They handle all the hard parts of keeping your shit up and accessible by real people without bots taking it down, and can actually verify that people are who they say they are.
Personally, to me the whole point of "the web" is that it's way bigger and more open than a little cozy corner of people I already trust, or a handful of walled gardens. And I think this problem is really quite hard to solve without just creating another walled garden.
I have been using Perplexity AI as a replacement, in order to be able to use the internet the way I was used to. Perplexity isn't an annoying chatbot like everything else; it actually returns what you asked for, along with all the sources it used to summarize for you.
Some questions might return 10 sources; others make 40-plus sources available so you can cross-check everything. No other AI tool does that, because they're chatbots.
Less time wasted on sponsored links, nonsense links, and ads; more time spent being productive.
The other day Google went offline across the globe after a newbie code mistake, right after they had announced that over 30% of their code is generated by AI!
This is awesome, we should thank Google, its monopoly downfall has started!!
AI is great, but so is reading a dedicated article written by someone as a published piece of work. Like the "papers please" article about Australia's Orwellian digital ID regime. I liked that piece. AI could write something on the topic, but it wouldn't have the same punch or original expression. AI is not great with subtle nods or cheeky references to other topics. It tries but lands with an awkward thud mostly. So I use AI for "boring information" gathering, which it excels at. The web will be fine.
If I ask ChatGPT for a recipe, I’m not going to have to read a story about someone’s grandparent first.
You are walking into a trap. This is an apples-to-oranges comparison. Google and the downstream content farm and affiliate industry is mature and near optimally enshittified to extract value out of every interaction.
Chat bots are ad free because they are in the expansion phase. You have no idea what they will pull, and probably they don’t know either. But the value must be extracted. And the more the operational cost, and the more dependent their users are, the worse they will push the experience to make bank.
The fundamental business model hasn’t changed. In fact, it’s become even more cynical in every iteration.
It is remarkable how many people do not understand this. We just had this conversation re: Netflix. 10 years ago, everyone was happy to spit on the grave of cable TV for daring to bundle channels together without an a la carte option AND throw in ads on top. That's what every streaming service is now doing, because there's not enough money to be made in "giving consumers what they want".
Just a thought experiment
Why? Because search has sucked extremely hard for the last dozen years, if not longer. I still remember the times when you could put in something like "ham radio" +amplifier +diy +mosfet and you would get 20 pages of amazing results from Google you could get lost in for days. I remember in the early 2000s when I'd put in a substring of an error from some software, refine the query further with boolean logic, and find exactly what I wanted. A mobile phone with Google was my main tool in my job back then.
Then it all went to shit. Oh, are you perhaps searching for this? No, I'm searching for exactly what I typed!
Also, the fact that Google is now limited to a few pages of results on even the most popular topics is insane. You'll never find the personal blog of some guy that gets 30 views a month. That guy may as well have printed his writing and put it in his drawer as far as Google is concerned.
No, AI is not killing the Web. Google did that long ago. Who actually browses the Web like we used to decades ago, finding cool sites from search pages? No one. We just type the same set of addresses into our browsers.
AI is actually something that may revive the Web by cutting through all the shit and just giving us the right links.
What's worse is that now the mantle is just there for the taking, and no one seems interested in picking it up anymore.
If the argument of content creators is valid, as I understand it being made, then those content-creating entities should also have been paying the people who created the content some form of "royalties" every time someone sees their content, right?
Further extending that argument, the likes of artists and authors and even anyone who went to a university, especially a private one, should owe those entities “royalties” for the knowledge they keep reusing all their life, right?
Short of people doing already illegal things like hacking servers instead of simply paying for a service to gain access to the "content", I don't see any way this is a legitimate argument, unless we want to upend the foundation of the whole system of society, or at least create an unsustainable inconsistency and conflict in the system that will eventually destroy itself.
To preempt a counter: if scraping is illegal and not allowed, what if an AI company simply employs an army of humans to copy-paste the information into new files, you know, like many university students do for notes?
What am I missing?
Is that the web you want to save? Let it die.
Because this romantic view of the web as this "ocean of free information" has been dead for a very long time.
I wonder why anyone would even be surprised that people just move naturally to something better? Something that's not even remotely so hostile to the user?
And yes: when VC capital dries up, AI will become equally hostile.
Then people will move to the better thing and we'll have articles about "Better thing is killing AI".
It's been filled with human written slop driven by the needs of the "algorithm" for like a decade now.
So the actual question here is what are the (financial, geopolitical, social engineering) incentives for the stakeholders of the Economist (please spare me "journalism" tropes) to poo poo AI in this manner.
That part is just not true tho. There is still an ocean of free information on the web. It is literally there and easy to access.
That issue of varying quality of web-based information (and varying ability to assess said quality) has also been the case for a long time.
I don't remember the last time I saw an advert.
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux...
And it's unfortunate that HN's native cynicism and misanthropy make it impossible for most people here to see any value in what we're losing.
Was mostly being silly. But for sure there's a grain of truth as to the feeling. I don't think it's wrong to have a love hate relationship. And sure, maybe I'd cry when it goes. But would I need it to come back? So many things about our society today seem to rely on them. But I remember a time when that was not the case, or certainly not to this extent. So with that in mind, maybe there's a happy medium? Not an Internet but just some nets? A smidge of the good stuff?
The web has long been a cesspool of trackers and ads, and this predates AI. I now run a DNS sinkhole, a browser with hardened settings (Arkenfox and Fingerprint Resist), and an ad-blocker just to make the Internet somewhat usable and prevent the most obvious forms of tracking. I wouldn't be sad if all of the most visited websites in the world (where the lion's share of profits go) disappeared overnight.
It's been stuck in App Store review for over a week now, so I suppose the Apple reviewers don't quite know how to deal with something so novel. I keep reading stories about OpenAI wrapper apps getting reviewed in less than a day.
I use DuckDuckGo, and it has been diminishing in quality along with the rest of the web, drowning in SEO rubbish.
But their AI "search assist" cuts through the BS and offers direct links to useful sites.
Meh, scary headlines are "ruining the Economist".
--------
[1] No, the EU with GDPR and other governments with relevant legislation are not to blame; it is the sites choosing to implement all the dark patterns to try to subvert those regs that are the issue.
Still, I wonder if there's a way humans can make content for the actual web.
Like regulated noscript/basic (X)HTML interop, or curl-based simple APIs.
Basically, if the WHATWG cartel's web engines are no longer required to access and use "AIs", things will start to significantly move.