One can also interpret this as search having gotten so bad that the summaries let users skip that horrible user experience.
They don’t care about discoverability. It’s all about serving ads as quickly as possible. Coming soon: ad links in summaries. That’s what they’re building toward here.
It has become shockingly common to see people sharing a screenshot of an AI response as evidence to back up their argument. I was once busy with something, so I asked my partner if he could look something up for me; he confidently shared a screenshot of an AI response from Google. It was, of course, completely wrong, and I had to do my own searching anyway (annoyingly needing to scroll past and ignore the AI response that kept trying to tell me the same wrong information).
We have to remember that Google is incentivized to keep people on Google longer. Being able to summarize content in place, instead of sending people off Google as quickly as possible, is a gold mine for them; of course they are going to push it as hard as possible.
Isn't that expected from "higher quality clicks"?
edit: AI doesn't even have a corrupting, disgusting physical body, of course it should be recommending clean diets and clean spirits!
Any response will be 'reasonable' by that standard.
Nope
But if I simply remove the "why", it clearly states: "Rum is an alcoholic beverage that does not have any significant health benefits."
Man I love so much that we are pushing this technology that is clearly just "garbage in, garbage out".
Side Note: totally now going to tell my doctor I have been drinking more rum next time we do blood work if my good cholesterol is still a bit low. I am sure he is going to be thrilled. I wonder if I could buy rum with my HSA if I had a screenshot of this response... (\s if really necessary)
Asking AI to tell reality from fiction is a bit much when the humans it gets its info from can’t, but this is at least not ridiculous.
I agree with that, but the problem is that it is being positioned as a reliable source of information. And it is being treated as such. Google's disclaimer "AI responses may include mistakes. Learn more" only shows up if you click the button to expand the response, is in smaller, light-gray text, and is clearly overshadowed by the button with lights rotating around it inviting you to do a deep dive.
The problem is just how easy it is to "lead on" one of these models. Phrasing a search as "why is rum healthy" implies that I already think it is healthy, so of course the model leans into that; that is why this is so broken. But "is rum healthy" actually produces a more factual answer:
> Rum is an alcoholic beverage that does not have any significant health benefits. While some studies have suggested potential benefits, such as improved blood circulation and reduced risk of heart disease, these findings are often based on limited evidence and have not been widely accepted by the medical community.
That's because of SEO. Top results are assumed reliable, because there is - currently - no other way to ascertain reliability in an efficient and scalable way, and the top results are sites that have optimized their content and keywords to be in the top results.
Of course that doesn't mean we should 100% stop drinking; I believe the occasional social gathering with some alcohol probably has more positives overall than staying home depressed (though of course this is not an exclusive-or situation). But we should definitely not lie that it is somehow healthy. It's a risk factor, and people do regularly take risks, like driving a car.
The whole "Why is (false statement)?" pattern is an old issue, and I'm not entirely convinced the Gemini lite model doing AI Overviews is where we should hope for a fix.
o3 gives the following response:
> Short answer: it isn’t. Rum isn’t “healthy” any more than other booze.
It goes on to give a bulleted list on why it's not healthy. Full output: https://chatgpt.com/share/6894871f-be9c-800e-8cc9-9e5ec2d5d6...
A sibling comment speculated that Google's response is expected because it's just a summary of bad search results. To test this, I ran a fresh query against o3 with the Web Search option enabled. The results:
> Rum is a tasty way to get ethanol into your body, but it is not a health food. Any modest upsides you may have read about (a bump in “good” HDL cholesterol, slight blood-thinning, a bit of antioxidant pickup from barrel aging) are the same things you’d get from any spirit, and recent large studies show those benefits are either tiny or disappear once you control for lifestyle. Meanwhile the well-documented downsides—cancer risk, liver disease, high blood pressure, weight gain, addiction, injuries—kick in from the first drink.
It goes on to give an extensive debunking of all the nonsense health benefits in the Google summary. It looks like it saw the same search results but was able to course correct and view those results with a more skeptical eye. Full output: https://chatgpt.com/share/6894885d-51dc-800e-ab8c-55af0a5bb2...
This post is about Google, so it's fair to focus on their search product, but I'd be careful to generalize to other LLMs. There are huge differences in quality between different models.
> People are also more likely to click into web content that helps them learn more — such as an in-depth review, an original post, a unique perspective or a thoughtful first-person analysis
So... not the blog spam that was previously prioritized by Google Search? It's almost as if SEO had some downsides they are only just now discovering.
1) Clicking on search results doesn't bring $ to Google and takes users off their site. Surely they're thinking of ways to address this. Ads?
2) Having to click off to another site to learn more is really a deficiency in the AI summary. I'd expect Google would rather you to go into AI mode where they control the experience and have more opportunities to monetize. Ads?
We are in the "early uber" and "early airbnb" days ... enjoy it while it's great!
https://news.ycombinator.com/item?id=44798215
From that article
Mandatory AI summaries have come to Google, and they gleefully showcase hallucinations while confidently insisting on their truth. I feel about them the same way I felt about mandatory G+ logins when all I wanted to do was access my damn YouTube account: I hate them. Intensely.
But why listen to a third party when you can hear it from the horse's mouth? They're not claiming anything about the quality of AI summaries. They are analyzing how traffic to external sites has been affected.
> With AI Overviews and more recently AI Mode, people are able to ask questions they could never ask before. And the response has been tremendous: Our data shows people are happier with the experience and are searching more than ever as they discover what Search can do now.

I'm sick of having to feel violated at every step I take on the Web these days.
> "what is the type of wrench called for getting up into tight spaces"
> AI search gives me an overview of wrench types (I was looking for "basin wrench")
> new search "basin wrench amazon"
> new search "basin wrench lowes"
> maps.google.com "lowes"
Notably, the information I was looking for was general knowledge. The only people "losing out" here are people running SEO-spammish websites that themselves (at this point) are basically hosting LLM-generated answers for me to find. These websites don't really need to exist now. I'm happy to funnel 100% of my traffic to websites that are representing real companies offering real services/info (ship me a wrench, sell me a wrench, show me a video on how to use the wrench, etc).
Agreed. The web will be better off for everyone if these sites die out. Google is what brought these into existence in the first place, so I find it funny Google is now going to be one of the ones helping to kill them. Almost like they accidentally realized SEO got out of control so they have to fix their mistake.
Then "content marketing" took over, and the content itself was now also used to sell a product or service, sort of an early form of influencer marketing, and that is when I think it all started to go downhill. We stopped seeing the more in-depth content which actually taught something, and got more surface-level keyword content that was just used to drive you to their product/service.
OTOH, the early web was also full of niche forums, most viewable without an account and indexable, of about any topic you could imagine where you could interact with knowledgeable folks in that niche. Google would have been more helpful to users by surfacing more of those forums vs. the blogs.
Those forums are IMO the real loss here. Communities have moved into discord, or another closed platform that doesn't appear on the web, and many that require accounts or even invitations to just view read only.
Or was it, "Live by the AdWord..."
Now an LLM just knows all the content you painstakingly gathered on your site. (It could also be, and is likely that it was also collected from other hard to find sites across the internet).
The original web killed off the value of a certain kind of knowledge (encyclopedias, etc.) and LLMs will do the same.
There are plenty of places to place the blame, but this is a function of any LLM, and a fundamental consequence of how LLMs work, not just a problem created and profited from by Google. Consider the open-weight models, where no one is actually profiting directly.
Today I learned that scraping and training on data that rights holders have often explicitly disallowed obtaining for free, or for that purpose, is "fundamental to how LLMs work". If it weren't, there would be no reason those who gathered the information couldn't profit by selling LLM makers that data.
I don't think there is any reason they couldn't do that
Gathering that knowledge is work, and if anyone should capture that value, it's the people doing the work. Seeing big tech slurp it all up, insert itself in the middle, and capture all the value is heartbreaking.
I hate that AI is destroying the economics of the independent web, and people cheer for it because they landed on Forbes one too many times. It's insulting to all the people who did their job right, and still get their work slurped up by an uncaring corporation with no way to stop them.
We will get the web that we deserve.
I wonder how many people will decide to just stop sharing technical knowledge because of that, and how much we will lose because of it.
Eg: instead of writing a blog post, you'll submit a knowledge article to an AI provider and that'll go into the AI's training set and it'll know "you" told it. And maybe (even more skeptical) pay you for it.
Again: highest degree of skepticism, but at the same time, that's the only way I could see people continuing to write content that teaches anything.
Even then, you'll get malicious compliance. The best case scenario would be a bit like Spotify: everyone getting fractions of a penny.
One problem with monopolies is their massive multiplicative effects on otherwise manageable problems.
I have zero interest in providing free labor for LLM companies with no human actually reading my words. I don't think I'm alone in that stance.
Similarly, I used to help people on forums for free. My reward was getting respect of my peers, the feeling of helping another human being and sometimes them being grateful, rare side job opportunities thanks to people finding my specialist posts. That was fun, being anonymous question-answering bot for AI to scrape is not.
Expectation of these in particular makes your blogging a product.
> with no human actually reading my words
But it - or at least the idea - is ultimately being read by humans, as long as some article is in the top results for relevance to some LLM prompts. It just may be summarized, or have particular parts extracted first, or be reworded for clarity (like I may ask for an "eli5" if I encounter something that's interesting but find the concepts going over my head), or otherwise preprocessed to fulfill the prompt parameters. All the actions the very human users would have had to do manually anyway to efficiently consume your content (or give up and move on if they couldn't be bothered) are now done automatically by LLM agents.
Of course there won't be nearly as many as those who publish with expectations, but history has shown that whenever there's a gap, someone tends to fill it just because. I'm not worried about content gaps at all. What I see is an overall increase in quality as no-expectation sources float to the top of search results.
I can also confirm that I will do a lot less of it since it's threatening the parts of my business that supported me and gave me so much free time to release things for free. It almost halved my audience, so I have to do the same amount of work for nearly half the pay, half the community, and half the credit.
lol what. That only works when the demand is actually paying. You are talking about FREE content!
>They literally have no expectations
I'm sorry but this is completely false.
If that were the case why do we have so many licenses? Why GPL/MIT/Apache, CC BY, CC BY-NC, CC BY-ND, CC BY-SA, CC BY-NC-SA, CC BY-NC-ND, and so on, when it's either "free or not"? Surely we don't need all this fluff when the only thing free for real is Public Domain?
Just consider this: we're living in an age where even people who publish MEMES on Reddit watermark them, because they don't want 9gag/Instagram/Facebook pages reposting them without permission/credit. And they are MEMES!! Even I find this cringe. But it proves that the author has some expectation. Even if you don't agree with their expectation, it proves that the expectation EXISTS.
What is next? Are you going to extend this to say that all web comics accessible for free on the web are "free free" so you should be allowed to remove watermarks to repost them on Facebook? You are filling the gap of having a single place where people can read funny comics for free, except you didn't make any of the comics and you have no right to post them. In fact, this is a great example. How is ChatGPT different from a guy that just reposts comics and memes on Facebook? It's literally the same thing.
And then next you are going to say that all videos posted on Youtube/TikTok are "free free" so you should be allowed to rip them off too.
I feel like you're just going to make an enemy out of everyone who publishes anything for free on the Internet if you start thinking like this.
You should have seen the internet 20+ years ago. It was rich in forums, interest sites, etc., of about any topic you could imagine, where people just shared because they had this interesting thing they wanted to put out there. Reddit still has a bit of it, though it's mostly a mess now.
> why do we have so many licenses?
Because of the way the law works in some jurisdictions, permission must be explicitly granted by the author via a license, and some authors (who are aware of the legal requirement) just draw one up without checking whether something already exists that fits their desires. And some want to tweak a license in some way to account for some other thing and end up creating a new license. See also CC0, 0BSD, WTFPL, Unlicense, and others [0].
BTW I'll clarify that it's not "expectations", but specifically "expectations of return". It's OK, and expected, for instance, that someone putting something out there for free wouldn't want to be held legally responsible if that thing is used in an illegal manner.
[0] https://en.m.wikipedia.org/wiki/Public_domain_equivalent_lic...
There is a difference between working for free for your community and working for free for a trillion dollar company's investors. Doubly so when you strip consent or attribution from the equation.
I hope that you understand that this is a metaphor for free labour intended for a community being exploited by big corporations with no way of stopping them.
(Though unfortunately in the Wild West Web of today it seems it does, practically speaking.)
And anybody who creates original content and wishes -- not just to be paid for that content -- but for people to actually see that content and engage with it. IOW the very people who fed the LLM revolution.
People who want to engage with original content will; people just won’t be forced into appearing they are engaging with content just to find answers to simple questions or find the specific information they are looking for before they leave the site…like always.
The result is likely to be more time on site with lower numbers of users; a more genuine reflection of an actual user base instead of search-fed propping of ad revenue numbers through effectively fake impressions.
To that point: There is not a single website or blog, ever, that I started visiting regularly by having ended up there from a search result. Literally, ever. From something a friend shared? Something I saw on HN? Something I found through a recommendation, an article that was posted on a different site, etc? Absolutely.
Discovery isn't search, but search can be a form of discovery, despite your experiences. They don't match mine.
You confuse original content creators with news conglomerates that always cry wolf about how they will be "put out of business". Of course, ignoring the fact that there have never been more paid content creators in history, you choose to be concerned only about the one type who complains most frequently.
It’s really nothing new, but hey, benefit of the doubt, maybe it’s your first time.
My original comment regarded people who create original content and want to be paid for it. That includes substack creators, CNN, and lots of enterprises in between. They all have the same problem with large LLMs either taking advantage of the tragedy of the commons or ignoring their robots.txt files and scraping their content even if they choose to not participate.
I haven't forgotten that news organizations said the same thing about Twitter, Facebook, etc. If you haven't noticed, news (especially local news) has been declining steadily for at least the last 25 years and several news organizations (again, especially local ones) _have_ either gone out of business or been bought out and gutted by hedge funds. Some of this for sure is due to miscalculations by those orgs, but the nature of those miscalculations matters. It's worth reading up on the history of media's financial relationship with social and search. It will help inform a lot on how it's going to go with LLMs and AI unless they find a way to make some deals. It behooves both sides.
Now, AI searches that search, pull various pages to examine their real contents, continue searching, etc., and then summarise/answer are realistically the only way to filter through all of said bullshit.
AI searches help with the clickbait problem as well, since even "reputable" news outlets are engaging in that fuckery now.
It's either we use AI to sift through the dead carcass of our old friend, or we enforce rules for a clean Internet - which I can definitely not see happening.
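That search / pull pages / summarise loop can be sketched in miniature. This is a toy sketch: `CORPUS`, `search_web`, `fetch_page`, and `answer` are all hypothetical stand-ins I made up for illustration; a real AI search agent would call a search API and an LLM rather than a dict and a term-overlap heuristic.

```python
# Toy corpus standing in for the web; every name here is a made-up stand-in.
CORPUS = {
    "https://example.com/basin-wrench":
        "A basin wrench reaches nuts in tight spaces under sinks",
    "https://example.com/clickbait":
        "You will not believe these ten amazing tool lists",
}

def search_web(query):
    """Naive 'search engine': return pages sharing any query term."""
    terms = set(query.lower().split())
    return [url for url, text in CORPUS.items()
            if terms & set(text.lower().split())]

def fetch_page(url):
    """Stand-in for pulling a page to examine its real contents."""
    return CORPUS[url]

def answer(query):
    """Search, fetch the hits, then 'summarise' by picking the page that
    matches the most query terms -- the slot where the LLM would go."""
    terms = set(query.lower().split())
    pages = [(url, fetch_page(url)) for url in search_web(query)]
    best_url, best_text = max(
        pages, key=lambda p: len(terms & set(p[1].lower().split())))
    return best_text

print(answer("wrench for tight spaces"))
```

The clickbait page never makes it into the answer because its actual contents, not its headline appeal, are what get examined - which is the whole point of the loop.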
Wikipedia as search engine, no Javascript, no "AI"
Query string "wrench tight spaces"
Basin_wrench is the #1 result
usage: sh 1.sh wrench tight spaces > 1.htm;firefox ./1.htm
#!/bin/sh
ns=$(
for x in \
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 \
100 101 \
118 119 \
710 711 \
828 829 \
2300 2301 2302 2303
do
printf \&ns$x=1
done
)
export x=https://en.wikipedia.org
echo "url=$x/w/index.php?search=$@$ns&fulltext=1&offset=0" \
|(echo user-agent=\"\" ;
echo header Accept:;
tr \\40 +) \
|curl -K/dev/stdin \
|curl -K/dev/stdin \
|(echo "<base href=$x />";cat)

The "ns" parameters are significant
If I use the URL in the comment with this query string, then Basin_wrench is the #3 result
Monkey_wrench is #1 and Wrench is #2
Basin_wrench is #3 result
Delete preposition: "of"
"type wrench for tight spaces"
Basin_wrench is #2 result
Delete preposition: "for"
"type of wrench tight spaces"
Basin_wrench is #2 result
Delete prepositions: "of", "for"
"type wrench tight spaces"
Basin_wrench is #2 result
Delete unnecessary noun "type" and preposition "of"
"wrench for tight spaces"
Basin_wrench is #1 result
Delete preposition "for"
"wrench tight spaces"
Basin_wrench is #1 result
"tight spaces wrench"
Basin_wrench is #1 result
"tight wrench spaces"
Basin_wrench is #1 result
Much less typing, no unnecessary nouns and prepositions
I like it, others might not
I also tried a number of less popular search engines with a non-AI search from the command line
Query string "type of wrench for tight spaces"
For several of them the #1 result was for star-plumbing.com
https://star-plumbing.com/what-kind-of-wrench-is-used-in-tig...
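For comparison, here is a rough Python equivalent of the shell script above. It is only a sketch: it builds the same en.wikipedia.org index.php full-text search URL with the namespace parameters enabled, but does not fetch it, and `build_search_url` is my own name for the helper.

```python
import urllib.parse

# The namespace IDs the shell script enables; ns0 is the main article
# namespace, the others opt in to talk, user, portal, and similar namespaces.
NAMESPACES = (list(range(16)) + [100, 101, 118, 119, 710, 711,
                                 828, 829, 2300, 2301, 2302, 2303])

def build_search_url(query):
    """Build the full-text MediaWiki search URL the script constructs."""
    params = [("search", query), ("fulltext", "1"), ("offset", "0")]
    params += [("ns%d" % n, "1") for n in NAMESPACES]
    return ("https://en.wikipedia.org/w/index.php?"
            + urllib.parse.urlencode(params))

print(build_search_url("wrench tight spaces"))
```

`urllib.parse.urlencode` handles the space-to-`+` conversion that the script does with `tr \\40 +`.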
The latter is what I used to do before AI summary was a thing, so I would logically assume that it should reduce the clicks to individual sites?
I'm sure Google knows this, and also knows that many of these "AI" answers wouldn't pass any prior standard of copyright fair use.
I suspect Google were kinda "forced" into it by the sudden popularity of OpenAI-Microsoft (who have fewer ethical qualms) and the desire to keep feeding their gazillion-dollar machine rather than have it wither and become a has-been.
"If we don't do it, everyone else will anyway, and we'll be less evil with that power than those guys." Usually that's just a convenient selfish rationalization, but this time it might actually be true.
Still, Google is currently ripping off and screwing over the Web, in a way that they still knew was wrong as recently as a few years ago, pre-ChatGPT.
if I just need a basic fact or specific detail from an article, and being wrong has no real world consequences, I'll probably just gamble it and take the AI's word for it most of the time. Otherwise I'm going to double check with an article/credible source
if anything, I think aimode from google has made it easier to find direct sources for what I need. A lot of the times, I am using AI for "tip of the tongue" type searches. I'll list a lot of information related to what I am trying to find, and the aimode does a great job of hunting it down for me
ultimately though, I do think some old aspects of google search are dying - some good, some bad.
Pros: I don't feel the need to sift through blog spam, I don't need to scroll past paid search results, and I can avoid the BS part of an article where someone goes through their entire life story before the actual content (I'm talking about things like cooking websites)
Cons: Google is definitely going to add ads to this tool at some point, some indie creators on the internet will have a harder time getting their name out.
my key takeaway from all this is that people will only stop at your site if they think your site has something to offer that the AI can't. and this isn't new. people have been stealing blog content and turning it into videos forever. people will steal paid tutorials and release the content for free on a personal site. people will basically take content from site-X and repost it in a more consumable format on site-Y. and this kind of theft is so obvious, and no one liked seeing the same thing reposted a thousand times. I think this is a long-term win
I've seen many outrageously wrong summaries that were contradicted sometimes by articles on the first page of regular search. Are people happy with the slop? Maybe, but I could see people getting bored by it very quickly. There already is a healthy comment backlash against ChatGPT-generated voice over narratives in YouTube videos.
This attribute exists, but this value comes from a bootstrap plugin that you have to install separately. It generated quite a few clicks and high-quality searches from me.
If we don’t actively archive, incentivize, or reimagine those spaces, AI-generated content may become a sterile echo chamber of what’s “most likely,” not what’s most interesting. The risk isn’t that knowledge disappears — it’s that flavor, context, and dissent do.
This will be able to come again to the fore once SEO'd spam dies off due to click starvation.
Use DuckDuckGo, use Kagi, use virtually anything OTHER than Google.
Dumb AI is one thing. Not autocompleting "Donald Trump assassination attempt" (or any number of other things) is a choice
Having an optional digest of the SERP makes link selection easier, especially if I have only a rough idea of what I'm looking for.
Because the search results are crap.
> and higher quality clicks
Because the users are fooled by some results and click on them, only to lose time and curse.
It does kind of contradict my own assumption that most people just take what the chatbot says as gospel and don't look any deeper, but I also generally think it's a bad idea to assume most people are stupid. So maybe there's a bit of a contradiction there.
But I also share your assumption about "most people".
For me at least with Perplexity, Grok and ChatGPT, all results come back with citations in every paragraph, so I haven't had to do that.