One can also interpret this as search having become such a shit experience that the summaries let users skip it entirely.
They don't care about discoverability. It's all about getting to ads as quickly as possible. Coming soon: ad links in summaries. That's what they're building toward here.
It has become shockingly common to see people sharing a screenshot of an AI response as evidence to back up their argument. I was once busy with something, so I asked my partner if he could look something up for me; he confidently shared a screenshot of an AI response from Google. It was, of course, completely wrong, and I had to do my own searching anyway (annoyingly having to scroll past and ignore the AI response that kept telling me the same wrong information).
We have to remember that Google is incentivized to keep people on Google longer. Being able to just summarize things, instead of getting people off Google as quickly as possible, is a gold mine for them; of course they are going to push it as hard as possible.
Isn't that expected from "higher quality clicks"?
edit: AI doesn't even have a corrupting, disgusting physical body; of course it should be recommending clean diets and clean spirits!
Any response will be 'reasonable' by that standard.
But if I simply remove the "why", it clearly states: "Rum is an alcoholic beverage that does not have any significant health benefits."
Man, I just love that we are pushing this technology that is clearly just "garbage in, garbage out".
Side note: next time we do blood work, if my good cholesterol is still a bit low, I'm totally going to tell my doctor I've been drinking more rum. I am sure he is going to be thrilled. I wonder if I could buy rum with my HSA if I had a screenshot of this response... (/s if really necessary)
Asking AI to tell reality from fiction is a bit much when the humans it gets its info from can’t, but this is at least not ridiculous.
I agree with that, but the problem is that it is being positioned as a reliable source of information, and it is being treated as such. Google's disclaimer "AI responses may include mistakes. Learn more" only shows up if you click the button to expand the response, and even then it's smaller, light-gray text, clearly overshadowed by the glowing button inviting you to do a deep dive.
The problem is just how easy it is to "lead on" one of these models. Phrasing a search as "why is rum healthy" implies that I already think it is healthy, so of course the model leans into that, and that is why this is so broken. But "is rum healthy" actually produces a more factual answer:
> Rum is an alcoholic beverage that does not have any significant health benefits. While some studies have suggested potential benefits, such as improved blood circulation and reduced risk of heart disease, these findings are often based on limited evidence and have not been widely accepted by the medical community.
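This framing effect is easy to reproduce against any chat-style LLM, not just Google's. Here's a minimal sketch assuming an OpenAI-style Python client; the model name and prompts are placeholders for illustration, not a claim about what actually powers AI Overviews:

```python
# Compare a leading prompt ("why is X healthy") with a neutral one ("is X healthy").
# Assumes the `openai` Python package (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

prompts = [
    "why is rum healthy",  # leading: presupposes the claim is true
    "is rum healthy",      # neutral: asks whether the claim is true
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {prompt} ---")
    print(response.choices[0].message.content)
```

Run side by side, the difference tends to be stark: the leading phrasing gets treated as a premise to elaborate on, while the neutral phrasing gets treated as a claim to check.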
That's because of SEO. Top results are assumed reliable, because there is currently no other way to ascertain reliability in an efficient and scalable way, and the top results are the sites that have optimized their content and keywords to be the top results.
The whole "Why is (false statement)?" Is an old issue and I'm not entirely convinced the Gemini lite model doing AI overviews is who we hope to fix that.
> People are also more likely to click into web content that helps them learn more — such as an in-depth review, an original post, a unique perspective or a thoughtful first-person analysis
So... not the blog spam that was previously prioritized by Google Search? It's almost as if SEO had some downsides they are only just now discovering.
1) Clicking on search results doesn't bring $ to Google and takes users off their site. Surely they're thinking of ways to address this. Ads?
2) Having to click off to another site to learn more is really a deficiency in the AI summary. I'd expect Google would rather you go into AI Mode, where they control the experience and have more opportunities to monetize. Ads?
We are in the "early uber" and "early airbnb" days ... enjoy it while it's great!
https://news.ycombinator.com/item?id=44798215
From that article:
> Mandatory AI summaries have come to Google, and they gleefully showcase hallucinations while confidently insisting on their truth. I feel about them the same way I felt about mandatory G+ logins when all I wanted to do was access my damn YouTube account: I hate them. Intensely.
But why listen to a third party when you can hear it from the horse's mouth? They're not claiming anything about the quality of AI summaries; they are analyzing how traffic to external sites has been affected.
> With AI Overviews and more recently AI Mode, people are able to ask questions they could never ask before. And the response has been tremendous: Our data shows people are happier with the experience and are searching more than ever as they discover what Search can do now.
I’m sick of having to feel violated every step I take on the Web these days.
> "what is the type of wrench called for getting up into tight spaces"
> AI search gives me an overview of wrench types (I was looking for "basin wrench")
> new search "basin wrench amazon"
> new search "basin wrench lowes"
> maps.google.com "lowes"
Notably, the information I was looking for was general knowledge. The only people "losing out" here are people running SEO-spammish websites that themselves (at this point) are basically hosting LLM-generated answers for me to find. These websites don't really need to exist now. I'm happy to funnel 100% of my traffic to websites that are representing real companies offering real services/info (ship me a wrench, sell me a wrench, show me a video on how to use the wrench, etc).
Agreed. The web will be better off for everyone if these sites die out. Google is what brought them into existence in the first place, so I find it funny that Google is now going to be one of the ones helping to kill them. Almost as if they belatedly realized SEO got out of control and now have to fix their mistake.
Then "content marketing" took over, and the content itself was now also used to sell a product or service, sort of an early form of influencer marketing and that is when I think it all started to go down hill. We stopped seeing the more in depth content which actually taught something, and more surface level keywords that were just used to drive you to their product/service.
OTOH, the early web was also full of niche forums on just about any topic you could imagine, most viewable without an account and indexable, where you could interact with knowledgeable folks in that niche. Google would have been more helpful to users by surfacing more of those forums vs. the blogs.
Those forums are IMO the real loss here. Communities have moved into Discord or other closed platforms that don't appear on the web, and many require accounts or even invitations just to view, read-only.
Now an LLM just knows all the content you painstakingly gathered on your site. (It likely also collected that content from other hard-to-find sites across the internet.)
The original web killed off the value of a certain kind of knowledge (encyclopedias, etc.) and LLMs will do the same.
There are plenty of places to put the blame, but this is a function of any LLM and a fundamental part of how LLMs work, not just a problem created and profited from by Google; take the open-weight models, for example, where no one is actually profiting directly.
First time learning that scraping and training on data that rights holders have often explicitly disallowed obtaining for free, or for that purpose, is "fundamental to how LLMs work". If it isn't, then there is no reason those who gathered the information couldn't profit by selling that data to LLM makers.
I don't think there is any reason they couldn't do that.
The latter is what I used to do before AI summaries were a thing, so I would logically assume it should reduce clicks to individual sites?
I'm sure Google knows this, and also knows that many of these "AI" answers wouldn't pass any prior standard of copyright fair use.
I suspect Google was kinda "forced" into it by the sudden popularity of OpenAI-Microsoft (who have fewer ethical qualms) and by the desire to keep feeding their gazillion-dollar machine rather than have it wither and become a has-been.
"If we don't do it, everyone else will anyway, and we'll be less evil with that power than those guys." Usually that's just a convenient selfish rationalization, but this time it might actually be true.
Still, Google is currently ripping off and screwing over the Web, in a way that they themselves knew was wrong as recently as a few years ago, pre-ChatGPT.
if I just need a basic fact or a specific detail from an article, and being wrong has no real-world consequences, I'll probably just gamble and take the AI's word for it most of the time. Otherwise I'm going to double-check with an article/credible source
if anything, I think AI Mode from Google has made it easier to find direct sources for what I need. A lot of the time, I am using AI for "tip of the tongue" type searches: I'll list a lot of information related to what I am trying to find, and AI Mode does a great job of hunting it down for me
ultimately though, I do think some old aspects of Google search are dying: some good, some bad.
Pros: I don't feel the need to sift through blog spam, I don't need to scroll past paid search results, and I can avoid the BS part of an article where someone goes through their entire life story before the actual content (I'm talking about things like cooking websites)
Cons: Google is definitely going to add ads to this tool at some point, and some indie creators on the internet will have a harder time getting their name out.
my key takeaway from all this is that people will only stop at your site if they think it has something to offer that the AI can't. and this isn't new. people have been stealing blog content and turning it into videos forever. people will steal paid tutorials and release the content for free on a personal site. people will basically take content from site-X and repost it in a more consumable format on site-Y. this kind of theft is so obvious, and no one liked seeing the same thing reposted a thousand times. I think this is a long-term win
I've seen many outrageously wrong summaries that were sometimes contradicted by articles on the first page of regular search. Are people happy with the slop? Maybe, but I could see people getting bored of it very quickly. There is already a healthy comment backlash against ChatGPT-generated voice-over narration in YouTube videos.
The attribute exists, but that value comes from a Bootstrap plugin you have to install separately. Figuring that out generated quite a few clicks and high-quality searches from me.
If we don’t actively archive, incentivize, or reimagine those spaces, AI-generated content may become a sterile echo chamber of what’s “most likely,” not what’s most interesting. The risk isn’t that knowledge disappears — it’s that flavor, context, and dissent do.
Use DuckDuckGo, use Kagi, use virtually anything OTHER than Google.
Dumb AI is one thing. Not autocompleting "Donald Trump assassination attempt" (or any number of other things) is a choice
Having an optional digest of the SERP makes link selection easier, especially if I have only a rough idea of what I'm looking for.
It does kind of contradict my own assumption that most people just take what the chatbot says as gospel and don't look any deeper, but I also generally think it's a bad idea to assume most people are stupid. So maybe there's a bit of a contradiction there.
But I also share your assumption about "most people".
For me at least, with Perplexity, Grok, and ChatGPT, all results come back with citations in every paragraph, so I haven't had to do that.