One can also interpret this as: search was such shit that the summaries are letting users skip that horrible user experience.
They don’t care about discoverability. It’s all ads as quickly as possible. Coming soon is ad links in summaries. That’s what they’re getting to here.
It has become shockingly common to see people sharing a screenshot of an AI response as evidence to back up their argument. I was once busy with something, so I asked my partner if he could look something up for me; he confidently shared a screenshot of an AI response from Google. It was, of course, completely wrong, and I had to do my own searching anyway (annoyingly needing to scroll past and ignore the AI response that kept trying to tell me the same wrong information).
We have to remember that Google is incentivized to keep people on Google longer. Being able to just summarize stuff instead of sending people off Google as quickly as possible is a gold mine for them; of course they are going to push it as hard as possible.
Isn't that expected from "higher quality clicks"?
edit: AI doesn't even have a corrupting, disgusting physical body, of course it should be recommending clean diets and clean spirits!
But if I simply remove the "why", it clearly states "Rum is an alcoholic beverage that does not have any significant health benefits."
Man I love so much that we are pushing this technology that is clearly just "garbage in, garbage out".
Side Note: totally now going to tell my doctor I have been drinking more rum next time we do blood work if my good cholesterol is still a bit low. I am sure he is going to be thrilled. I wonder if I could buy rum with my HSA if I had a screenshot of this response... (\s if really necessary)
Asking AI to tell reality from fiction is a bit much when the humans it gets its info from can’t, but this is at least not ridiculous.
I agree with that, but the problem is that it is being positioned as a reliable source of information, and is being treated as such. Google's disclaimer "AI responses may include mistakes. Learn more" only shows up if you click the button to show more of the response, is in smaller, light gray text, and is clearly overshadowed by the button with lights rotating around it to do a deep dive.
The problem is just how easy it is to "lead on" one of these models. Simply phrasing a search like "why is rum healthy" implies that I already think it is healthy, so of course it leans into that, and that is why this is so broken. But "is rum healthy" actually provides a more factual answer:
> Rum is an alcoholic beverage that does not have any significant health benefits. While some studies have suggested potential benefits, such as improved blood circulation and reduced risk of heart disease, these findings are often based on limited evidence and have not been widely accepted by the medical community.
> People are also more likely to click into web content that helps them learn more — such as an in-depth review, an original post, a unique perspective or a thoughtful first-person analysis
So... not the blog spam that was previously prioritized by Google Search? It's almost as if SEO had some downsides they are only just now discovering.
1) Clicking on search results doesn't bring $ to Google and takes users off their site. Surely they're thinking of ways to address this. Ads?
2) Having to click off to another site to learn more is really a deficiency in the AI summary. I'd expect Google would rather you go into AI mode where they control the experience and have more opportunities to monetize. Ads?
We are in the "early uber" and "early airbnb" days ... enjoy it while it's great!
https://news.ycombinator.com/item?id=44798215
From that article:
> Mandatory AI summaries have come to Google, and they gleefully showcase hallucinations while confidently insisting on their truth. I feel about them the same way I felt about mandatory G+ logins when all I wanted to do was access my damn YouTube account: I hate them. Intensely.
But why listen to a third party when you can hear it from the horse's mouth? They're not claiming anything about the quality of AI summaries. They are analyzing how traffic to external sites has been affected.
> With AI Overviews and more recently AI Mode, people are able to ask questions they could never ask before. And the response has been tremendous: Our data shows people are happier with the experience and are searching more than ever as they discover what Search can do now.
I’m sick of having to feel violated every step I take on the Web these days.
> "what is the type of wrench called for getting up into tight spaces"
> AI search gives me an overview of wrench types (I was looking for "basin wrench")
> new search "basin wrench amazon"
> new search "basin wrench lowes"
> maps.google.com "lowes"
Notably, the information I was looking for was general knowledge. The only people "losing out" here are people running SEO-spammish websites that themselves (at this point) are basically hosting LLM-generated answers for me to find. These websites don't really need to exist now. I'm happy to funnel 100% of my traffic to websites that are representing real companies offering real services/info (ship me a wrench, sell me a wrench, show me a video on how to use the wrench, etc).
Agreed. The web will be better off for everyone if these sites die out. Google is what brought these into existence in the first place, so I find it funny Google is now going to be one of the ones helping to kill them. Almost like they accidentally realized SEO got out of control so they have to fix their mistake.
The latter is what I used to do before AI summaries were a thing, so I would logically assume that it should reduce the clicks to individual sites?
I'm sure Google knows this, and also knows that many of these "AI" answers wouldn't pass any prior standard of copyright fair use.
I suspect Google were kinda "forced" into it by the sudden popularity of OpenAI-Microsoft (who have fewer ethical qualms) and the desire to keep feeding their gazillion-dollar machine rather than have it wither and become a has-been.
"If we don't do it, everyone else will anyway, and we'll be less evil with that power than those guys." Usually that's just a convenient selfish rationalization, but this time it might actually be true.
Still, Google is currently ripping off and screwing over the Web, in a way that they themselves knew was wrong as recently as a few years ago, pre-ChatGPT.
An example earlier this year was a search I did overseas for an airport terminal for a certain airline and it showed the wrong one, even though scrolling through the results right below the AI summary (including the airport authority website) showed the correct information. (https://pbs.twimg.com/media/GiBOA2Ab0AA1x0Q?format=jpg&name=...)
In the next few years the SEO industry is going to rewrite its playbook to take advantage of such weaknesses, pushing garbage and misinformation into AI summaries. This in turn will require users (or their agents) to do extra due diligence.
if I just need a basic fact or specific detail from an article, and being wrong has no real world consequences, I'll probably just gamble it and take the AI's word for it most of the time. Otherwise I'm going to double check with an article/credible source
if anything, I think AI Mode from Google has made it easier to find direct sources for what I need. A lot of the time, I am using AI for "tip of the tongue" type searches. I'll list a lot of information related to what I am trying to find, and AI Mode does a great job of hunting it down for me
ultimately though, I do think some old aspects of google search are dying - some good, some bad.
Pros: I don't feel the need to sift through blog spam, I don't need to scroll past paid search results, and I can avoid the BS part of an article where someone goes through their entire life story before the actual content (I'm talking about things like cooking websites)
Cons: Google is definitely going to add ads to this tool at some point, some indie creators on the internet will have a harder time getting their name out.
my key takeaway from all this is that people will only stop at your site if they think your site has something to offer that the AI can't. and this isn't new. people have been stealing blog content and turning it into videos forever. people will steal paid tutorials and release the content for free on a personal site. people will basically take content from site-X and repost it in a more consumable format on site-Y. and this kind of theft is so obvious, and no one liked seeing the same thing reposted a thousand times. I think this is a win in the long term
I've seen many outrageously wrong summaries that were contradicted sometimes by articles on the first page of regular search. Are people happy with the slop? Maybe, but I could see people getting bored by it very quickly. There already is a healthy comment backlash against ChatGPT-generated voice-over narration in YouTube videos.
It does kind of contradict my own assumption that most people just take what the chatbot says as gospel and don't look any deeper, but I also generally think it's a bad idea to assume most people are stupid. So maybe there's a bit of a contradiction there.
But I also share your assumption about "most people".