https://arstechnica.com/google/2025/01/just-give-me-the-fing...
Instructions are here: https://support.mozilla.org/en-US/kb/add-custom-search-engin...
The "URL with %s in place of search term" to add is:
https://www.google.com/search?q=%s&client=firefox-b-d&udm=14
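For the curious, the template just interpolates the URL-encoded query where the %s sits. A minimal Python sketch of what the browser does with it (the `web_only_search_url` helper is hypothetical, and the `client=firefox-b-d` parameter from the template above is omitted for brevity):

```python
from urllib.parse import quote_plus

# Hypothetical helper mimicking what the browser does with the "%s"
# template: URL-encode the query and splice it in. The udm=14 parameter
# requests the plain "Web" results view (no AI overview).
def web_only_search_url(query: str) -> str:
    return "https://www.google.com/search?q=" + quote_plus(query) + "&udm=14"

print(web_only_search_url("who framed roger rabbit"))
# → https://www.google.com/search?q=who+framed+roger+rabbit&udm=14
```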
This is a new line of business that provides them with more ad space to sell.
If the overview becomes a trusted source of information, then all they need to do is inject ads into the overviews. They already sort of do that. Imagine it as a sort of text-based product placement.
You might think that's the correct way to do it, but there is likely much more to it than it seems.
If it weren't tricky at all, you can bet they would've done it already to maximize revenue.
It never will. By disincentivizing publishers they're stripping away most of the motivation for the legitimate source content to exist.
AI search results are a sort of self-cannibalism. Eventually AI search engines will only have what they cached before the web became walled gardens (old data), and public gardens that have been heavily vandalized with AI slop (bad data).
My guess is that Google/OpenAI are eyeing each other to see who does this first.
Why would that work? It's a proven business model. Example: I use LLMs for product research (e.g. which washing machine to buy). Retailer pays if link to their website is included in the results. Don't want to pay? Then redirect the user to buy it on Walmart instead of Amazon.
People who are aware of that and care enough to change consumption habits are an inconsequential part of the market.
Maybe we can type the commands, but that is also quite slow compared with tapping/clicking/scrolling etc.
But they're not often confidently wrong like AI summaries are.
The AI overview sucks but it can't really be a lot worse than that :)
First, in the pre-training stage, humans curate and filter the data that's actually used for training.
Then, in the fine-tuning stage, people write ideal examples to teach task performance.
Then there is reinforcement learning from human feedback (RLHF), where people rank multiple variations of the answers an AI gives, and that's part of the reinforcement loop.
So there is really quite a bit of human effort and direction that goes into preventing the garbage-in, garbage-out situation you're referring to.
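To make the RLHF ranking step concrete, here's a minimal illustrative sketch (not any lab's actual code): a human's ordering of candidate answers gets expanded into pairwise preferences, which is the usual form of reward-model training data.

```python
# Illustrative sketch of the RLHF ranking step described above:
# a human orders several candidate answers best-first, and that
# ordering is expanded into (preferred, rejected) pairs that a
# reward model can be trained on.
def rankings_to_pairs(ranked_answers):
    """ranked_answers: candidate answers, best first."""
    pairs = []
    for i, better in enumerate(ranked_answers):
        for worse in ranked_answers[i + 1:]:
            pairs.append((better, worse))
    return pairs

pairs = rankings_to_pairs(["concise correct answer",
                           "verbose correct answer",
                           "confident wrong answer"])
# Each higher-ranked answer is preferred over every lower-ranked one,
# so three ranked answers yield three training pairs.
```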
I noticed Google's new AI summary lets me click on a link in the summary, and the links are posted to the right.
Those clicks are available as a signal; they might not be surfaced yet, but I'm curious whether they show up anywhere as data.
Google being able to create summaries from actual web search results will be interesting compared to other models trying to get the same done without similar search results at their disposal.
The new search engine could be Google doing the search and compiling the results for us the way we do manually.
And may get them in some anti-trust trouble once publishers start fighting back, similar to AMP, or their thing with Genius and song lyrics. Turns out site owners don't like when Google takes their content and displays it to users without forcing said users to click through to the actual website.
And there's no AI garbage sitting in the top of the engine.
Searching for “who is Roger rabbit” gives me Wikipedia, IMDb and a film site as results.
Searching for “who is Roger rabbit?” gives me a “quick answer” LLM-generated response: “Roger Rabbit is a fictional animated anthropomorphic rabbit who first appeared in Gary K. Wolf's 1981 novel…” followed by a different set of results. It seems the results are influenced by the sources/references the LLM generated.
In your case, I think it's just the question mark at the end that somehow has an impact on the results you see.
However, it's pretty bad for local results and shopping. I find that anytime I need to know a local store's hours or find the cheapest place to purchase an item, I need to pivot back to Google. Other than that, it's become my default for most things.
But AI as a product most certainly does! I was trying to figure out why a certain AWS tool stopped working, and Gemini figured it out for me. In the past I would have browsed multiple forums to figure it out.
Google search, as others have mentioned in this thread, increasingly fails to give me high-quality material anyway. Mostly it's just pages of SEO spam. I prefer that the LLM eat that instead of me (just spit back up the relevant stuff, thankyouverymuch).
Honestly though, increasingly the internet for me is 1) a distraction from doing real work 2) YouTube (see 1) and 3) a wonderful library called archive.org (which, if I could grab a local snapshot would make leaving the internet altogether much, much easier).
- Hobbyist site
- Forum or UGC
- Academic/gov
- Quality news which is often paywalled
Most of that stuff doesn't depend on ad clicks. The things that do depend on ad clicks are usually infuriating slop. I refuse to scroll through three pages of BS to get to the information I want.
We know they aren't oracles and come up with a lot of false information in response to factual questions.
If you do a Google (or other engine) search, you have to invest time pawing through the utter pile of shit that Google ads created on the web. Info that's hidden under reams of unnecessary text, potentially out of date, potentially not true; you'll need to evaluate a list of links and, probably, open multiple of them.
If you do an AI "search", you ask one question and get one answer. But the answer might be a hallucination or based on incorrect info.
However, a lot of the time, you might be searching for something you already have an idea of, whether it's how to structure a script or what temperature pork is safe at; you can use your existing knowledge to assess the AI's answer. In that case the AI search is fast.
The rest of the time, you can at least tell the AI to include links to its references, and check those. Or its answer may help you construct a better Google search.
Ultimately search is a trash heap of Google's making, and I have absolute confidence in them also turning AI into a trash heap, but for now it is indeed faster for many purposes.
People will go to museums to see how complicated the pre-AI era was.
Google AI has been listing incorrect internal extensions, causing departments to field calls from people trying to reach unrelated divisions and services; listing times and dates of events that don't exist at our addresses, which people then show up to; and generally misdirecting and misguiding people who really need correct information from a source of truth like our websites.
We have to track each and every one of these problems down, investigate and evaluate whether we can reproduce them, give them a "thumbs down" to then be able to submit "feedback", with no assurance it will be fixed in a timely manner and no obvious way to opt ourselves out of it entirely. For something beyond our consent and control.
It's worse than when Google and Yelp would create unofficial business profiles on your behalf and then hold them hostage until you registered with their services to change them.
- Looking up a hint for the casino room in the game "Blue Prince", the AI summary gave me details of the card games on offer at the "Blue Prince Casino" in the next suburb over. There is no casino there.
- Looking up workers' rights during a discussion of something to do with management, it directly contradicted the legislation and official government guidance.
I can't imagine how frustrating it must be for business-owners, or those providing information services to find that their traffic is intercepted and their potential visitors treated to an inaccurate version on the search page.
No different from Google search results.
Let’s not pretend that some websites aren’t straight up bullshit.
There’s blogs spreading bullshit, wrong info, biased info, content marketing for some product etc.
And lord knows comments are frequently wrong, just look around Hackernews.
I’d bet that LLMs are actually wrong less often than typical search results, because they pull from far greater training data. “Wisdom of the crowds”.
Which of these is better?
1. Here's the answer (but it's misinformation)
2. Here are some websites that look like they might have the answer
6 months ago, "what temp is pork safe at?" was a few clicks and long SEO-optimised blog-post answers away, and usually all in F not C ... despite Google knowing location ... I used it at the time as an example of 'how hard can this be?'
First sentence of Google AI response right now: "Pork is safe to eat when cooked to an internal temperature of 145°F (63°C)"
People have been eating pork for over 40,000 years. There’s speculation about whether pork or beef was first a part of the human diet.
(5000 words later)
The USDA recommends cooking pork to at least 145 degrees.
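For what it's worth, the conversion those blog posts bury is one line of arithmetic. A trivial sketch (only the 145°F figure comes from the thread):

```python
def f_to_c(fahrenheit: float) -> float:
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (fahrenheit - 32) * 5 / 9

# The USDA's 145°F works out to 63°C (rounded).
print(round(f_to_c(145)))  # → 63
```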
First result under the overview is the National Pork Board, shows the answer above the fold, and includes visual references: https://pork.org/pork-cooking-temperature/
Most of the time if there isn't a straightforward primary source in the top results, Google's AI overview won't get it right either.
Given the enormous scale and latency constraints they're dealing with, they're not using SOTA models, and they're probably not feeding the model 5000 words worth of context from every result on the page.
Maybe they could just show the links that match your query and skip the overview. Sounds like a billion-dollar startup idea, wonder why nobody’s done it.
I know you can’t necessarily trust anything online, but when the first hit is from the National Pork Board, I’m confident the answer is good.
Trust it if you want I guess. Be cautious though.
> The next full moon in New York will be on August 9th, 2025, at 3:55 a.m.
"full moon time LA"
> The next full moon in Los Angeles will be on August 9, 2025, at 3:55 AM PDT.
I mean, it certainly gives an immediate answer...
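An immediate answer, but not a possible one: a full moon is a single instant, so the New York and Los Angeles clock readings must differ by their UTC-offset gap. A quick check with Python's zoneinfo, taking the 3:55 a.m. EDT figure from the overview quoted above:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# The full moon is one instant worldwide. If New York observes it at
# 3:55 a.m. EDT (UTC-4), Los Angeles (UTC-7 in August) must observe
# the same instant three hours earlier, not at 3:55 a.m. PDT.
ny = datetime(2025, 8, 9, 3, 55, tzinfo=ZoneInfo("America/New_York"))
la = ny.astimezone(ZoneInfo("America/Los_Angeles"))
print(la.strftime("%Y-%m-%d %H:%M %Z"))  # → 2025-08-09 00:55 PDT
```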
If you made a bet with your friend and are using the AI overview to settle it, fine. But please, please click on an actual result from a trusted source if you're deciding what temperature to cook meat to.
But SEO slop machines have made it so hard to find the good websites without putting in more legwork than makes sense a lot of the time. Funnily enough, this makes AI look like a good option to cut through all the noise despite its hallucinations. That's obviously not acceptable when it comes to food safety concerns though.
First result: https://www.porkcdn.com/sites/porkbeinspired/library/2014/06...
Second result: https://pork.org/pork-cooking-temperature/
Remember the past scandals with Google up-ranking and down-ranking various things? This isn't a new problem. With regard to how the average person gets information, Google doesn't really have more control, because people aren't clicking through as much.
~57% of their revenue is from search advertising. How do they plan on replacing that?
Seriously, Futurama and Cyberpunk and 1984 were all supposed to be warnings... not how-to manuals.