Instead of structuring a Google query in an attempt to find a relevant page filled with ads, I just ask Copilot and it gives a fully digested answer that satisfies my question.
What surprises me is that it needs very little context.
If I ask ‘Linux "sort" command line for sorting the third column containing integers’, it replies with “sort -k3,3n filename” along with explanations and extensions for tab-separated columns.
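For reference, that answer checks out (a quick sketch with GNU sort; the sample files here are made up):

```shell
# sample data: three whitespace-separated columns, third is an integer
printf 'apple red 42\nbanana yellow 7\ncherry dark 19\n' > fruit.txt

# sort numerically on the third column only (-k3,3 limits the key to field 3)
sort -k3,3n fruit.txt
# banana yellow 7
# cherry dark 19
# apple red 42

# for tab-separated columns, set the field separator explicitly
# ($'\t' is bash syntax for a literal tab)
printf 'apple\tred\t42\nbanana\tyellow\t7\n' > fruit.tsv
sort -t$'\t' -k3,3n fruit.tsv
```

Without the `,3` in `-k3,3`, the sort key would run from field 3 to the end of the line, which usually still works here but is not quite the same thing.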
Another effect is that Google has already been targeted for a lot of abuse, with SEO, backlinks, etc. ChatGPT has not yet seen many successful attempts by third parties to manipulate the essence of the mechanism in order to advertise.
Finally, ChatGPT is designed to parasitize the Web/Google: it shows the important information without the organic ads. If a law firm puts out information on law and real-estate regulations, Google shows the info, but also the law firm's whole website, with the logo, motto, and navbar full of other info and calls to action. ChatGPT cuts all that out.
So while I don't deny that there is a technological advancement, there is a major component to the quality here that is just based on n+1 white-label parasitism.
At least it seems likely to be more expensive for attackers than the last iteration of the spam arms race. Whether, or to what extent, search quality is actually undermined by spammers vs. Google themselves is a matter for some debate anyway.
No, though it does provide something like security through obscurity. The models still rely on search engines to locate sources for detailed answers, but the actual search engine being used is not itself publicly exposed. Gaming its rankings may still have value (not to show ads, obviously, but to get your message into the answers the AI tool yields), but it is harder to assess how well you've done, and it may take more pages, not just high-ranking ones, to be effective. Instead of checking where you rank, which is easy with a small number of tests of particular search queries, you have to test how well your message comes through in answers to realistic questions, which takes more queries and yields more ambiguous results. For highly motivated parties (especially for things like high-impact political propaganda around key issues) it may still be worth doing, though.
Wow, that's actually quite a lot. You can also just say "sort 3rd col nix."
A lot of my interactions with LLMs are like that, and it is impressive how little it cares about typos, missing words, and sparse context. For regular expressions, language-specific idioms, and Unix command-line gymnastics ("was it -R or -r for this command?"), merely grunting at the LLM provides not only the exact answer but also context, most of the time.
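The -R/-r confusion is real, since the flags genuinely differ by tool. A few examples with standard tools (the directory and file names here are made up):

```shell
mkdir -p project src              # scratch directories for the examples
echo 'TODO: fix parser' > src/notes.txt

# chmod only accepts uppercase -R for recursion
chmod -R u+w project/

# GNU grep accepts both: -r recurses, -R additionally follows symlinks
grep -r 'TODO' src/

# cp: -R is the portable (POSIX) recursion flag; GNU cp also takes -r
cp -R src/ backup/
```

So the honest answer to "was it -R or -r" is often "it depends on the command", which is exactly the kind of lookup an LLM shortcuts nicely.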
Googling for 4 or 5 different versions of your question and eventually having to wade through 3,000 lines of a web version of a man page is just not my definition of a good time anymore.
Tangent: it annoys me so much that there's a persistent, useless, tiny horizontal scroll on the Bing page! When scrolling down, it rattles the page left and right.
I wish there was a free Gmail alternative (if there is one, lmk!).
Edit: downvotes for expressing an opinion, low.
(also, there is no such thing as a free lunch https://en.wikipedia.org/wiki/No_such_thing_as_a_free_lunch)
Maybe I just don't have the right ChatGPT++ subscription.
I have replaced Google with Perplexity. It backs up every answer with links, so I find it to be more trustable than ChatGPT.
Perplexity also keeps their index current, so you're not getting answers from the world as it existed 10 months ago. (ChatGPT says its knowledge cutoff is June 2024, Perplexity says its search index includes results up to April 29, 2025, and to prove it, it gives me the latest news.)
What's interesting is the monetization aspect. Right now none of them act as ad vehicles. Who will be the first to fall?
They have done it all with their own custom silicon (TPUs, no Nvidia)
That's like saying Xerox invented the graphical user interface. It doesn't matter, because Xerox failed to commercialize it.
They have lower costs, less overhead without the Nvidia tax.
This isn't a hype cycle they need to catch. It's the final technology. Steady and stable will win here.
That's too soon to say. We're only at the first down.
Based on reports from today, April 29, 2025, a Canadian federal election was held on Monday, April 28, 2025.
The Liberal Party, led by Mark Carney, won the election. Therefore, Mark Carney is the candidate who has won the position of Prime Minister. Reports indicate this is the Liberal Party's fourth consecutive term in power. It's still being determined in some initial reports whether they will form a majority or a minority government.
Sources and related content (links provided)
I also don't think ChatGPT is very reliable for looking things up. I think Google has just been degraded so far as a product that it is near worthless for anything more than the bottom 40% of scenarios.
[1] https://thinkmagazine.mt/the-earth-is-flat/
The point here is information is hard... unless you do your own thinking, your own testing, you can't be sure. But I agree references are nice.
So, Google should De-Google itself?
I'm wondering if ChatGPT (and similar products) will mimic social media as a vector of misinformation and confirmation bias.
LLMs are very clearly not that today, but social media didn't start out anything like the cesspool it has become.
There are a great many ways that being the trusted source of customized, personalized truth can be monetized, but I think very few would be good for society.
Russia is already performing data poisoning attacks on LLMs: https://www.newsguardrealitycheck.com/p/a-well-funded-moscow...
(and I use Google's Gemini for 50% of my pure LLM requests)
Gemini should be excellent here - it should have access to the best search index out there.
But... it doesn't show me what it's searching for. This is an absolute show-stopper for me: I need to know what the LLM is searching for in order to evaluate if it is likely to find the right information for me.
ChatGPT gets this right: it shows me the search terms it's using, then shows me the articles from the results that it used to generate the response.
Until a few weeks ago I still didn't use it much, because inevitably when it ran a search I would shout at my computer "No, don't search for that! You'll get crap results".
This changed with the recent release of o3 and o4-mini. For the first time I feel like I have access to a model with genuinely good taste in searching - it picks good initial searches, then revises those searches based on the incoming results.
I wrote about that recently: https://simonwillison.net/2025/Apr/21/ai-assisted-search/
I used it for a project I'm working on, and it has given really good, well-sourced responses so far.
Chatgpt is better because it gives links of source websites, so you can easily check them up.
Chatbots are absolute trash when it comes to needing factual information that cannot be trivially verified. I include the various "deep research" tools -- they are useless, except maybe as a starting point. For every problem I've given them, they've just been wrong. Not even close. The people who rely on these tools, it seems to me, are the same sort of folks who read newspaper headlines and Gartner 'research reports' and accept the conclusions at face value.
For anything else, it's just easier to write a search query. The internet is wrong too, but it's easier for me to cross-validate answers across 10 blue links than to figure it out via Socratic method with a robot.
The Gemini product seems to be evolving better and faster than ChatGPT. Probably doing so more cheaply, too, given they have their own hardware.
I am pleasantly surprised how Gemini went from bad to quite good in less than a year.
The only thing missing is a faster way to go from thought to typing. The ChatGPT app's pop-up bar is too easy not to bump every thought and question into. I just don't trust the results as much.
Why can't Google get this right? When I use the chat bot at aistudio it is clearly the state of the art - at least as good as any other option - so why can't Google sort out the product discrepancies?
> Click here to learn the secret that all celebrities use to maintain their curly hair!
Imagine having a blog that has 4 LLMs as users and never knowing that hundreds, thousands, or millions of people are using your work.
(Not for the integration in Bing, the copilot.microsoft.com minimalistic chat thing)
Remember the uproar when Google started displaying text from sites directly in the search results? It basically eliminated the need to visit the actual websites at all.
Now, you get your answer right at the top without even looking at the search results themselves.
I don’t know what this means in the long term for websites and online content.
For more in-depth queries, I use OpenAI or Claude.
Google remains superior for shopping and finding deals.
It’s less that they don’t know (I still have no clue what it stands for; it seems like no one defines it anywhere) and more that they show zero evidence of not knowing. I still really struggle to understand how someone would genuinely replace research with LLMs. Augment it, sure, but fully replace it? The likelihood of being convinced of a total falsehood still feels too high.
So, staying ahead of LLMs using raw brain power is the only way out of this mess.
It is sad that it has come to this.
Stick to the ones you know and have seen for decades; avoid using new ones, at least for now.
If you absolutely must know, ask someone.
It is a matter of time until LLMs suffer the same fate, all of them. It spares no one.
Conclusion? Placement optimization strategies stink ass. Whether it is for marketing, militia or entertainment, it sucks.
What I wonder: does (apparently) no one use DeepSeek?
I do, and I pretty much like it, to be honest.
More than Copilot, at the very least, which, in a recent attempt to "vibe code" a small tool at work, hallucinated large parts of its answers and had to be corrected by me over and over again (even though I haven't written a single line of code in that language).
I think most authors would argue the same thing, but it's really up to the readers to decide isn't it?