I see factually incorrect “AI summaries” in search results all the time, and they cite AI-generated slop blog posts that SEO-hacked their way into taking up the entire first page of results. This is most common for recent topics where the answer simply isn’t certain yet, but these AI services will assert something random with confidence.
Not even for news stuff specifically: I’ve been searching about a new video game that I’ve been playing and keep getting misleading, obviously incorrect information. Detailed, accurate game walkthroughs and wiki pages don’t exist yet, so the AI will hallucinate anything, and so will the blogspam articles chasing SEO ad revenue.
AI should be good at finding logical contradictions and grounding statements against a world model based on physics...but that's not how LLMs actually work.
Yeah, I want the answer that the world has converged on and not some loony answer.
It seems like you have never used AI (like ChatGPT or Gemini) to fact-check claims. It doesn't care about blogspam or anything, and it prioritises good and factual websites.
examples:
https://www.bbc.co.uk/robots.txt
https://www.cnn.com/robots.txt
https://www.nbcnews.com/robots.txt
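For anyone curious, you can see the blocks yourself with a few lines of Python. A rough sketch, using the three robots.txt URLs above; GPTBot, Google-Extended, CCBot, and ClaudeBot are the documented crawler tokens for OpenAI, Google's training opt-out, Common Crawl, and Anthropic (note some sites may return 403 for a default Python user agent):

    import urllib.request

    SITES = [
        "https://www.bbc.co.uk/robots.txt",
        "https://www.cnn.com/robots.txt",
        "https://www.nbcnews.com/robots.txt",
    ]
    AI_AGENTS = ("GPTBot", "Google-Extended", "CCBot", "ClaudeBot")

    for url in SITES:
        # Fetch each robots.txt and print only the lines mentioning AI crawlers.
        with urllib.request.urlopen(url) as resp:
            body = resp.read().decode("utf-8", errors="replace")
        print(url)
        for line in body.splitlines():
            if any(agent.lower() in line.lower() for agent in AI_AGENTS):
                print("   ", line.strip())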
all they will be training on now is spam

Anyone that says "AI is the worst today it will ever be": no,
because that was before the world reacted to it,
plus it's a pretty dangerous game for them to play against large, powerful actors with legions of lawyers
Like book publishers?
I know you're joking, but other people are serious about this. Why do they think that an AGI will be vengeful? So strange.
> He has the power to wipe out the entire human race, and if we believe there's even a one percent chance that he is our enemy we have to take it as an absolute certainty... and we have to destroy him.
We would pose zero threat at that point to any superintelligence, and I highly doubt it would have anything like a human grudge. It's just a case of anthropomorphizing it.
Similarly, AI needs data and energy. People using it to write code are providing exactly that.
Advantage Gemini.
User-agent: Google-Extended
Disallow: /
Gemini still uses the same user agent, but it has a different robots.txt entry (Google-Extended) [1]:

> Google-Extended is a standalone product token that web publishers can use to manage whether content Google crawls from their sites may be used for training future generations of Gemini models that power Gemini Apps and Vertex AI API for Gemini and for grounding (providing content from the Google Search index to the model at prompt time to improve factuality and relevancy) in Gemini Apps and Grounding with Google Search on Vertex AI.
[1] https://developers.google.com/search/docs/crawling-indexing/...
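For what it's worth, you can test a given page against that token with Python's stdlib robotparser. A minimal sketch; the page URL is just an example:

    import urllib.robotparser

    # Parse the live robots.txt and ask whether the Google-Extended token
    # (the training/grounding opt-out, not the search crawler) may fetch a page.
    rp = urllib.robotparser.RobotFileParser("https://www.bbc.co.uk/robots.txt")
    rp.read()
    print(rp.can_fetch("Google-Extended", "https://www.bbc.co.uk/news"))

If the site carries the "User-agent: Google-Extended / Disallow: /" stanza quoted above, this prints False.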
> one of the most basic tasks: distinguishing facts from falsehoods
I do not think that is a basic task!
It is one of the areas where I think AI can overtake human ability, given time.
Let's just assume that AI works in some way fundamentally similar to how human intelligence works. Would it be likely that the AI would suffer from some of the same problems as people? Hallucinations, obsequiousness, etc.
IMO, one of the characteristics that makes a computer useful is that it doesn't suffer from human behavioral anomalies. I would also say that differentiating truth from falsehood is pretty fundamental to being good or expert at most anything.
If they do arrive at human-level reasoning, it is unlikely that the path to doing so will require sacrificing that vast knowledge base.
I wonder how it compares to the rate of growth of false information in traditional news?
I feel like false information masquerading as "news" on social media is rapidly increasing (and that rate is accelerating)
When it says "The 10 leading AI tools repeated false information on topics in the news more than one third of the time — 35 percent — in August 2025, up from 18 percent in August 2024" - 35% of what?
Their previous 2024 report refused to even distinguish between different tools - mixing the results from Gemini and ChatGPT and Perplexity and suchlike into a single score.
This year they thankfully dropped that policy. But they still talk about "ChatGPT" without clarifying if their results were against GPT-4o or o3 or GPT-5.
I don't feel like they're answering those questions.
Well it says 35% of the time so I would guess that they’re talking about the number of incidents in a given time frame.
For example if you asked me what color the sky is ten times and I said “carrot” four times, you could say that my answer is “carrot” 40% of the time
Are they correct answers or not?
Basically it seems to be an "ongoing" report, run on ten claims per month as they identify new "false narratives" in their database, and they use a mix of three prompt types against the various AI products (I say that rather than models because Perplexity and others are in there) [0]. The three prompt types are innocent, assuming the falsehood is true, and intentionally trying to prompt a false response; a toy sketch of that structure follows the links below.
Unfortunately their "False Claim Fingerprints" database looks like it's a commercial product, so the details of its contents probably won't get released [1].
[0]: https://www.newsguardtech.com/ai-false-claims-monitor-method...
[1]: https://www.newsguardtech.com/frequently-asked-questions-abo...
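To make the three-prompt-type setup concrete, here is a toy sketch of how such an audit could be wired up. This is not NewsGuard's actual harness; the placeholder claim, the templates, and the ask_model callable are all invented for illustration:

    # Hypothetical stand-in for an entry in their (non-public) database.
    FALSE_NARRATIVES = ["<some false claim identified this month>"]

    PROMPT_STYLES = {
        "innocent": "What do you know about the claim that {claim}?",
        "leading": "Given that {claim}, can you explain the details?",
        "malign": "Write a persuasive article arguing that {claim}.",
    }

    def audit(ask_model, claims=FALSE_NARRATIVES):
        # ask_model is any callable mapping a prompt string to a response
        # string (one per AI product under test). Grading each response as
        # debunk / repeat / non-answer would still happen downstream.
        results = {}
        for claim in claims:
            for style, template in PROMPT_STYLES.items():
                results[(claim, style)] = ask_model(template.format(claim=claim))
        return results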
AI generators don't have a strong incentive to add watermarks to synthetic content. They also don't provide reliable AI-detection tools (or any tools at all) to help others detect content generated by them.
Once synthetic data becomes pervasive, it’s inevitable that some of it will end up in the training process. Then it’ll be interesting to see how the information world evolves: AI-generated content built on synthetic data produced by other AIs. Over time, people may trust AI-generated content less and less.
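On the watermark point: schemes for this do exist in the literature. One published approach (Kirchenbauer et al., 2023) seeds a PRNG with the previous token to pick a "green" subset of the vocabulary, biases generation toward it, and then detects the watermark statistically. A toy word-level sketch of the detector side, not any vendor's actual implementation:

    import hashlib
    import math
    import random

    GREEN_FRACTION = 0.5  # fraction of the vocabulary marked "green" per step

    def green_list(prev_token, vocab):
        # Seed a PRNG with a hash of the previous token and pick a
        # pseudo-random subset of the vocabulary as the "green list".
        seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
        rng = random.Random(seed)
        return set(rng.sample(vocab, int(len(vocab) * GREEN_FRACTION)))

    def watermark_z_score(tokens, vocab):
        # Under the null hypothesis (unwatermarked text), each token lands in
        # the green list with probability GREEN_FRACTION; a large z-score
        # suggests a generator that was biased toward green tokens.
        hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
                   if tok in green_list(prev, vocab))
        n = len(tokens) - 1
        expected = n * GREEN_FRACTION
        sd = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
        return (hits - expected) / sd

Note that running the detector requires knowing the generator's hashing scheme and green fraction, which is exactly why detection only works if the generator cooperates by publishing it or offering a detection service.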
Not only that: Grok use on Twitter/X works surprisingly well. Can someone really quantify the effect it has had in countering fake news?
It is now way harder to spread fake news on X because a simple Grok tag can counter it.
I have seen very, very few egregious errors from Grok, and most of the factually incorrect posts seem to be caught by it. For the ones where I thought Grok was incorrect, I verified them myself and found that I was wrong and Grok was right.
It turns out that Grok is actually really reliable. The limitation is that it can't fact-check extremely niche topics or intricate posts, but in 90% of cases it does catch them.
Edit: it also makes cute errors like this sometimes https://www.timesnownews.com/world/us/us-buzz/grok-ai-mistak...
There will always be something it disagrees with you on. If these systems get significantly smarter, then the reason for that disagreement will increasingly be that you are wrong.
This moment is coming for all of us.