Follow them and you should be able to comment without further issue. Hope this helps.
Oh, you mean like removing scores of Covid videos from real doctors and scientists that were deemed to be misinformation?
I'm glad that we've decided YouTube is the oracle for everything.
The credentials don't matter; the actual content does. And if it's misinformation, then yes, you can be a quadruple doctor and it's still misinformation.
In France, there was a real doctor, an epidemiologist, who became famous for pushing a cure for Covid. He ran some underground, barely legal medical trials on his own, proclaimed victory, and insisted that the "big bad government doesn't want you to know!". Well, when the actual proper study finished, it found basically no difference, and his treatment wasn't adopted. He wasn't fully deplatformed, but he was definitely marginalised and fell into the "disinformation" category. Nonetheless, he kept spouting his version even after it was proven wrong. Years later, he's still wrong.
Fun fact about him: he's among the top 10 scientists with the most retracted papers, due to inaccuracies.
It matters in the context of health-related queries.
> Researchers at SE Ranking, a search engine optimisation platform, found YouTube made up 4.43% of all AI Overview citations. No hospital network, government health portal, medical association or academic institution came close to that number, they said.
> “This matters because YouTube is not a medical publisher,” the researchers wrote. “It is a general-purpose video platform. Anyone can upload content there (eg board-certified physicians, hospital channels, but also wellness influencers, life coaches, and creators with no medical training at all).”
> However, the researchers cautioned that these videos represented fewer than 1% of all the YouTube links cited by AI Overviews on health.
> “Most of them (24 out of 25) come from medical-related channels like hospitals, clinics and health organisations,” the researchers wrote. “On top of that, 21 of the 25 videos clearly note that the content was created by a licensed or trusted source.
> “So at first glance it looks pretty reassuring. But it’s important to remember that these 25 videos are just a tiny slice (less than 1% of all YouTube links AI Overviews actually cite). With the rest of the videos, the situation could be very different.”
"AI responses may include mistakes. Learn more"
It's not "mistakes"; half the time it's completely wrong, total bullshit information. Even compared to other AI, if you put the same question into GPT 5.2 or Gemini, you get much more accurate answers.
Before we get too worked up about the results, just look at the source. It's a SERP ranking aggregator (not linking to them, to avoid giving them free marketing) that analyzed only the domains, not the credibility of the content itself.
This report is a nothingburger.
A professor in the field can probably go "ok, this video is bullshit" a couple of minutes in if it's wrong. They can identify a bad surgeon, a dangerous technique, or an edge case that may not be covered.
You and I cannot. It's basically the same problem the general public has with phishing, but with even more devastating potential consequences.
...and then there's WebMD, "oh you've had a cough since yesterday? It's probably terminal lung cancer."
A few days ago, I asked it some questions about Russia's industrial base and military hardware manufacturing capability, and it wrote a very convincing response, except that the video embedded at the end of the response was AI-generated. It might have contained actual facts, but overall my trust in Gemini's response to my query went DOWN after I noticed the AI-generated video attached as the source.
Countering the debasement of shared reality and NOT using AI-generated videos as sources should be a HUGE priority for Google.
YouTube channels with AI-generated videos have exploded in sheer quantity, and I think a majority of the new channels and videos uploaded to YouTube might actually be AI; "dead internet theory" and all that.
This itself seems pretty damning of these AI systems from a narrative point of view, if we take it at face value.
You can't trust AI to generate output sufficiently grounded in facts to even use it as a reference point. Why, by extension, should end users believe the narrative that these things are as capable as they're being told?
Most of the "educational" and documentary-style content there is "just" gathered together from other sources, occasionally with links back to the originals in the descriptions.
I'm not trying to be dismissive of the platform; it's just inherently geared towards summarizing material for entertainment, not for clarity or correctness.
Google AI Overviews put people at risk of harm with misleading health advice