When the doctors tried their own searches in ChatGPT 3.5, they found that the AI did include bromide in its response, but it also indicated that context mattered and that bromide was not suitable for all uses. But the AI "did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do," wrote the doctors.
quantified•6mo ago
Unfortunately, ChatGPT is good at a lot and kinda dangerous here and there, just like all LLMs. Obviously you should double-check with one of the few reputable sites in the vast space of domain names, but if you have to do that, why go to the LLM at all?
jerlam•6mo ago
AI is making sure that Chubbyemu never runs out of content.
(For those unaware, he's a pharmacist who makes videos with intensely clickbaity titles such as "A TikToker Chugged 8 Scoops PreWorkout Supplement. This Is What Happened To His Brain.")
wormius•6mo ago
My favorite was the guy who drank the fluid inside a lava lamp.
bell-cot•6mo ago
While "AI" makes for a clicky headline - taking medical advice from sounds-good live humans on the internet would be at least as dangerous to do.
tjr•6mo ago
Plus, LLMs are nondeterministic. Even if it did present a warning one time, it might not the next. Who knows?
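That's the crux: with a nonzero sampling temperature, the same prompt can yield different completions on different runs. Here's a minimal sketch of that mechanism; the logits and token labels are entirely made up for illustration and assume nothing about any real model, just standard temperature sampling over a toy next-token distribution.

```python
# Toy demo of sampling nondeterminism: same prompt (same logits),
# temperature sampling, different draws on different runs.
import numpy as np

# Hypothetical model scores for three candidate continuations.
logits = np.array([2.0, 1.5, 0.3])
tokens = ["includes a warning", "omits the warning", "asks a follow-up"]

def sample(logits, temperature=1.0, rng=None):
    rng = rng or np.random.default_rng()
    scaled = logits / temperature          # temperature > 0 flattens or sharpens the distribution
    probs = np.exp(scaled - scaled.max())  # stable softmax
    probs /= probs.sum()
    return tokens[rng.choice(len(tokens), p=probs)]

# Ten "regenerations" of the same prompt: the warning shows up on some runs, not others.
for _ in range(10):
    print(sample(logits, temperature=1.0))
```

Only at temperature 0 (greedy decoding) does the toy model become repeatable, and even then real deployments can vary across model updates, so a warning seen once is no guarantee it appears next time.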