The truth is that the most random stuff will set them off. In one case, a patient found reinforcement in obscure YouTube groups of people predicting future doom.
Maybe the advantage of AI over YouTube psychosis groups is that AI could at least be trained to alert the authorities after enough murder/suicide data is gathered.
This story is pretty terrifying to me. I could easily see them getting led into madness, exactly as the story says.
DaveZale•2h ago
There should be a prominent "black box" warning on every AI chatbot message, like "This is AI guidance which can potentially result in grave bodily harm to yourself and others."
lukev•1h ago
The problem is calling it "AI" to start with. This (along with the chat format itself) primes users to think of it as an entity... something with care, volition, motive, goals, and intent. Although it can emulate these traits, it doesn't have them.
Chatting with an LLM is entering a one-person echo chamber, a funhouse mirror that reflects back whatever semantic region your initial query put it in. And the longer you chat, the deeper that rabbit hole goes.
jvanderbot•1h ago
It's hard to believe that a prominent, well-worded warning would do nothing, but that's not to say it'll be effective for this.
ianbicking•1h ago
BUT, I think it's very likely that the Surgeon General's warning was closer to a signal that consensus had been achieved. That voice of authority didn't actually _tell_ anyone what to believe; it was a message that anyone could look around, consult many sources, and see that there was a consensus on the harmful effects of smoking.
lukev•1h ago
But saying "This AI system may cause harm" reads to me as similar to saying "This delightful substance may cause harm."
The category error is more important.
threatofrain•30m ago