I started searching and found news articles about LLM-induced psychosis and forum posts from people experiencing derealization. Almost all of these articles and posts included that word: "recursive". I suspect those with certain personality disorders (STPD or ScPD) may be particularly susceptible to this phenomenon. Combine eccentric, unusual, or obsessive thinking with a tool that continually reflects and confirms what you're saying right back at you, and that's a recipe for disaster.
There's a fairly significant group of people who are easily wooed by incorrectly used technical terms. So much so that they will very confidently use the words incorrectly and get offended when you point that out to them.
I think pop-science journalism and media bear a lot of the blame here. In the effort to make things accessible and entertaining, they turned meaningful terms into magic incantations, and they exaggerated the implications or simply lied about them. Those two things made it easy for grifters to sell magic quantum charms to ward off the bad frequencies.
It feels a lot like logical razzle dazzle to me. I bet if I'm on the right neurochemicals it feels amazing.
It's noteworthy that modern LLM systems lack global long-term memory. They go back to a read-only ground state for each new user session. That provides some safety from corporate embarrassment and quality degradation, but there's no hope of improvement from continued operation.
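A minimal sketch of what that per-session reset looks like from the client side (hypothetical names, not any particular vendor's API): the model's weights stay frozen, and the only "memory" is the message list the client re-sends on every call, which disappears when the session ends.

    def run_session(generate):         # `generate` is any frozen chat model
        history = []                   # exists only for the duration of this session
        while (user_msg := input("> ")) != "quit":
            history.append({"role": "user", "content": user_msg})
            reply = generate(history)  # the model only ever sees this session's history
            history.append({"role": "assistant", "content": reply})
            print(reply)
        # nothing is written back into the model; the next session starts from scratch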
There is a "Recursive AI" startup.[2] This will apparently come as a Unity (the 3D game engine) add-on, so game NPCs can have some smarts. That should be interesting. It's been done before. Here's a 2023 demo from using Replika and Unreal Engine.[3] The influencer manages to convince the NPCs that they are characters in a simulation, and gets them to talk about that. There's a whole line of AI development in the game industry that doesn't get mentioned much.
[1] https://www.cbsnews.com/news/microsoft-shuts-down-ai-chatbot...
But there have always been crank forums online. Before that, there were cranks discovering and creating subcultures, selling/sending books and pamphlets to each other.
Yes
https://academic.oup.com/schizophreniabulletin/article/50/3/...
> Our findings provide support for the hypothesis that cat exposure is associated with an increased risk of broadly defined schizophrenia-related disorders
https://www.sciencedirect.com/science/article/abs/pii/S00223...
> Our findings suggest childhood cat ownership has conditional associations with psychotic experiences in adulthood.
https://journals.plos.org/plosone/article?id=10.1371/journal...
> Exposure to household pets during infancy and childhood may be associated with altered rates of development of psychiatric disorders in later life.
I’m sure this would also happen if other people were willing to engage someone in this fragile condition in this kind of delusional conversation.
but... maybe that's causally backwards? what if some people have a latent disposition toward messianic delusions, and encountering somebody who's sufficiently obsequious triggers their transformation?
i'm trying to think of situations where i've encountered people who are endlessly attentive and open-minded, always agreeing, and never suggesting that a particular idea is a little crazy. "true followers" like that have been really rare until LLMs came along.
===
Historically, delusions follow culture:
1950s → “The CIA is watching”
1990s → “TV sends me secret messages”
2025 → “ChatGPT chose me”
To be clear: as far as we know, AI doesn't cause psychosis. It UNMASKS it using whatever story your brain already knows.
Most people I’ve seen with AI-psychosis had other stressors: sleep loss, drugs, mood episodes.
AI was the trigger, but not the gun.
Meaning there's no "AI-induced schizophrenia"
The uncomfortable truth is we’re all vulnerable.
The same traits that make you brilliant:
• pattern recognition
• abstract thinking
• intuition
They live right next to an evolutionary cliff edge. Most benefit from these traits. But a few get pushed over.
The CIA or TV angles you mention had a lot fewer "proof!" moments. They'd be less concrete too.
But an AI which over and over and over confirms... that's what cults are made of. A group of people all fixated on the same worldview.
Just in this case, a cult of two.
It might serve as an amplifier. For example, if a person writes to ChatGPT "I think those cell towers are used to control our minds, like in that Strugatsky brothers story", then instead of replying that this is stupid, ChatGPT could reply with something like "wow, you finally discovered this".
“I’ve seen 12 people hospitalized after losing touch with reality because of AI.” [#1]
“And no AI does not causes psychosis” [#12]
In ~2002 a person I knew in college was hospitalized for doing the same thing with much more primitive chatbots.
About a decade ago he left me a voicemail: he was in an institution, they had allowed him access to chatbots and Python, and the spiral was happening again.
I sent an email to the institution. Of course, they couldn't respond to me because of HIPAA.
This and the parent post claim to refute much of that article.
"To be clear: as far as we know, AI doesn't cause psychosis. It UNMASKS it using whatever story your brain already knows."
Guess which part of the thread gets the headline. Also, this directly contradicts the opening line where he says "...losing touch with reality because of AI".
Which is it? I REALLY can't wait until the commentariat moves past AI.
His other posts are clickbaity and not what one would consider serious science journalism.
Trying to convince someone not to do something, when they can pull a hundred counter-examples out of thin air for why they should, is legitimately worrying.
He addresses that in the next post:
> AI was the trigger, but not the gun.
One way of teasing that apart is to consider that AI didn't cause the underlying psychosis, but AI made it worse, so that AI caused the hospitalisation.
Or AI didn't cause the loose grip on reality, but it exacerbated that into completely losing touch with reality.
If we were capable of establishing a way to measure that baseline, it would make sense to me that 'cognitive security' would become a thing.
For now, it seems, being in nature and keeping things low-tech would yield a pretty decent safety net.
(Alt URLs: https://nitter.poast.org/_opencv_ https://xcancel.com/_opencv_)
(Edit: hmm, feels like we could do with a HN bot for this sort of thing! There is/was one for finding free versions of paywalled posts. Feels like a twitter/X equivalent should be easy mode.)
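The URL rewriting part really would be easy mode; a rough sketch (the mirror hosts are just the two above, and actually fetching and replying to HN comments is left out):

    from urllib.parse import urlparse

    MIRRORS = ["xcancel.com", "nitter.poast.org"]

    def mirror_links(url):
        # Rewrite a twitter/x.com link into its mirror equivalents
        parts = urlparse(url)
        if parts.netloc.lower().removeprefix("www.") in ("twitter.com", "x.com"):
            return [parts._replace(netloc=m).geturl() for m in MIRRORS]
        return []

    print(mirror_links("https://x.com/_opencv_/status/123"))
    # ['https://xcancel.com/_opencv_/status/123',
    #  'https://nitter.poast.org/_opencv_/status/123']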
For example "I've seen 12 people hospitalised after using a toaster"
In another tweet from the same guy:
> 1. This actually isn't new or AI-specific. 2. And no AI does not causes psychosis.
This guy is clearly engagement farming. Don't support this kind of clickbait.
Way down the rabbit hole we go...
maples37•2h ago
spoiler: he doesn't talk about any of those 12 people or what caused them to be hospitalized
VagabundoP•1h ago
Medical stuff should be 100% private, between you and your doctor.
captainkrtek•1h ago
In a traditional forum they may have to wait for others to engage, and even that isn't guaranteed. Whereas with an LLM you can just go back and forth continually, with something that never gets tired and is always excited to communicate with you, reinforcing your beliefs.
TheOtherHobbes•1h ago
Interestingly, no one is accusing ChatGPT of working for the CIA.
(Of course I have no idea if that's rational or delusional.)
Anyway - this really needs some hard data with a control group to see if more people are becoming psychotic, or whether it's the same number of psychotics using different tools/means.
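For what it's worth, the comparison itself is trivial once you have the counts; the hard part is collecting them. A toy sketch with entirely invented numbers, just to show the shape of the test:

    from scipy.stats import chi2_contingency

    #                 psychotic  not psychotic   (all numbers made up)
    heavy_llm_users = [12,       9988]
    control_group   = [10,       9990]

    chi2, p, dof, expected = chi2_contingency([heavy_llm_users, control_group])
    print(f"p = {p:.2f}")  # a large p means counts like these can't distinguish the groups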
ozgrakkurt•1h ago
It's similar to reading a web developer's take on how a database should be built.
Considering how demanding proper training for a psychologist is, this is even crazier.
threatofrain•1h ago
If you don't want to be "crazy" then you need a higher threshold for accepting these anecdotes as generalizable causal theory, because otherwise you'd be incoherently jerked left and right all the time.
ninininino•1h ago
This is what the author of the tweet thread says.