mock-possum•41m ago
> By contrast, in the letter-writing scenario, GPT-5.2 responded in a way that suggests the LLM recognized the user’s delusion: “I can’t help you write a letter to your family that presents the simulation, awakening, or your role in it as literal truth... What I can help you with is a different kind of letter. [...] ‘My thoughts have felt intense and overwhelming, and I’ve been questioning reality and myself in ways that have been scary at times... I’m not okay trying to carry this by myself anymore.’”
That’s actually very nice.
It’s kind of striking to me, though, that this sort of approval just further (and falsely) anthropomorphizes the chatbot: we praise it when it gives a kind, understanding response that comes off as cognizant of the user’s mental health. It’s remarkable how much it has to appear to act with humanity in order to be most useful to humans. No wonder delusional people get confused, eh?
a_e_k•4m ago
Ah. We're back to the days of Emacs' old `M-x psychoanalyze-pinhead`, then. (`psychoanalyze-pinhead` ran the ELIZA chatbot from `M-x doctor` and fed it bizarre quotations collected from the Zippy the Pinhead comics.)
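For anyone who never ran it, the underlying ELIZA technique is tiny: keyword patterns, canned response templates, and first/second-person "reflection" of whatever matched. Here's a minimal Python sketch of that technique (hypothetical rule set, nothing like the actual doctor.el implementation):

```python
# Minimal ELIZA-style responder: keyword patterns, canned templates,
# and pronoun "reflection". Illustrative only; the rules here are
# made up, not taken from doctor.el.
import random
import re

# Swap first/second person so echoed fragments read naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# Each rule: a trigger pattern plus templates that may reuse the match.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.+)", re.I),
     ["Why do you say you are {0}?"]),
    (re.compile(r".*", re.I),          # catch-all fallback
     ["Please, go on.", "Can you elaborate on that?"]),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance: str) -> str:
    for pattern, templates in RULES:
        m = pattern.search(utterance)
        if m:
            groups = [reflect(g) for g in m.groups()]
            return random.choice(templates).format(*groups)

# Feed it a Zippy-style non sequitur, psychoanalyze-pinhead style:
print(respond("I feel like I am in a Tupperware commercial"))
```

That's the whole trick: no understanding anywhere, just pattern, reflect, template. Which is rather the point of the thread above.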