Like @NitpickLawyer said, it's the resulting content that matters, not how it's presented. If a person anthropomorphizes an LLM in their mind (rather than just in their speech patterns), then they probably have pre-existing mental problems.
People also used to talk to burning bushes.
What a wild thing to say. If you had a coworker who was brilliant and taught you many great things, but only screamed instead of talking, would you feel the same way?
> If a person anthropomorphizes an LLM in their mind (rather than just in their speech patterns), then they probably have pre-existing mental problems.
Correct, and that's exactly why these tools should be built responsibly, under the assumption that people with mental problems are going to use them. It's clear from the article I linked (and my wording when linking to it) that these tools can exacerbate people's issues. ChatGPT told the man in that article that he was sane and that his mom was trying to kill him. He didn't understand what an LLM actually is.
grantseltzer•42m ago
You're showing quite a lot of bias when you say "What most people want are useful results." Maybe that's true in our circles of software engineers or lawyers, but many people are using AI for companionship. Even if they're not seeking companionship, unless you have a very clear understanding of how LLMs work, it's very easy to get caught up thinking that the chatbot you're talking to is "thinking" or "feeling". I feel that companies offering chatbots should take more responsibility for this, as it can be very dangerous.