But, as this article points out, current AI doesn't do this. The AI conversations I've seen and had felt bland and shallow, and I think most others would agree (although some people really connect with it). Moreover, current AI is very unreliable as a good influence, especially for people who already have poor mental health, because it's very suggestible. AI can also hallucinate, at best (when detected) creating awkward conversations and at worst (when undetected) misleading people.
These problems have been around since ChatGPT was released, and even o3 and Gemini 2.5 seem to have them, so I'm skeptical they'll improve in the near future. Meanwhile, there's an "obvious" way I think we can reduce people's loneliness without AI: creating in-person third spaces and encouraging people to visit them. Facebook could even help with this (and profit from it), e.g. by launching a meetup.com alternative and then funding some groups to make it popular (although it wouldn't be easy, and it's certainly not "obvious" how to attract the right people rather than grifters, among other side effects).
I think the opposite: such artificial "friendships" are more akin to a drug that gives short-term relief at the cost of increasing long-term dysfunction. At the very least, they risk devaluing the notion of friendship.
But, I suppose, we're going to find out.
The key is "good". An AI friend that just validates and reinforces one's existing feelings (like current AI) may become popular, but I wouldn't call it a good friend. Fortunately, if such AI leads to bad outcomes, real friendship will come to be seen as more valuable.