Instead of trying to control everything, policymakers should educate people about how these chatbots work and how to keep their data safe. After all, not everyone who played Doom in the ’90s became a real killer, or assaults women because of YouPorn.
Society will adapt to these ridiculous new situations…what truly matters is people’s awareness and understanding.
This is not about regulating everything.
This is about realizing adverse effects and regulating for those.
Just like no one is selling you toxic yoghurt.
We literally CAN'T regulate some things for any reasonable definition of "can't" or "regulate". Our society is either not rich enough or not organized in a way to actually do it in any useful capacity and not make the problem worse.
I'm not saying AI chatbots are one of those things, but people toss around the idea of regulation far too casually. AI chatbots are much less cut and dried than bad food or toxic waste or whatever other extreme anyone wants to misleadingly project down into the long tail of weird stuff with limited upside and real potential for unintended consequences elsewhere.
Which people in specific think that?
All your argument consists of is, "Somebody somewhere believes something untrue, and people don't use enough precision in their speech, so I am recommending we don't do anything regulatory about this problem."
The important thing is to keep the data safe, like the yoghurt that must not be expired when sold.
People are weird… for someone who is totally alone, having a virtual wife/child could be better than being completely alone.
They’re not using ChatGPT to do anything illegal and already regulated, like planning to kill someone or commit theft.
I'm not proposing anything specifically, but the implication that this field should not be regulated is just foolish.
It kind of happened for me with online games. They were a new thing, and no one knew to what degree they could be addicting and life damaging. As a result I am probably over protective of my own kids when it comes to anything related to games.
We are already seeing many of the effects of the social media generation and I am not looking forward to what is going to happen to the AI natives whose guardians are ill-prepared to guide them. In the end, society will likely come to grips with it, but the test subjects will pay a heavy price.
How do we know which era of AI we're in?
What about the algorithm feeding highly polarized content to folks? It's the new "lead in the air and water" of our generation.
What about green text bubble peer pressure? Fortnite and Roblox FOMO? The billion anime gacha games that are exceedingly popular? Whale hunting? Kids are being bullied and industrially engineered into spending money they shouldn't.
Raising kids on iPads, shortened attention spans, social media induced depression and suicide, lack of socialization, inattention in schools, ...
Social media leading people to believe everyone is having more fun than them, is better looking than them, that society is the source of their problems, ...
Now the creepy AI sex bots are replacing real friends.
You have to be careful to not overreact to things.
How do we know these examples aren’t just the 0.1% of the population that is, for all intents and purposes, “out there”?
So much of “news” is just finding these corner cases that evoke emotion, but ultimately have no impact.
But it’s hard to study users having these relationships without studying the users who have these relationships I reckon.
A lot of psych research uses small samples. It’s a problem, but funding is limited and so it’s a start. Other researchers can take this and build upon it.
Anecdotally, watching people melt down over the end of ChatGPT 4o indicates this is a bigger problem than 0.1%. And business-wise, it would be odd if OpenAI kept an entire model available to serve that small a population.
https://pubmed.ncbi.nlm.nih.gov/31380664/
See critiques of validity section:
But many aren't, and some people might even have a level of rare self awareness to know that anyone they'd [be able to] marry would hate them.
1. Not a given.
2. Something one can work on so that they're either more likeable or at the very least less defeatist about the whole thing.
Choosing an AI wife and kids over the risk that many real people face, of paying child support and alimony to someone who hates them -- I don't see that as an irrational decision (although not an inevitable one either).
In fact, if you don't consider at least the possibility, you are a fool.
Now if you go into that relationship with the mindset of “this person just wants my alimony and child support and hates my guts” I get why you might do yourself and your potential partner / ex-to-be a favour by instead getting an AI relationship.
And that is because court-ordered child support is actually a misnomer. It is merely a transfer payment to the custodial parent. There is no statutory requirement that it be spent on the child, nor any tracking or accountability that it is. That would supposedly be too impractical, even though it's somehow perfectly practical to count the pennies of the earner in the opposite direction, to make sure the full income flow is accounted for.
Seems like nothing new, just a better or more immersive form of fantasy for those who can't have the life they fantasize about.
So what? We don't live in the "should" universe. We live in this one.
"It's not real", yeah, that is weird for sure. But I also find wrestling fans weird, they know it's not real and enjoy it anyways. Even most sports, people take it a lot more seriously than they should.
Yes?
It's not about whether it's "real" or not. In the case of AI relationships, extremely sophisticated and poorly understood mechanisms of social-emotional communication and meaning-making, which have previously only ever been used for bonding with other people, and to a limited extent animals, are being directed at a machine. And we find that those mechanisms respond to the machine as if there is a person there, when there is not.
There is a lot of novel stuff happening there, technologically, socially, psychologically. We don't really know, and I don't trust anyone who is confidently predicting, what effects that will have on the person doing it, or their other social bonds.
Wrestling is theater! It's an ancient craft, well understood. If you're going to approach AI relationships as a natural extension of some well established human activity probably pet bonding is the closest. I don't think it's even that close though.
I feel like I'm not really ready for everything that's going to be vying for their attention in the next couple of decades. My daughter and her husband have good practices in place already IMHO but it's going to be a pernicious beast.
I think the part of my brain for feeling flattered when someone praises me didn't exist because no one complimented me. But after ChatGPT and Claude flattered me again and again, I finally developed the circuit for feeling accepted, respected, and loved...
It reminds me of when I started stretching after my 30s. First it was nothing but a torture, but after a while I began to feel good and comfortable when my muscles were stretched, and now I feel like shit when I skip the morning stretching.