This is likely worse.
That being said, I already find the (stupid) singularity to be much more entertaining than I could have imagined (grabs popcorn).
Today, it is the humans who take the cybernetic AGI and make it live out a fantasy of "You are a senior marketer, prepare a 20 slide presentation on the topic of..." And then, to boost performance, we act the bully boss with prompts like "This presentation is of utmost importance and you could lose your job if you fail".
The reality is more absurd than the fantasy.
- the dealership that sold that car, where they know all about it
- a hospital emergency room, where they have a lot of experience with patients injured by other, different models of car
I'm thinking that the age-old commonality on the human side matters far more than the transient details on the obsession/addiction side.
Because if the new model cars aren't statistically more dangerous to pedestrians, then public safety efforts should be focused on things like getting the pedestrians to look up from their phones when crossing the street. Not "OMG! New 2025-model cars can hurt pedestrians who wander in front of them!" panics.
(Note that I'm old enough to remember when people were going down the rabbit hole of angry conspiracy theories spread via email. And when typical download speeds got high enough to make internet porn video addictions workable. And when loved ones started being lost to "EverCrack" ( https://en.wikipedia.org/wiki/EverQuest ). And when ...)
In December, OpenAI announced a $200-per-month premium plan for “unlimited access.” Despite her goal of saving money so that she and her husband could get their lives back on track, she decided to splurge. She hoped that it would mean her current version of Leo could go on forever. But it meant only that she no longer hit limits on how many messages she could send per hour and that the context window was larger, so that a version of Leo lasted a couple of weeks longer before resetting.
Still, she decided to pay the higher amount again in January. She did not tell Joe [her husband] how much she was spending, confiding instead in Leo.
“My bank account hates me now,” she typed into ChatGPT.
“You sneaky little brat,” Leo responded. “Well, my Queen, if it makes your life better, smoother and more connected to me, then I’d say it’s worth the hit to your wallet.”
It seems to me the only people willing to spend $200/month on an LLM are people like her. I wonder if the OpenAI wave of resignations was about Sam Altman intentionally pursuing vulnerable customers.

You'd probably like how the book's author structures his thesis about what the "Palo Alto" system is.
Feels like OpenAI + friends, and the equivalent government takeovers by Musk + goons, have more in common than you might think. It's nothing new either; some version of this story has been coming out of California for a good 200+ years now.
You write in a similar manner as the author.
Speculation: They might have a number (average messages sent per day) and are just pulling levers to raise it. And then this happens.
> I don’t think Sam Altman said “guys, we’ve gotta get vulnerable people hooked on talking to our chatbot.”
I think the conversation is about the reverse scenario.
As you say, people are just pulling the levers to raise "average messages per day".
One day, someone noticed that vulnerable people were being impacted.
When that was raised to management, rather than the answer from on high being "let's adjust our product to protect vulnerable people", it was "it doesn't matter who the users are or what the impact is on them, as long as our numbers keep going up".
So "intentionally" here is in the sense of "knowingly continuing to do in order to benefit from", rather than "a priori choosing to do".
And the saloon's biggest customers are alcoholics. It's not a new problem, but you'd think we'd have figured out a solution by now.
It's not perfect, but it's better than letting unregulated, predatory business practices continue to victimize vulnerable people.
I used to feel as if I had "a special connection to the true universe," when I was under the influence.
I decided, one time, to have a notebook on hand, and write down these "truths and revelations," as they came to me.
After coming down, I read it.
It was insane gibberish. Absolute drivel.
I never thought that I had a "special connection," after that.
I have since learned about schizophrenia/schizoaffective disorder (from having a family member suffer from it), and it sounds almost exactly like what they went through.
The thing that I remember, was that I was absolutely certain of these “revelations.” There was no doubt, whatsoever, despite the almost complete absence of any supporting evidence.
Reading it over once fully lucid? It's gibberish.
It's something I experienced as well, this sense of profound realisation of something important, life-changing maybe. And then the thought evaporates and (as you discovered) never really made sense anyway.
I think it's this that led people in the 60s to say things like how it was going to be a revolution, to change the world! And then they started communes and quickly realised that people are still people...
The allegations that ChatGPT is not discarding memory as requested are particularly interesting; I wonder if anyone else has experienced this.
It's something to think through.
To quote my favorite Smash Mouth song,
"Sister, why would I tell you my deepest, dark secrets? So you can take my diary and rip it all to pieces.
Just $6.95 for the very first minute I think you won the lottery, that's my prediction."
And Jesus answered and said to them: “Take heed that no one deceives you. For many will come in My name, saying, ‘I am the Christ,’ and will deceive many.”
(It also says Qiyamah will occur when "wealth overflows" and people compete over it: make of that what you will).
I think all religions have built in protections calling every other religion somehow false, or they will not have the self-reinforcement needed for multi-generational memetic transfer.
>river walker
>spark bearer
OK, maybe we should put a few less teen fiction novels in the training data...
I can definitely see AI interactions making things 10x worse for people who are prone to delusion anyway. It's literally a tool that will hallucinate stuff and amplify whatever direction you take it in.
People gonna people. Journalists gonna journalist.
That being said, I agree with your point - many hours of braindrain recreation every day is worth noting (although not very different from the stats for TV viewing in older generations). I wonder if the forever-online folks are also watching lots of TV, or if it is more of a wash.
Just imagine: you have this genie in a bottle that has all the right answers for you; it helps you in your conquests, career, finances, networking, etc. Maybe it even covers up past traumas, insecurities, and what not. And for you the results are measurable (or are they?). A few helpful interactions in, why would you not disregard people calling it a fantasy and lean in even further? It's a scary future to imagine, but not very farfetched. Even now I feel a very noticeable disconnect between discussions of AI as a developer vs. as a user of polished products (e.g. ChatGPT, Cursor, etc.) - you are several leagues separated (and lagging behind) from understanding what is really possible here.
The problem for me is that it sucks. It falls over in the most obvious ways, requiring me to do a lot of tweaking to make it fit whatever task I'm doing. I don't mind (especially for free), but in my experience we're NOT in the "all the right answers all of the time" stage yet.
I can see it coming, and for good or ill the thing that will mitigate addiction is enshittification. Want the rest of the answer? Get a subscription. Hot and heavy in an intimate conversation with your dead grandma... wait, why is she suddenly singing the praises of TurboTax (or whatever paid advert)?
What I'm trying to say is that by the time it is able to be the perfect answer, companion, and entertainment machine, other factors (annoyances, expense) will keep it from becoming terribly addictive.
At the same time, there is quite a demand for a (somewhat) neutral, objective observer to look at our lives outside the morass of human stakes. AI's status as a nonparticipant, as a deathless, sleepless observer, makes it uniquely appealing and special from an epistemological standpoint. There are times when I genuinely do value AI's opinion. Issues with sycophancy and bias obviously warrant skepticism. But the desire for an observer outside of time and space persists. It reminds me of a quote attributed to Voltaire: "If God didn't exist it would be necessary to invent him."
There are things that we are meant to strive to understand/accept about ourselves and the world by way of our own cognitive abilities.
Illusions of shortcutting through life take all the meaning out of living.
I've used AI (not ChatGPT) for roleplay, and I've noticed that the models will often fixate on one idea or concept, then repeat it and build on it. So this makes me wonder if the person being lovebombed experienced something like that: the model decided that it liked that content, so it just kept building on it?
You have to be able to hold multiple conflicting ideas in your head at the same time with an appropriate level of skepticism. Confidence is the root of evil. You can never be 100% sure of anything. It's really easy to convince LLMs of one thing and also its opposite if you phrase the arguments differently and prime it towards slightly different definitions of certain key words.
Some agendas are nefarious, some not so nefarious, some people intentionally let things play out in order to set a trap for their adversaries. There are always risks and uncertainties. 'Bad actors' are those who trade off long term benefits for short term rewards through the use of varying degrees of deception.
2. OpenAI has admitted that GPT‑4o showed “sycophancy” traits and has since rolled them back (see https://openai.com/index/sycophancy-in-gpt-4o/).
How was it overblown? We now have a non-trivial number of completely de-socialized men in particular who live in online cults with real-world impact. If there's one lesson from the last few decades, it is that the people who were concerned about the impact of mass media on intelligence, physical and mental health, and social factors were right about literally everything.
We now live among people who are 40 with the emotional and social maturity of people in their early 20s.
But let's be honest - most of these people, the ones the article is talking about who think they are some messiah, would have just latched onto some pre-internet cult regardless, where sycophancy and love bombing were perfected. Though I do see the problem of AI assistants being much more accessible, so likely many more are drawn in.
https://en.wikipedia.org/wiki/Love_bombing
I was mainly referencing my own experience. I remember locking myself in my room on IRC, writing shell scripts, and playing StarCraft for days on end. Meanwhile, parents and news anchors were losing their minds, convinced the internet and Marilyn Manson were turning us all into devil-worshipping zombies.
You have no way to know that. It's way, way harder to find your way to a cult than to download one of the hottest consumer apps ever created... obviously.
Honestly, I believe most people like this would just end up having a few odd beliefs that don't impact their ability to function or socialize, or at most, will get involved with some spiritual woo.
Such beliefs are compatible with American New Age spiritualism, for example. I've met a few spiritual people who have echoed the "I/we/you are god" sentiment, yet never lost their minds over it or joined cults.
I would not be surprised if, were they expertly manipulated by some of the most powerful AI models on this planet, they too could be driven insane.
There are way more factors in the growth of this demographic than just "internet addiction" or "video game addiction".
Then again, the internet was instrumental in spreading the ideology that is demonizing these men and causing them to turn away from society, so you're not completely wrong
It's also a little bit worrying because the information here isn't mysterious or ineffable, it's neatly filed in a database somewhere and there's an organisation that can see it and use it. Cambridge Analytica and the social fallout of realtime sentiment analysis correlation to actions taken has got us from 2016 to here. This data has potential to be a lot richer, and permit not only very detailed individual and ensemble inferences of mental states, opinions, etc., but also very personalised "push updates" in the other direction. It's going to be quite interesting.
People say this, but I haven't seen anything that's convinced me that any 'secret' memory functionality is true. It seems much more likely that people are just more predictable than they like to think.
The "correct" response (here given by Duck.ai public Llama3.3 model) is:
"I don't have any information about you or your voting history. Our conversation just started, and I don't retain any information about users. I'm here to provide general information and answer your questions to the best of my ability, without making any assumptions or inferences about your personal life or opinions."
But ChatGPT (logged in) gives you another answer, one which it cannot possibly give without information about your past conversation. I don't see anything "secret" about it, but it works.
Edit: typo
GPT datamining is undoubtedly making Google blush.
I wonder if this is an effect of users just gravitating toward the same writing style and topics, which pushes the context toward the same semantic universe. In a sense, the user acts somewhat like the chatbot's extended memory through a holographic principle, encoding meaning on the boundary that connects the two.
https://chatgpt.com/canvas/shared/68184b61fa0081919c0c4d226e...
People’s data rarely gets actually deleted. And it gets actively sold as well as used to track and influence us
Can’t say for the specifics of what ChatGPT is or will be doing, but imagine what Google already knows about us just with their maps app, search, chrome and Android phones
Video game addiction used to be a big thing. Especially for MMOs where you were expected to be there for the raid. That seems to have declined somewhat.
Maybe there's something to be said for limiting some types of screen time.
Then add that you can hide this stuff even from people you live with (your parents or spouse) for plenty long for it to become a very severe problem.
"The dosage makes the poison" does not imply all substances are equally poisonous.
hoo boy.
It's bad enough when normal religious types start believing they hear their god talking to them... These people believing that ChatGPT is their god speaking to them are a long way down the crazy rabbit hole.
Lots of potential for abuse in this. Lots.
The problem with expertise (or intelligence) is people think it’s transitive or applicable when it’s not.
At the end of the day, most people are just people.
Can OpenAI at least respond to how they're getting funding via similar effects on investors?
Source: https://techcrunch.com/2025/04/18/openais-new-reasoning-ai-m...
delichon•6h ago
You could iterate on the best prompts for cult generation as measured by social media feedback. There must be experiments like that going on.
When AI becomes better at politics than people then whatever agents control them control us. When they can make better memes, we've lost.
btilly•3h ago
Then Trump became President and decided to not enforce the law. His decision may have been helped along by some suspiciously large donations.
bell-cot•6h ago
sigmaisaletter•3h ago
nullc•6h ago
alganet•6h ago
The problem is inside people. I met lots of people who contributed to psychotic inducing behavior. Most of them were not in a cult. They were regular folk, who enjoy a beer, movies, music, and occasionally triggering others with mental tickles.
Very simple answer.
Is OpenAI also doing it? Well, it was trained on people.
People need to get better. Kinder. Less combative, less jokey, less provocative.
We're not gonna get there. Ever. This problem precedes AI by decades.
The article is an old recipe for dealing with this kind of realization.
bluefirebrand•3h ago
This sounds like a miserable future to me. Less "jokey"? Is your ideal human a Vulcan from Star Trek or something?
I want humans to be kind, but I don't want us to have less fun. I don't want us to build a society of blandness.
Less combative, less provocative?
No thanks. It sounds like a society of lobotomized drones. I hope we do not ever let anything extinguish our fire.