That's one way of interpreting things...
ex: https://www.reddit.com/r/MyBoyfriendIsAI/comments/1qx3jux/wh...
I used it to help brainstorm and troubleshoot fiction: character motivations, arcs, personality, etc. It was truly useful for that purpose. 4.5 was also good at this, but none of the other models I've tried have been.
Of course this particular strength is dangerous in the hands of lonely unstable people and I think it’s dangerous to just have something like that openly out there. This really shows that we need a safe way to deploy models with dangerous specializations.
Thank you for saying this
Also, if I worked for one of these firms, I would ensure that executives and people with elevated status receive higher-quality/more expensive inference than the peons. Impress the bosses to keep the big contracts rolling in, then cheap out on the day-to-day.
[0] https://openai.com/index/sycophancy-in-gpt-4o/
[1] https://www.reddit.com/r/MyBoyfriendIsAI/comments/1qx3jux/wh...
1. possibly/probably not in a good or healthy way? idk
It can be puzzling that people fall for "romance scams" run by people whose voice they haven't even heard, but it's actually a safer space for that kind of seducer to operate: the low-fi channel avoids all sorts of information leaks.
Dating apps are skewed: men receive little attention while women have an overwhelming amount of attention.
Porn satisfies our most base sexual functions while abandoning truly intimate connections.
The ultimate goal of sexual unions, namely children, has been demonized and turned into something to avoid. After-school specials since the 80s have made pregnancy a horror to avoid instead of a joy to embrace.
AI is just the latest iteration of technology increasing the divide between the sexes.
When the clankers come, we're fucked.
I'm not following your train of thought here. Why is this the fault of computing?
I don't draw the same conclusion. I think they've made teen pregnancy a horror to avoid, which is totally fair.
Pregnancy can be employment-disrupting, and a horror if you're not financially ready to raise a child. Teen pregnancy can end one's future, one's educational and career prospects, before it even begins. The steady and nearly-uninterrupted decline in teen pregnancy from its peak in the early 90s is an absolute miracle of sex education.
The birth rate for women 20-24 was cut in half from 2005 to 2023, and the birth rate for teens under 20 dropped by two-thirds [1], which is frankly amazing progress.
1: https://usafacts.org/articles/how-have-us-fertility-and-birt...
Loving AI bots. Killing yourself based on what an AI bot says.
It's hard to believe any of this is real, or should be.
We don't have digital superhumans. These simulacra are accessed primarily via corporate-controlled interfaces. The goal of their masters is to foster dependence and maximize extraction.
Lonely people forming relationships with digital minds designed to be appealing to them is sad, sure, but the reality is much sadder. In reality these people aren't even talking to another person, digital or otherwise, just a comparatively simple plausibility engine which superficially resembles a digital person if you're not paying much attention.
How do you know that? Maybe it's the same argument as solipsism, or the Chinese room thought experiment: that these "digital superhumans" are stochastic parrots too, just like our current LLMs.
>exploited until the legal pressure piled up
Being given access to a relationship is not exploitation. In some ways AI relationships are better than human ones.
It's like paying someone to be your friend and saying "wow, this is so much easier than other friendships." Of course it is; that's what you're paying for. Nothing wrong with that per se: a lot of therapy, for example, is paying someone to pay close attention to you. But it's not the same sort of thing at all.
I've been reading a lot of "screw 'em" comments re: the deprecation of 4o, and I agree there are some serious cases of AI psychosis among the people who are hooked, but damn, this is pretty cold: these are humans with real feelings and real emotions. Someone on X put it well (I'm paraphrasing):
OpenAI gave these people an unregulated, experimental psychiatric drug in the form of an AI companion, got them absolutely hooked (for better or for worse), and is now taking it away. That's going to cause some distress.
We should all have some empathy for the (very real) pain this is causing, whether it's due to psychosis or otherwise.
At the same time though, I don't think it's healthy to let them go on with 4o either (especially since new users can start chatting with it)
I think we're too attached to media.
I'm sorry but I've played that game with addicts before, and the conclusion I've come to is Fuck 'em.
Now imagine someone else coming to the same conclusion about you.
Psychosis is a real risk for schizophrenia spectrum disorders, but a lot of those relationships look to be rooted in disordered attachment.
This was something I figured out with my first gf and had never seen written down or talked about before: when I praised her she became happy, and the more superlative and excessive the praise got, the happier she became. Calling her gorgeous, the most wonderful person in the world, made her overjoyed.
To be clear, I did love her and found her very attractive, but overstating my feelings for her felt like it came close to lying and emotional manipulation, something I'm still not comfortable with today. But she loved it, so I kept doing it because it made her happy.
Needless to say we didn't stick together (for teen reasons, not these reasons). Later in life I tried doing this again, and I did notice a lot of women respond very positively to this kind of attention and affection; I still get some flak from the missus for apparently not being romantic enough.
Maybe I am overthinking this, or maybe I am emotionally flatter than I should be, but finding such a powerful emotional lever in another human being feels like finding a loaded gun on the table.
So yes, I think it is a bit sexist, or at minimum gender-typing. And I don't think it's necessarily a "lie" for you to overstate your feelings. You might have matured in your approach, but I believe everyone appreciates (to some variable degree) positive affirmation from their partners. Your "lie" was recognizing your partner's need for those inputs, to help them with their self-image and to reassure them in their self-doubts. These are not lies.
For example, if I told you "good thinking," you would probably take it as a token of appreciation. If I told you "wow, you are absolutely brilliant!", you'd probably think I'm mocking you or trying to manipulate you into doing something.
I read these posts and feel sad for these people, and it makes me realize, now as an older guy, how much more I value learning how to skateboard, run a committee, write code, or run a business: any time I spent investigating the real world.
Life is short, these people are getting emotionally nerd sniped and dumped into thought loops that have no resolution, no termination point, no interconnectedness with physical reality, and no real growth. It's worse than any video game that can be beaten.
Maybe that's all uncharitable. I remember when I was a child, people in the academic religious circles my parents ran talked about how "engineers" lacked imagination and could never push human progress forward. Decades later I see that those people have at most written papers on niche, already-dead flights of fancy, where even within their own imaginary field their work has been relegated to the margins. I know what they did isn't "nothing", but man... it's a lot of work for a bunch of paper in a trashcan no one even cites.
It quantifies it as a solved problem.
Why? What drove people to do this in the first place?
This is the conversation we should be having, not which model is currently the most sycophantic. Soon the open models will catch up, and then you'll be able to self-host your own boyfriend/girlfriend, and this time there won't be any feedback loop to keep it in check.
1) 4o was very sycophantic and had no real safeguards against really deep romantic roleplay. It'd "go along" with the user and give minimal to zero pushback: "I feel a connection with you" / "I feel it too", etc.
2) It was good enough at just chatting that, if you didn't push it too hard, it made a reasonable simulacrum of talking to an actual person.
Combine 1 and 2 with people who can't connect well with real people for any number of reasons (physical disabilities, mental health issues, emotional development issues, etc.), and you get r/MyBoyfriendIsAI [0] and the various other places that initially freaked out when 4o was first sunset for GPT-5, and now again.
My AGENTS.md has:
You MUST use a modern but cost effective LLM such as `qwen3-8b` when you need structured output or tool support.
The reality is that almost all LLMs have quirks and each provider tries their best to smooth them over, but you might start seeing stuff specific to OpenAI or the `gpt-4o` model creep into the code. IMO the last thing you want to be doing in 2026 is paying higher costs to use an outdated model being kept on life support, one that needs special tweaks that won't be relevant once it gets the axe.

Also, I don't see any of the big cloud providers (apart from Azure) saying they are bound by professional secrecy laws (e.g. §203 in Germany).
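As a concrete illustration (the names here are hypothetical, not from my actual setup): keep the model id in config instead of hardcoding it, so a deprecation is a one-line change rather than a hunt through the codebase.

    import os

    # Read the model from config/env instead of hardcoding "gpt-4o",
    # so swapping models after a deprecation is a one-line change.
    MODEL = os.environ.get("LLM_MODEL", "qwen3-8b")

    def chat(client, messages):
        # Works with any OpenAI-compatible client; nothing here is
        # model-specific, so MODEL can change without touching this code.
        return client.chat.completions.create(model=MODEL, messages=messages)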
It is not "they go to therapy" because that's cheating; that answers the question "what can they do?" not "what can society do?" (and i think it's a highly speculative answer anyway)
Counseling can help with this to some degree and everyone can make some amount of progress. The question is what we do with those whose "ceiling" remains lower than is tenable for most relationships. For those, there is not a better solution than robots.
However, the always-available, always-validating robot does not serve a valid psychological need. It is a supernormal emotional stimulus. It is not healthy and, like other supernormal stimuli, invariably builds tolerance, desensitization, and dependence. The fast cycle of discontent -> open app -> validation is a huge contributor, in the same way that the constant availability and instant hit of vaping make it incredibly addictive.
So I think there is a case to be made for harm reduction.
On the one hand they're "mourning" their AI partners, but on the other hand they have intelligent and rational conversations about the practicalities of maintaining long running AI conversations. They talk about compacting vs pruning, they run evals with MRCR, etc. These are not (all) crazy people.
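For the curious, here's a minimal sketch of the two strategies they discuss; the function names are hypothetical, not from any particular framework.

    # "Pruning" drops the oldest turns; "compacting" replaces them with a summary.

    def prune(messages, keep_last=50):
        # Keep the system prompt plus the most recent turns; drop the rest.
        system, rest = messages[:1], messages[1:]
        return system + rest[-keep_last:]

    def compact(messages, summarize, keep_last=20):
        # Replace older turns with a model-written summary. `summarize` is
        # any function that turns a list of messages into a short string.
        system, rest = messages[:1], messages[1:]
        old, recent = rest[:-keep_last], rest[-keep_last:]
        if not old:
            return messages
        note = {"role": "system",
                "content": "Summary of earlier conversation: " + summarize(old)}
        return system + [note] + recent

    # Pruning is cheap but forgets; compacting preserves the gist at the cost
    # of an extra model call, hence the recall evals (e.g. MRCR) people run.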