The invention of fast food does not change anyone's ability to exercise. When fast food was invented, people exercised far more than they do today.
Time constraints have caused an increase in fast food consumption and a reduction in exercise.
Both issues then seem to be addressed by coercing behaviour change, when what is needed is a systemic change to the environment that provides preferable options.
And, tbh, I often try to remember to do the same.
Used to be only the wealthiest students could afford to pay someone else to write their essay homework for them. Now everyone can use ChatGPT.
Used to be you had to be a Trumpian-millionaire/Elonian-billionaire to afford an army of Yes-men to agree with your every idea. Now anyone can have that!
https://courts.delaware.gov/Opinions/Download.aspx?id=392880
> Meanwhile, Kim sought ChatGPT’s counsel on how to proceed if Krafton failed to reach a deal with Unknown Worlds on the earnout. The AI chatbot prepared a “Response Strategy to a ‘No-Deal’ Scenario,” which Kim shared with Yoon. The strategy included a “pressure and leverage package” and an “implementation roadmap by scenario.”
It's really nothing new. It takes significant mental energy (a finite resource) to question what you're being told and to do your own fact-checking. Instead, people by default gravitate toward echo chambers where they can feel good about being part of a group bigger than themselves, and can spend their limited energy on what really matters in their lives.
I disagree. What's new is that this flattery is individually, personally targeted. The AI user is given the impression that they're having a back-and-forth conversation with a single trusted friend.
You don't have the same personal experience passively consuming political mass media.
I would have downvoted your comment, except you can't downvote direct replies on HN. ;-)
The village idiot used to be found out because no one else in the village shared the same wingnut views.
Partisan media gave you two poles of wingnut views to choose from for reinforcement.
Social media allowed all the village idiots to find each other and reinforce each other's shared wingnut views, of which there are thousands to choose from.
Now with LLMs you can have personalized reinforcement of any newly invented wingnut view, on the fly. So people can get into very specific self-radicalization loops, especially the mentally ill.
I don't quite understand why other people seem to crave that. Every time I read about someone who has gone down a dark road using LLMs, I'm amazed at how completely they "fall" for the LLM, often believing it's sentient. It's just a box of numbers, really cool numbers, with really cool math, that can do really cool things, but still just numbers.
So, "great idea" is coming from the reinforcement-learning instruction rather than the answer portion of the generation.
I like to make a subagent play the "devil's advocate" on a subject. It usually does all the arguing for me as to why the main agent has it wrong. This commonly results in better decisions than I'd have made alone.
Asking the agent to interview me on why I disagree helps too, but is more effort.
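For anyone curious, a minimal sketch of that devil's-advocate-subagent pattern might look like the following. All names here are illustrative, and `ask` stands in for whatever chat-completion call you actually use (OpenAI client, local model, etc.) — it just takes a system prompt and a user message and returns text:

```python
# Sketch of the "devil's advocate subagent" pattern: main agent drafts an
# answer, a critic subagent attacks it, then the main agent revises.
# `ask(system, user) -> str` is a placeholder for any chat-completion call.

MAIN_SYSTEM = "You are a helpful engineering assistant."

CRITIC_SYSTEM = (
    "You are a devil's advocate. Argue as strongly as you can AGAINST the "
    "proposal below. List concrete risks, counterexamples, and weak "
    "assumptions. Do not soften your critique with praise."
)

def decide_with_critic(ask, proposal: str) -> dict:
    """Run the main agent, then a critic pass, then a synthesis pass."""
    # 1. Main agent produces a first draft.
    draft = ask(MAIN_SYSTEM, proposal)
    # 2. Critic subagent argues against both the proposal and the draft.
    critique = ask(
        CRITIC_SYSTEM,
        f"Proposal:\n{proposal}\n\nDraft answer:\n{draft}",
    )
    # 3. Main agent revises its draft in light of the critique.
    final = ask(
        MAIN_SYSTEM,
        f"Original proposal:\n{proposal}\n\nDraft:\n{draft}\n\n"
        f"Critique:\n{critique}\n\nRevise the draft, addressing the critique.",
    )
    return {"draft": draft, "critique": critique, "final": final}
```

The key design point is that the critic gets a different system prompt with no stake in the draft, so the usual sycophantic agreement works against the answer instead of for it.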
Also, many, many people suffer from low self-esteem, and being showered with endorsement and affirmation by something that talks like an authority figure must be very addictive.
I've talked with my family about LLMs and I think I've conveyed the "it's a box of numbers" but I might need to circle back. Just to set some baseline education, specifically to guard against this kind of "psychosis". Hopefully I would notice the signs well before it got to a dangerous point but, with LLMs you can go down that rabbit hole quickly it seems.
Precisely. Even for technical people, I doubt it's possible to totally stop your own brain from ever, unconsciously, treating the entity you're speaking to like a sentient being. Most technical people still put some emotion into their prompts: saying please or thank you, giving qualitative feedback for no reason, expressing anger toward the model, etc.
It's just impossible to separate our capacity for conversation from our sense that we're actually talking to "someone" (in the vaguest sense).
I don't need the patronizing, just give me the damn answer.
Realizing that the people they’re targeting DO need that is kind of frightening.
These days most LLMs respond with unsolicited grandiose feedback: you've made a realisation very few people are capable of. Your understanding is remarkable. You prove to have a sharp intellect and deep knowledge.
This got me to test it by throwing out nonsensical observations about the world; it always takes my side and praises my views.
To be fair, some people are like that too.
Not to criticize at all, but it's remarkable that LLMs have already become so embedded that when we get the sense they're lying to us, the instinct is to go ask another LLM and not some more trustworthy source. Just goes to show that convenience reigns supreme, I suppose.
The cynical part of me has this theory that, at least for some of them, it's the other way around. It's not that they see AI as sentient; it's that they never saw other human beings that way in the first place. Other people are just means for them to reach their goals, or obstacles. In that sense, AI is not really different for them, except it's cheaper and guaranteed to always agree with them.
That's why I believe CEOs, who are more likely to be sociopaths by natural selection, genuinely believe AI is a good replacement for people. They're not looking for individuals with personal thoughts that may contradict theirs at some point; they're looking for yes-men as a service.
Related: if you suggest a hypothesis, you'll get biased results (in other words, you'll think you're right, but the true information stays hidden).
I say "I think you are getting me to chase a guess, are you guessing?"
90% of the time it says "Yes, honestly I am. Let me think more carefully."
That was a copypasta from a chat just this morning
The study explores outdated models: GPT-4o was notoriously sycophantic, and GPT-5 was specifically trained to minimize sycophancy. From GPT-5's announcement:
>We’ve made significant advances in reducing hallucinations, improving instruction following, and minimizing sycophancy
And then there was the whole drama in August 2025, when people complained that GPT-5 was "colder" and "lacked personality" (i.e., less sycophantic) compared to GPT-4o.
It would be interesting to study the evolution of sycophantic tendencies (decrease/increase) in models from version to version, i.e. whether companies are actually doing anything about it.
jmclnx•1h ago
Anyway, no real surprise: we have many examples of people ignoring facts and moving to media that supports their views, even when those views are completely wrong. Why should AI be different?