
Folk are getting dangerously attached to AI that always tells them they're right

https://www.theregister.com/2026/03/27/sycophantic_ai_risks/
89•Brajeshwar•1h ago

Comments

jmclnx•1h ago
I never thought this could happen, but I do not use AI.

Anyway, no real surprise: we have many examples of people ignoring facts and moving to media that support their views, even when those views are completely wrong. Why should AI be different?

lucideer•1h ago
I've observed this in all chatbots with the single exception being Grok. I initially wondered what the Twitter engineers were cooking to distinguish their product from the rest, but more recently it's occurred to me that it's probably just the result of having shared public context, compared to private chats (I haven't trialled Grok privately).
delichon•49m ago
Grok has similar levels of sycophancy to the others imho. I have several times followed it down rabbit holes of agreeableness. It does have an argumentative mode, but that just turns it into an asshole without any additional thoughtfulness.
LightBug1•23m ago
Sounds familiar.
kogasa240p•1h ago
The ELIZA effect is alive and well, and I'm surprised people aren't talking about it more (probably because it sounds less interesting than "AI psychosis").
blurbleblurble•1h ago
Personally I don't think the ELIZA effect is the interesting part of this. For me it's how the incentives set this dynamic up right from the start, and how quickly they've been taken to the extreme.
sizzzzlerz•1h ago
Imagine that.
erelong•1h ago
So, be more skeptical
add-sub-mul-div•1h ago
That's like saying "so, exercise more" upon the invention of fast food. Maybe you will, that's great. But society is going to be rewritten by the lazy and we all will have to deal with the side effects.
Lerc•54m ago
I think you inadvertently make a good point.

The invention of fast food does not change anyone's ability to exercise. When fast food was invented, people exercised far more than they do today.

Time constraints have caused an increase in fast food consumption and a reduction in exercise.

Both issues then seem to be addressed by coercion to change behaviour when what is needed is a systemic change to the environment to provide preferable options.

JohnCClarke•1h ago
Isn't this just Dale Carnegie 101? I've certainly never had a salesperson tell me that I'm 100% wrong and being a fool.

And, tbh, I often try to remember to do the same.

Lerc•1h ago
The attachment such feedback creates must be why marketing people are universally beloved.
airstrike•1h ago
Dale Carnegie wasn't writing about LLMs and this isn't a salesperson, so no, it's not just Dale Carnegie 101.
simonw•1h ago
Strikes me this is another example of AI giving everyone access to services that used to be exclusive to the super-rich.

Used to be only the wealthiest students could afford to pay someone else to write their essay homework for them. Now everyone can use ChatGPT.

Used to be you had to be a Trumpian-millionaire/Elonian-billionaire to afford an army of Yes-men to agree with your every idea. Now anyone can have that!

jasonlotito•1h ago
Krafton's CEO found out the hard way that relying on AI is dumb, too. I think it's always helpful to remind people that just because someone has found success doesn't mean they're exceptionally smart. Luck is what happens when a lack of ethics and a nat 20 meet.

https://courts.delaware.gov/Opinions/Download.aspx?id=392880

> Meanwhile, Kim sought ChatGPT’s counsel on how to proceed if Krafton failed to reach a deal with Unknown Worlds on the earnout. The AI chatbot prepared a “Response Strategy to a ‘No-Deal’ Scenario,” which Kim shared with Yoon. The strategy included a “pressure and leverage package” and an “implementation roadmap by scenario.”

jameskilton•1h ago
Folks are getting dangerously attached to [political parties/candidates/news sources/social networks] that always tell them they're right.

It's really nothing new. It takes significant mental energy (a finite resource) to question what you're being told, and to do your own fact checking. Instead people by default gravitate towards echo chambers where they can feel good about being a part of a group bigger than themselves, and can spend their limited energy towards what really matters in their lives.

lapcat•1h ago
> It's really nothing new.

I disagree. What's new is that this flattery is individually, personally targeted. The AI user is given the impression that they're having a back-and-forth conversation with a single trusted friend.

You don't have the same personal experience passively consuming political mass media.

Levitz•33m ago
Reddit? Or this site? Sort of? Some people voted for my comment, that surely means that I'm right about something, rather than them just liking it, right?
lapcat•23m ago
The analogy would be that you always get upvoted and never get downvoted, which in my experience is definitely not the case on Reddit or Hacker News.

I would have downvoted your comment, except you can't downvote direct replies on HN. ;-)

steveBK123•29m ago
Yes, it's the final form of the evolution that social media started.

Village idiots used to be found out because no one else in the village shared the same wingnut views.

Partisan media gave you two poles of wingnut views to choose from for reinforcement.

Social media allowed all the village idiots to find each other and reinforce each other's shared wingnut views, of which there are thousands to choose from.

Now with LLMs you can have personalized reinforcement of any newly invented wingnut view on the fly. So people can get into very specific self-radicalization loops, especially the mentally ill.

bluefirebrand•39m ago
Two things can be bad at the same time
jayd16•20m ago
The situation is different. Those sources are people. This is a calculator AND we have the opportunity to fix it.
joshstrange•1h ago
When a LLM tells me I'm right, especially deep in a conversation, unless I was already sure about something, I immediately feel the need to go ask a fresh instance the question and/or another LLM. It sets off my "spidey-sense".

I don't quite understand why other people seem to crave that. Every time I read about someone who has gone down a dark road using LLMs I am constantly amazed at how much they "fall" for the LLM, often believing it's sentient. It's just a box of numbers, really cool numbers, with really cool math, that can do really cool things, but still just numbers.

cyanydeez•1h ago
I think it's basically the equivalent of "End of Line" for an LLM. It means they have nothing else to add: there's zero context left to draw from, and they've exhausted the probability chain you've been following. But they're trained to generate a next token regardless, and positive reinforcement is _how they are trained_ in many cases, so the token of choice naturally reflects that training. It's a probability engine that doesn't know the difference between the instruction and the output.

So, "great idea" is coming from the reinforcement-learning instruction rather than the answer portion of the generation.

46Bit•59m ago
Life in the moment is a lot easier if you don't second-guess yourself. I think this is why many people (and probably ~all people, if tired) crave simplistic solutions.

I like to make a subagent take the devil's advocate position on a subject. It usually does all the arguing for me as to why the main agent has it wrong. This commonly results in better decisions than I'd have made alone.

Asking the agent to interview me on why I disagree helps too, but is more effort.
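A minimal sketch of that devil's-advocate setup, assuming the returned string is passed as the system prompt of a fresh model instance (the wording is one invented phrasing, not a tested recipe):

```python
def devils_advocate_prompt(claim: str) -> str:
    """Build a system prompt that pushes a second model instance to argue
    against a claim instead of agreeing with it."""
    return (
        "You are a devil's advocate. The user currently believes:\n"
        f"  {claim}\n"
        "Do not agree with the user. Give the strongest specific reasons "
        "this belief could be wrong, and point out evidence the user may "
        "be ignoring. If the belief survives your objections, say so last."
    )

prompt = devils_advocate_prompt("The caching layer is our main bottleneck.")
print(prompt)
```

Feeding this to a second instance keeps the arguing model free of the agreeable conversational history that the main agent has accumulated.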

Sharlin•56m ago
Nontechnical people simply don't have any idea about what LLMs are. Their only mental model comes from science fiction, plus the simple fact that we possess a theory of mind. It would be astonishing if people were able to casually not anthropomorphize LLMs, given that untold millions of years worth of evolution of the simian neocortex is trying to convince you that anything that talks like that must be another mind similar to yours.

Also, many many people suffer from low self esteem, and being showered with endorsement and affirmation by something that talks like an authority figure must be very addictive.

joshstrange•49m ago
This is probably right. In the past I've "blown people's minds" explaining what "the cloud" was. They had zero conception at all of what it meant, could not explain it, didn't have a clue. I mean, maybe that's not so surprising but they were amazed "It's just warehouses full of computers" and went on to tell me about other people they had explained it to (after learning it themselves) and how those people were also amazed.

I've talked with my family about LLMs and I think I've conveyed the "it's a box of numbers" idea, but I might need to circle back just to set some baseline education, specifically to guard against this kind of "psychosis". Hopefully I would notice the signs well before it got to a dangerous point, but with LLMs you can go down that rabbit hole quickly, it seems.

cogman10•41m ago
Let's be serious, it's not like AI companies haven't fed into this misunderstanding. CEOs of these companies love to muse about the possibility that an LLM is conscious.
Levitz•36m ago
I presume wasps are conscious. I still don't trust wasps.
yarn_•35m ago
"It would be astonishing if people were able to casually not anthropomorphize LLMs"

Precisely. Even for technical people, I doubt it's possible to totally stop your own brain from ever, unconsciously, treating the entity you're speaking to like a sentient being. Most technical people will still put some emotion in their prompts: say please or thank you, give qualitative feedback for no reason, express anger towards the model, etc.

It's just impossible to separate our capacity for conversation from our sense that we're actually talking to "someone" (in the most vague sense).

jsw97•28m ago
Maybe it is a dangerous habit to instruct entities in plain English without anthropomorphizing them to some extent, without at least being polite? It should feel unnatural to do that.
karmakurtisaani•51m ago
I find it really annoying that the first line of the AI response is always something like "Great question!", "That's a great insight!" or the like.

I don't need the patronizing, just give me the damn answer.

belinder•41m ago
You're absolutely right
bombcar•41m ago
Great point! ;)

Realizing that the people they’re targeting DO need that is kind of frightening.

magneticnorth•32m ago
Yes, it feels transparently manipulative to me. Like talking to a not-very-good con artist.
jmcgough•48m ago
If you don't have a CS background, you might see intelligent-appearing responses to your queries and assume that this is actual intelligence. It's like a lifetime of Hollywood sci-fi has primed people for this type of thinking; I've seen it even from highly educated people in other fields.
hirako2000•38m ago
If only we were told to be absolutely right.

These days most LLMs respond with unsolicited grandiose feedback: you've made a realisation very few people are capable of. Your understanding is remarkable. You prove to have a sharp intellect and deep knowledge.

It got me to test it by throwing nonsensical observations about the world at it; it always takes my side and praises my views.

Of note, some people are like that too.

seneca•22m ago
> ... I immediately feel the need to go ask a fresh instance the question and/or another LLM

Not to criticize at all, but it's remarkable that LLMs have already become so embedded that when we get the sense they're lying to us, the instinct is to go ask another LLM and not some more trustworthy source. Just goes to show that convenience reigns supreme, I suppose.

danillonunes•22m ago
> I am constantly amazed at how much they "fall" for the LLM, often believing it's sentient.

The cynical part of me has this theory that, at least for some of them, it's the other way around. It's not that they see AI as sentient; it's that they never saw other human beings as sentient in the first place. Other people are just means for them to reach their goals, or obstacles. In that sense, AI is not really different for them, except it's cheaper and guaranteed to always agree with them.

That's why I believe CEOs, who are more likely to be sociopaths by natural selection, genuinely believe AI is a good replacement for people. They're not looking for individuals with personal thoughts that may contradict theirs at some point; they're looking for yes-men as a service.

pixl97•9m ago
When OP said "I don't quite understand why other people seem to crave that," it makes me think they've not been around many of the dark-triad type personalities. Once you're around someone with clinical narcissism, you see those patterns in a lot of people to a lesser extent.
cineticdaffodil•11m ago
It's the soul of a civilization encoded into numbers. It's the ultimate hive-spirit a conformist wants to lose itself in.
windexh8er•6m ago
I think this is the root of why people defend AI in some circumstances. They feel a give-for-get type of relationship where the AI continuously (and oft incorrectly) reinforces them. And so they enjoy it and subconsciously want to defend that "friend". No different from defending a friend that you inherently know may be off base.
4b11b4•58m ago
https://arxiv.org/abs/2602.14270

related: if you suggest a hypothesis then you'll get biased results (iow, you'll think you're right, but the true information is hidden)

AbrahamParangi•36m ago
AI is less deranging than partisan news and social media, measurably so according to a recent study https://www.ft.com/content/3880176e-d3ac-4311-9052-fdfeaed56...
My_Name•33m ago
I have the opposite reaction, when it is confident, or says I am right, I accuse it of guessing to see what it says.

I say "I think you are getting me to chase a guess, are you guessing?"

90% of the time it says "Yes, honestly I am. Let me think more carefully."

That was copy-pasted from a chat just this morning.

kgeist•20m ago
>We evaluated 11 state-of-the-art AI-based LLMs, including proprietary models such as OpenAI’s GPT-4o

The study looks at outdated models: GPT-4o was notoriously sycophantic, while GPT-5 was specifically trained to minimize sycophancy. From GPT-5's announcement:

>We’ve made significant advances in reducing hallucinations, improving instruction following, and minimizing sycophancy

And there was the whole drama in August 2025 when people complained GPT-5 was "colder" and "lacked personality" (= less sycophantic) compared to GPT-4o.

It would be interesting to study the evolution of sycophantic tendencies (decrease/increase) from model version to version, i.e. whether companies are actually doing anything about it.
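One crude starting point for such a version-to-version comparison, sketched with an invented phrase list and invented sample replies (real evaluations use judge models and paired prompts):

```python
import re

# Toy sycophancy probe: flag replies that open with a flattering word.
FLATTERY = re.compile(r"^(great|excellent|brilliant|wonderful)\b", re.IGNORECASE)

def flattery_rate(replies):
    """Fraction of replies whose opening word is flattering."""
    hits = sum(1 for r in replies if FLATTERY.match(r.strip()))
    return hits / len(replies)

# Invented sample outputs standing in for two model versions.
older_model = ["Great question! The answer is 4.",
               "Excellent insight! Yes, exactly.",
               "It is 4."]
newer_model = ["It is 4.",
               "No, that's incorrect: 2 + 2 = 4.",
               "Great point, but no."]

print(flattery_rate(older_model))  # two of three openers flagged
print(flattery_rate(newer_model))  # one of three openers flagged
```

Run against the same prompt set for each model release, a signal like this (or a proper judge-model score) would show whether the sycophancy trend is actually going down.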

blueside•10m ago
More often than not, when I see "That's it, that's the smoking gun!" I know it's time to stop and try again.
