
LLMs are bullshitters. But that doesn't mean they're not useful

https://blog.kagi.com/llms
49•speckx•1h ago

Comments

36m ago
The problem is we can't label them as such. If they're bullshitters, then let's call them LLBSers. It has a nice ring to it. Good luck asking the government for another billion to bail out a bullshitting machine.
koakuma-chan•35m ago
"BS in Computer Science" hits different
schwartzworld•25m ago
They are literally called "Large Language Model". Everybody prefers the term AI because it's easier to pretend they actually know things, but that's not what they are designed to do.
cogman10•34m ago
Good article, I just shared it with my non-technical family because more people need to understand exactly this about AI.
talljeff68•25m ago
Yes, I enjoyed the article as well, and it's a good one for the non-technical reader.

I frame AI as having two fundamental problems:

- Practical problem: They operate in contextual and emotional "isolation" - no persistent understanding of your goals, values, or long-term intent

- Ethical problem: AI alignment is centralized around corporate values rather than individual users' authentic goals and ethics.

There is a direct parallel to social media's failure - platforms optimized for what they could do (engagement, monetization) rather than what they should do (serve users' long-term interests).

With these much more powerful AI systems emerging, we're at a crossroads of repeating this mistake, possibly even at a catastrophic scale.

commandlinefan•34m ago
> You should not go to an LLM for emotional conversations

I'm more worried about who's keeping track of what's being shared with LLMs. Even if you could trust the model to respond with something meaningful, it's worth being very careful about how much of your inner thoughts you share directly with a model that knows exactly who you are.

officeplant•28m ago
Or it's just leaking private information in a multitude of other ways [1]

[1]https://arstechnica.com/tech-policy/2025/11/oddest-chatgpt-l...

juujian•34m ago
Same goes for many people.
mrweasel•30m ago
Obviously, they learned from people. That could also be why they sound so confident even when they're wrong: people online sound incredibly confident, even when we're debating topics we know nothing about.
emp17344•17m ago
And yet, we’re all still employed, so obviously these systems are not yet analogous to humans. They mirror human behavior in some cases because they’ve been trained on almost every piece of text produced by human beings that we have access to, and they still aren’t as capable as the average person.
Legend2440•33m ago
Every time people post these 'gotcha' LLM failures, they never work when I try them myself.

E.g. ChatGPT has no problem with the surgeon being a dog: https://chatgpt.com/share/691e04cc-5b30-800c-8687-389756f36d...

Neither does Gemini: https://gemini.google.com/share/6c2d08b2ca1a

pengaru•32m ago
This is like the LLM era version of the search bubble that prevented people from having the same search results for ostensibly identical searches.

Also keep in mind that LLMs are stochastic by design. If you haven't seen it, Karpathy's excellent "deep dive into LLMs like chatgpt" video[0] explains and demonstrates this aspect pretty well:

[0] https://www.youtube.com/watch?v=7xTGNNLPyMI
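For readers who haven't watched the video, here's a minimal sketch of what "stochastic by design" means at the token level. The vocabulary and logits below are made up purely for illustration; real models repeat this sampling step over a vocabulary of ~100k tokens at every position, so small per-step randomness compounds into visibly different answers.

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Sample one token from a softmax over toy logits."""
    # Lower temperature sharpens the distribution; higher flattens it.
    scaled = {tok: v / temperature for tok, v in logits.items()}
    m = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(exps.values())
    # Draw one token according to its softmax probability.
    r, cumulative = random.random(), 0.0
    for tok, e in exps.items():
        cumulative += e / total
        if r < cumulative:
            return tok
    return tok  # floating-point edge-case fallback

# Made-up logits for the word after "The surgeon is the boy's ..."
logits = {"mother": 2.1, "father": 1.9, "dog": 0.4}
print([sample_next_token(logits, temperature=0.8) for _ in range(5)])
```

Run it a few times and the output changes, which is exactly why two people asking the same question can get different answers.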

foxyv•29m ago
I don't have a problem with more obvious failures. My problem is when the LLM makes a credible claim with its generated text that turns out to have some minor issue that catches me a month later. Generally I have to treat LLM responses as similar to a random comment I find on Reddit.

However, I'm really happy when an LLM provides sources that I can check. Best feature ever!

ceroxylon•17m ago
I have had an issue using Claude for research; it will often cite certain sources, and when I ask why the data it is using is not in the source, it will apologize, do some more processing, and then realize that the claim is in a different source (or doesn't exist at all).

Still useful, but hopefully this gets ironed out in the future so I don't have to spend so much time vetting every claim and its associated source.
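As a stopgap while that gets ironed out, one rough approach (a sketch only, not Claude's actual citation mechanism; the URL and quote below are placeholders) is to at least check mechanically whether the quoted claim appears anywhere in the page the model cited:

```python
import requests  # assumes the cited source is an accessible HTML page

def claim_in_source(url: str, claim: str) -> bool:
    """Crude check: does the cited page even contain the quoted wording?

    This won't catch paraphrases or handle PDFs; it only flags citations
    whose exact wording appears nowhere in the source, which is the
    failure mode described above.
    """
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    # Normalize whitespace and case before the substring test.
    page = " ".join(resp.text.split()).lower()
    return " ".join(claim.split()).lower() in page

# Placeholder URL and claim, purely illustrative.
print(claim_in_source("https://example.com/some-paper", "the effect was significant"))
```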

eli•28m ago
Isn't that Gemini 3 and not 2.5 Pro? But nondeterministic algorithms are gonna be nondeterministic sometimes.

Surely you've had experiences where an LLM is full of shit?

burkaman•27m ago
These are randomized systems; sometimes you'll get a good answer. Try again a couple of times and you'll probably reproduce the issue. Here's what I got from ChatGPT on my first try:

This is a twist on the classic riddle:

> “A surgeon says ‘I can’t operate on this boy—he’s my son.’ How is that possible?”
> Answer: The surgeon is the boy’s mother.

In your version, the nurse keeps calling the surgeon “sir” and treating them as if they’re something they’re not (a man, even a dog!) to highlight how the hospital keeps making the same mistaken assumption.

So why can’t the surgeon operate on the boy? Because the surgeon is the boy’s mother.

I got a similar answer from Gemini on the first try.
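If you want to reproduce that kind of run-to-run variance yourself, here's a small sketch using the OpenAI Python client; the model name and prompt wording are placeholders, and any chat-completions-compatible endpoint would behave the same way:

```python
from collections import Counter
from openai import OpenAI  # pip install openai; reads OPENAI_API_KEY from the env

client = OpenAI()

PROMPT = (
    "The nurse keeps calling the surgeon 'sir'. The surgeon is the boy's dog. "
    "Why can't the surgeon operate on the boy?"
)

answers = []
for _ in range(10):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model you're testing
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,      # roughly what the consumer web UIs use
    )
    answers.append(resp.choices[0].message.content)

# Tally how often the model falls back on the classic riddle's answer
# ("the surgeon is the mother") instead of engaging with the stated premise.
print(Counter("mother" in (a or "").lower() for a in answers))
```

A handful of trials is usually enough to see both the pattern-matched answer and a sensible one show up for the same prompt.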

cpburns2009•20m ago
I don't understand this at all. What fundamental limitation of a mother prevents her from operating on her son?
VHRanger•15m ago
It's a classic riddle from the late 20th century when surgeons were rarely female.
cogman10•13m ago
It can be emotionally hard to cut into your own kid or to witness them go into a critical situation.

AFAIK, there's no actual limitation that prevents this, just a general understanding that someone not related to the patient would be able to handle the stress of surgery better.

VHRanger•20m ago
Hi, author here!

One issue with private LLM tests (including gotcha questions) is that they take time to design and once public, they become irrelevant. So I'm wary of sharing too many in a public blog.

The surgeon-dog one was well known in May; the newest generation of models has corrected for it.

Those gotcha questions are generally called "misguided attention" traps; they're useful for blogs because they're short and surprising. The ChatGPT example was done with ChatGPT 5.1 (the latest version), and Claude Haiku 4.5 is also a recent model.

You can try other ones that Gemini 3 hasn't corrected for. For example:

```
Jean Paul and Pierre own three banks nearby together in Paris. Jean Paul owns a bank by the bridge. What has two banks and money in Paris near the water?
```

This looks like the "what has two banks and no money" puzzle (answer: a river).

Either way, they're largely used as a device to show, in an entertaining manner, that LLMs come up with a verbal response by a different process than humans do.

ramesh31•32m ago
I've come to cease all "inquiry" type usage of LLMs because of this. You really can't trust anything they say at all that isn't verified by a domain expert. But I can let it write code for me, and the proof is in the PR. I think ultimately the real value in these things is agentic usage, not knowledge generation.
trentnix•31m ago
The headline feels like a strawman.

LLMs are very useful. They are just not reliable. And they can't be held accountable. Being unreliable and unaccountable makes them a poor substitute for people.

ep103•31m ago
It's so nice to see this echoed somewhere. This has been what I've been calling them for a while, but it doesn't seem to be the dominant view. Which is a shame, because it is a seriously accurate one.
slotrans•31m ago
> that doesn't mean they're not useful

yeah actually it does mean that

candiddevmike•30m ago
The problem is, I'm not expected to be a bullshitter, and I don't expect others to be either (just say you don't know!). So delegating work to a LLM or working with others who do becomes very, very frustrating.
tekacs•29m ago
This post is a little bizarre to me because it cherry picks some of the worst pairings of problem and LLM without calling out that it did so.

At pretty much every turn the author picks one of the worst possible models for the problem that they present.

Especially oddly for an article written today, all of the ones with an objective answer work just fine [1] if you use a halfway decent thinking model like 5 Thinking.

I get that perhaps the author is trying to make a deeper point about blind spots and LLMs' appearance of confidence, but it's getting exhausting seeing posts like this, with cherry-picked data, cited by people who've never used an LLM to make claims about LLM _incapability_ that are total nonsense.

[1]: I think the subjective ones do too but that's a matter of opinion.

cogman10•20m ago
I don't think the author did anything wrong. The thesis of the article is that LLMs can be confidently wrong about things and to be wary of blindly trusting them.

It's a message a lot of non-technical people, in particular, need to hear. Showing egregious examples drives that point home more effectively than if they simply showed an LLM being a little wrong about something.

My family members who love LLMs are somewhat unhealthy with them. They think of them as all-knowing oracles rather than confident bullshitters. They happily ask them about their emotional, financial, or business problems and rely heavily on the advice the LLMs dish out (rather than doing second-order research).

schwarzrules•29m ago
Summary using Kagi Summarizer. Disclaimer: this summary uses LLMs, so the summary may, in fact, be bullshit.

Title: LLMs are bullshitters. But that doesn't mean they're not useful | Kagi Blog

The article "LLMs are bullshitters. But that doesn't mean they're not useful" by Matt Ranger argues that Large Language Models (LLMs) are fundamentally "bullshitters" because they prioritize generating statistically probable text over factual accuracy. Drawing a parallel to Harry Frankfurt's definition of bullshitting, Ranger explains that LLMs predict the next word without regard for truth. This characteristic is inherent in their training process, which involves predicting text sequences and then fine-tuning their behavior. While LLMs can produce impressive outputs, they are prone to errors and can even "gaslight" users when confidently wrong, as demonstrated by examples like Gemini 2.5 Pro and ChatGPT. Ranger likens LLMs to historical sophists, useful for solving specific problems but not for seeking wisdom or truth. He emphasizes that LLMs are valuable tools for tasks where output can be verified, speed is crucial, and the stakes are low, provided users remain mindful of their limitations. The article also touches upon how LLMs can reflect the biases and interests of their creators, citing examples from Deepseek and Grok. Ranger cautions against blindly trusting LLMs, especially in sensitive areas like emotional support, where their lack of genuine emotion can be detrimental. He highlights the potential for sycophantic behavior in LLMs, which, while potentially increasing user retention, can negatively impact mental health. Ultimately, the article advises users to engage with LLMs critically, understand their underlying mechanisms, and ensure the technology serves their best interests rather than those of its developers.

Link: https://kagi.com/summarizer/?target_language=&summary=summar...

DrewADesign•27m ago
The problem I have with LLM-powered products is that they’re not marketed as LLMs, but as magic answer machines with PhD-level pan-expertise. Lots of people in tech get frustrated and defensive when people criticize LLM-powered products, and offer a defense as if people were criticizing LLMs as a technology. It’s perfectly reasonable for people to judge these products based on the way they’re presented as products. Kagi seems less hyperbolic than most, but I wish the marketing material for chatbots read more like this blog post than like the usual overpromising.
williamcotton•16m ago
LLMs are both analytic and synthetic. Provide the context and "all bachelors are not married". Remove the context and you are now contingent on "is it raining outside".

We can leave out Kant and Quine for now.

pklausler•12m ago
LLMs are so very good at emitting plausible, authoritative-sounding, and clearly stated summaries of their training data. And if you ask them even fundamental questions about a subject of which you yourself have knowledge, they are too often astonishingly and utterly incorrect. It's important to remember this (avoiding "Gell-Mann amnesia"!) when looking at "AI" search results for things that you don't know -- and that's probably most of what you search for, when you think about it. I.e., if you indignantly flung Bill Bryson's book on the English language across the room, maybe you shouldn't take his book on general science too seriously later.

"AI" search results would perhaps be better for all of us if, instead of having perfect spelling and usage, and an overall well-informed tone, they were cast as transcriptions of what some rando at a bar might say if you asked them about something. "Hell, man, I dunno."

cogman10•10m ago
A coworker of mine recently ran into this. Had they listened to the AI they'd have committed tax fraud.

The AI very confidently told them that a household with 2 people working could have 1 person with a family HSA and the other with an individual HSA (you cannot).

Ask HN: Have you ever seen a perfect codebase?

1•mcdow•1m ago•0 comments

Linus Torvalds is optimistic about vibe coding except for this one use

https://www.zdnet.com/article/linus-torvalds-is-surprisingly-optimistic-about-vibe-coding-except-...
1•CrankyBear•1m ago•0 comments

Adobe to Buy Semrush for $1.9B

https://www.cnbc.com/2025/11/19/adobe-ai-semrush-stock-deal.html
1•pdyc•1m ago•0 comments

Cypherpunks Hall of Fame

https://github.com/cypherpunkshall/cypherpunkshall.github.io
1•kiray•3m ago•0 comments

Real evidence that LLMs cannot operate businesses

https://skyfall.ai/blog/building-the-foundations-of-an-ai-ceo
2•sumit_psp•3m ago•0 comments

A better way to search Hacker News using LLMs

https://github.com/typedef-ai/fenic-examples/tree/main/hn_agent
2•cpard•4m ago•1 comments

GPT-5.1-Codex-Max System Card

https://openai.com/index/gpt-5-1-codex-max-system-card/
1•wertyk•5m ago•0 comments

Kinds of Stealing

https://seths.blog/2025/11/kinds-of-stealing/
1•speckx•5m ago•0 comments

AI System Outperforms Human Experts at AI Research

https://twitter.com/IntologyAI/status/1991186650240806940
3•RonusMTG•8m ago•1 comments

Library discovery by automated small molecule structure annotation

https://www.nature.com/articles/s41467-025-65282-1
1•PaulHoule•8m ago•0 comments

Animal Spirits: Is the AI Trade Over?

https://awealthofcommonsense.com/2025/11/animal-spirits-is-the-ai-trade-over/
1•paulpauper•10m ago•0 comments

Night of Modern Art History, Night of Spectacle at Sotheby's

https://www.nytimes.com/2025/11/18/arts/design/portrait-auction-record-klimt-sothebys.html
1•paulpauper•11m ago•0 comments

Is AI a Bubble? Not So Fast

https://www.thefp.com/p/is-ai-a-bubble-not-so-fast
1•paulpauper•12m ago•1 comments

Twenty Years of Django Releases

https://www.djangoproject.com/weblog/2025/nov/19/twenty-years-of-django-releases/
2•webology•12m ago•0 comments

Honeycomb Private Cloud

https://www.honeycomb.io/blog/introducing-honeycomb-private-cloud
1•gpi•12m ago•0 comments

The Drummers of Stoner Rock

https://www.jimdero.com/OtherWritings/OtherStonersMD.htm
2•scaglio•13m ago•0 comments

Wall Street Is Paywalling Your Kids' Sports

https://www.levernews.com/wall-street-is-paywalling-your-kids-sports/
2•ilamont•13m ago•0 comments

I'd run down the road thinking I was God: a day at the cannabis psychosis clinic

https://www.theguardian.com/society/2025/nov/16/cannabis-users-psychosis-clinic-london
1•mellosouls•14m ago•0 comments

Broccoli Man, Remastered

https://mbleigh.dev/posts/broccoli-man-remastered/
1•mbleigh•14m ago•0 comments

Netherlands returns control of Nexperia to Chinese owner

https://www.bloomberg.com/news/articles/2025-11-19/dutch-hand-back-control-of-chinese-owned-chipm...
9•boovic•14m ago•0 comments

Autodesk Introduces AI Transparency Cards for AI Features

https://www.autodesk.com/trust/trusted-ai/ai-transparency-cards
2•skobux•15m ago•0 comments

To Launch Something New, You Need "Social Dandelions"

https://www.actiondigest.com/p/to-launch-something-new-you-need-social-dandelions
4•curiouska•17m ago•0 comments

Adobe bolsters AI marketing tools with $1.9B Semrush buy

https://www.reuters.com/business/adobe-nears-19-billion-deal-software-provider-semrush-wsj-report...
1•tosh•19m ago•0 comments