Interesting. Perplexity did that as well, but I've made sure it stops doing that.
Might be relevant for others: https://www.perplexity.ai/search/hey-hey-do-you-remember-whe...
Of course, LLMs can still speak about probabilities and mimic uncertainty, but that’s likely (heh) coming from their training data on the subject matter, not their actual confidence.
Humans are interesting because they employ a two-phase approach: when we’re learning, we fake confidence (you’d never write “I don’t know” on a test unless you truly had nothing of value to say), but during inference, we communicate our confidence. Some humans suffer from underconfidence or overconfidence, but most just seem to know innately how to do this.
Can anyone who works on LLMs clarify whether my understanding is correct?
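Not an expert, but one measurable notion of "confidence" an LLM does have is the probability its softmax assigns to each candidate next token, which is a separate quantity from any probability it *verbalizes* in its answer. A toy sketch (made-up vocabulary and logits, just to illustrate the mechanism):

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits over three candidate words.
vocab = ["probably", "maybe", "definitely"]
logits = [2.0, 1.0, 0.5]

probs = softmax(logits)
for word, p in zip(vocab, probs):
    print(f"{word}: {p:.3f}")
```

The point: even if the sampled token is "probably", the model's internal distribution over tokens is a different thing from the colloquial meaning of "probably" in the generated sentence, and nothing forces the two to be calibrated against each other.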
OkayPhysicist•1h ago
The other phenomenon I would love to test is whether the act of surveying people affected their declared odds. Not sure how to get good numbers out of that, but I could see the LLM vs surveyed human discrepancy arising from people using "probably" differently in their everyday writing, as opposed to when asked point-blank what "probably" means.