I don't really think people want these things to say "I don't know" - they want them to know.
That's obviously not reasonable for everything, but I bet a lot of hallucinations ARE cases where the model should be able to know, or figure out, the answer. Most people are asking questions with well-known answers.
But I would guess the OpenAI post is correct: fundamentally these models are trained in a way that rewards guessing, which I think makes them more likely to guess even when the answer is within reach.
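The incentive is easy to see with a toy expected-score calculation. Here's a minimal sketch, assuming a grader that gives 1 for a correct answer and 0 otherwise; the confidence values and the -1 wrong-answer penalty are my own illustration, not anything from the OpenAI post:

```python
# Toy expected-score comparison: why accuracy-only grading rewards guessing.
# Assumption (mine): the model assigns confidence p to its best candidate answer.

def expected_score_guess(p: float, wrong_penalty: float = 0.0) -> float:
    """Expected score for answering: p * 1 + (1 - p) * (-wrong_penalty)."""
    return p * 1.0 + (1.0 - p) * (-wrong_penalty)

def expected_score_abstain() -> float:
    """Saying 'I don't know' scores 0 under both schemes."""
    return 0.0

for p in (0.1, 0.3, 0.5, 0.9):
    binary = expected_score_guess(p)                        # accuracy-only grading
    penalized = expected_score_guess(p, wrong_penalty=1.0)  # wrong answers cost -1
    print(f"p={p:.1f}  binary={binary:+.2f}  penalized={penalized:+.2f}  abstain=+0.00")
```

Under binary grading, any nonzero confidence makes guessing strictly better than abstaining, so training pressure pushes toward confident guesses; with a -1 penalty for wrong answers, abstaining wins whenever p < 0.5.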
CodingJeebus•1h ago
Tufte said it best: "There are only two industries that call their customers 'users': illegal drugs and software."