But the biggest problem is that even humans are subject to hallucination; we call it being delusional, or being drugged. So hallucination is inevitable from first principles.
No one uses the No Free Lunch theorem to "prove" that LLMs can't learn to be the best optimizers, because it would equally prove that people can't be the best optimizers. Yet we manage somehow, so the theorem is irrelevant here.
This is a fallacy of proving too much.
bell-cot•2h ago
But would any such care to comment on the consequences if this "it is impossible, even in theory, to eliminate LLM hallucinations" result holds up?
hilariously•1h ago