I’d say that LLMs may understand better than we do that belief and fact are tightly interwoven, precisely because they don’t grandstand about classifying information.
There’s a tension here: truth exists, yet fiction can be widely accepted as truth, with humans unable to distinguish which is which, all while some or most of us think we can.
I’m not pushing David Hume on you, but I think this is a learning opportunity.
It’s just another round of garbage in, garbage out.
The only way we’ve learned is by referencing previously established, trustworthy knowledge. The scientific consensus is merely a system that vigorously tests previously held beliefs and discards them when they don’t match new evidence. We’ve spent thousands of years living in a world of make-believe. We only learned to emerge from it relatively recently.
It would be unreasonable to expect an LLM to do it without the tools we have.
It shouldn’t be hard to teach an LLM that if a claim can’t be verified against an evidence-based source, it’s not fact.
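As a rough sketch of what that instruction could look like in practice (the example claim, source, and prompt wording below are made up for illustration, not any particular model's API):

    # Minimal sketch: only label a claim FACT when it is grounded in the
    # supplied sources; everything else is UNVERIFIED. The example claim
    # and source below are hypothetical placeholders.

    def build_verification_prompt(claim: str, sources: list[str]) -> str:
        cited = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
        return (
            "Label the claim FACT only if it is directly supported by the\n"
            "numbered sources below. Otherwise label it UNVERIFIED and say why.\n\n"
            f"Sources:\n{cited}\n\n"
            f"Claim: {claim}\nLabel:"
        )

    if __name__ == "__main__":
        prompt = build_verification_prompt(
            claim="Water boils at 100 C at sea level.",
            sources=["Reference tables list the boiling point of water as 100 C at 1 atm."],
        )
        print(prompt)  # would be handed to whatever model is being taught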
fuzzfactor•3mo ago
Sometimes the more fluent the output, the more often the fiction flies under the radar.
For AI, this is likely to peak when the output is as close to human as possible.
Anything less, and the performance will be judged lower by some opinion or another.