jqpabc123•2h ago
Did somebody expect otherwise?
Isn't virtually everything an LLM produces really just an "opinion" that is kinda/sorta based on its training data?
As far as I know, there is no automated mechanism for verifying "truth" in chatbot output.