Proof validation is the perfect solution to this. I'd love to see future LLMs able to formalize their proofs with a feedback loop from something like Lean or Coq, so they can verify that no hallucinations have crept in (a rough sketch of such a loop is below).
I wonder who gets first author credits on that paper.
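To make the idea concrete, here's a minimal sketch of the feedback loop, assuming Lean 4 is on your PATH and a hypothetical generate_proof(goal, errors) callable stands in for the LLM (both names are made up for illustration):

    import subprocess
    import tempfile

    def lean_check(source: str) -> tuple[bool, str]:
        """Write the candidate proof to a .lean file and let Lean elaborate it.
        Returns (ok, diagnostics)."""
        with tempfile.NamedTemporaryFile("w", suffix=".lean", delete=False) as f:
            f.write(source)
            path = f.name
        result = subprocess.run(["lean", path], capture_output=True, text=True)
        return result.returncode == 0, result.stdout + result.stderr

    def prove_with_feedback(goal: str, generate_proof, max_rounds: int = 5):
        errors = ""
        for _ in range(max_rounds):
            candidate = generate_proof(goal, errors)  # LLM drafts a formal proof
            ok, errors = lean_check(candidate)        # Lean accepts it or says why not
            if ok:
                return candidate  # machine-checked: no hallucinated steps survive
        return None  # out of attempts; hand back to a human

The point is that the proof assistant, not the model, is the source of truth: the model only ever keeps an answer the checker has actually accepted.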
xeonmc•11h ago