Suppose a human discovers a mathematical method or result, but the formal proof is generated (and even cross-verified) by multiple LLMs, and the original author cannot fully reproduce the proof on their own.
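For concreteness, here is a minimal, purely hypothetical Lean 4 sketch of the kind of machine-checkable artifact I mean by "formal proof" (the theorem name and statement are just illustrative, not any specific result):

```lean
-- Hypothetical example: a formal proof the Lean kernel can check mechanically,
-- regardless of whether a human or an LLM wrote it.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```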
Should such AI-generated proofs be considered valid and publishable?
What standards should apply when the idea is human-created, but the proof is AI-derived?
Curious to hear opinions from mathematicians, engineers, researchers, and journal editors. This feels like an important shift in how we think about proofs and authorship.