So the title of the paper is misleading at this time.
The project webpage has more information about the efforts and how to contribute:
Is the consensus that he never had the proof (he was wrong or was joking) -- or that it's possible we just never found the one he had?
We can't be 100% certain that Fermat didn't have a proof, but it's very unlikely (someone else would almost surely have found it by now).
“Fermat usually did not write down proofs of his claims, and he did not provide a proof of this statement. The first proof was found by Euler after much effort and is based on infinite descent. He announced it in two letters to Goldbach, on May 6, 1747 and on April 12, 1749; he published the detailed proof in two articles (between 1752 and 1755)
[…]
Zagier presented a non-constructive one-sentence proof in 1990”
(https://www.quora.com/What-s-the-closest-thing-to-magic-that... shows that the proof is a bit dense, but experts in the field can fill in the details)
There are many invalid proofs of the theorem, some with flaws that are not at all obvious. It is practically certain that Fermat had one of those in mind when he scrawled his note. He later realized it was flawed and abandoned it, never mentioning the claim again (or correcting the note he scrawled in the margin).
import FLT

theorem PNat.pow_add_pow_ne_pow
    (x y z : ℕ+) (n : ℕ) (hn : n > 2) :
    x ^ n + y ^ n ≠ z ^ n :=
  PNat.pow_add_pow_ne_pow_of_FermatLastTheorem FLT.Wiles_Taylor_Wiles x y z n hn
I make notes all the time that I accidentally discover years later with some amusement.
Would love to know whether (in principle, obviously) the shortest proof of FLT could actually fit in a notebook margin. Since we have an upper bound, there are only a finite number of proof candidates to check to find the lower bound :)
Is Wiles' proof even in ZFC?
One could argue that being a lawyer put Fermat in the more rigorous bracket of contemporary mathematicians, at least.
Fermat lived before the synthesis of calculus. People often talk about the period between the initial synthesis of calculus (around the time Fermat died) and the arrival of epsilon-delta proofs (around 200 years later) as being a kind of rigor gap in calculus.
But the infinitesimal methods used before epsilon-delta have been redeemed by the work on nonstandard analysis. And you occasionally hear other stories that can often be attributed to older mathematicians using a different definition of limit or integral etc than we typically use.
There were some periods and schools where rigor was taken more seriously than others, but the 1600s definitely do not predate the existence of mathematical rigor.
I possess a very simple proof of FLT, and indeed it does not fit in a margin.
I don't ask you to believe me, I just ask you to be patient.
It just seems like it would be as hard to verify the accuracy of the code written to prove a complex theorem like FLT as it is to verify a manuscript written in English. But if you can rely on smaller statements that build on each other, it makes more sense.
2: The purpose of this is largely for the “writing it” part, not to make checking it easier. The computer checks the validity of the proof. (Though a person of course has to check that the formal statement of the result shown is the statement people wanted to show.)
In the past, when proofs were done by hand, little mistakes or little changes could lead to a complete failure of the proof. Mathematicians spent weeks or months redoing the steps and rechecking every little detail.
Machine-assisted provers basically raise the abstraction level of theorem proving. People don’t need to sweat the little details and little changes in the proof steps.
Language-based machine provers also enable wider collaboration, as a problem can be subdivided into smaller problems and farmed out to different people to tackle, perhaps to different experts for different parts of the problem. Since everyone uses the same language, the machine can verify each part, and the overall proof once the parts come together.
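As a sketch of that workflow (toy statements of my own, not from the FLT project): in Lean, an unfinished piece can be marked with `sorry`, so a large result can be stated up front while the checker tracks exactly which parts remain open.

```lean
-- A goal split into lemmas that different people can prove independently;
-- `sorry` marks a piece that is still farmed out (Lean warns until it's proved).
theorem partA : 2 + 2 = 4 := by decide
theorem partB : 12 * 12 = 144 := sorry
theorem combined : 2 + 2 = 4 ∧ 12 * 12 = 144 := ⟨partA, partB⟩
```

Once someone replaces the `sorry` with a real proof, `combined` is fully verified with no further coordination needed.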
You can also think of this in reverse and it might help you understand better: your type checker at compile time is basically providing a sort of "proof" that all the functions and arguments in your program are consistent. Of course, because the type system is not as sophisticated, it can't prove your program is correct, but it at least can prove something about how your program will behave. If you had a more advanced type system, you could, in fact, prove more sophisticated things about your code (for example, that the shapes of all your matrices matched).
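For instance (a toy sketch of my own in Lean, whose dependent types can carry this kind of information): a vector's length can live in its type, so a length mismatch becomes a compile-time type error rather than a runtime bug.

```lean
-- The length `n` is part of the type, so it is checked at compile time.
abbrev Vec (α : Type) (n : Nat) := Fin n → α

-- Adding two vectors only type-checks when their lengths agree;
-- `addVec v w` for vectors of different lengths is rejected by the elaborator.
def addVec {n : Nat} (v w : Vec Nat n) : Vec Nat n :=
  fun i => v i + w i
```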
A great book on this topic is Type Theory and Formal Proof [1].
0. https://lean-lang.org/functional_programming_in_lean/
1. https://www.cambridge.org/core/books/type-theory-and-formal-...
Advanced proofs essentially just consist of a series of assertions “X, therefore Y, therefore Z, …” and the justification for each step isn’t always laid out explicitly. As a result, when you read such a proof, it often takes some work to “convince yourself” the proof is valid. And if a proof has a mistake, it’s probably not in one of those assertions, but rather in how you get from assertion X to assertion Y. It can often be really subtle.
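In a proof assistant, by contrast, every “therefore” has to be a step the checker can actually verify. A toy Lean sketch of what that looks like:

```lean
-- Each step must cite an explicit justification; nothing is left to the reader.
example (a b : Nat) (h : a = b) : a + 1 = b + 1 := by
  rw [h]   -- rewrite the goal using the hypothesis `a = b`
```

A gap like “clearly X implies Y” simply won't compile, which is exactly where subtle mistakes in human-checked proofs tend to hide.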
Disclaimer: I have a math undergraduate degree, and have played around with the theorem-proving language Coq, but haven’t worked with Lean.