1. The development that you just did is equivalent to the corresponding object in Mathlib. Frequently, what is developed comes from concrete building blocks, whereas the corresponding definition in Mathlib might be a specialization of a complicated general class.
2. The basic notation and nomenclature for the object in Mathlib, which may differ.
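For a flavour of the gap (my own toy comparison, assuming a reasonably current Mathlib; lemma names drift over time): a hand-rolled epsilon-delta notion of continuity on ℝ shows up in Mathlib as a specialization of Continuous, which is defined for arbitrary topological spaces, with a bridge lemma recording the equivalence. Note the nomenclature difference too: dist a b rather than |a - b|.

    import Mathlib

    -- Mathlib's `Continuous` lives at the level of topological spaces;
    -- `Metric.continuous_iff` specializes it back to the concrete ε-δ form
    -- one would build by hand for a metric space such as ℝ.
    example (f : ℝ → ℝ) :
        Continuous f ↔
          ∀ b, ∀ ε > 0, ∃ δ > 0, ∀ a, dist a b < δ → dist (f a) (f b) < ε :=
      Metric.continuous_iff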
In PLT, we already had a flagship book (The Formal Semantics of Programming Languages by Winskel) formally verified (kind of, it's not a 1-to-1 transcription) using Isabelle (http://concrete-semantics.org) back in the mid 2010s, when tools began to be very polished.
IMHO, if someone is interested in theorem proving, that's a much simpler starting point. Theorems in analysis are already pretty hard on their own.
I will also mention Software Foundations in Rocq (perhaps there is a Lean port). I worked through some of the first parts of it and found it quite pleasant.
(Refactoring is of course made easier because the system will always keep track of what follows logically from what. You don't have that when working with pen and paper, so opportunities to rework things are very often neglected.)
It is a natural question also to ask whether it makes sense to teach the Mathlib, "maximum generality" version of real analysis, or at least something approaching that, in an academic course. Same for any other field of proof-based math, of course.
I just feel that the overhead code presents is massive, since it often does not adhere to any semblance of style. I say this as someone who has had to parse through others’ mathematical papers that were deemed incomprehensible. Code is 10x worse since there are virtually no standards with regard to comprehensibility.
I believe that's the experience of faculty who've tried as well - it's fine for advanced students, but a waste of instructional time for the average pupil.
Previously, the only feedback students could receive would be from another human such as a TA, instructor, or other knowledgeable expert. Now they can receive rapid feedback from the Lean compiler.
In the future I hope there is an option for more instructive feedback from Lean's compiler in the spirit of how the Rust compiler offers suggestions to correct code. (This would probably require the use of dedicated LLMs though.)
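As a toy illustration of that feedback loop (my own example, not from the book): Lean checks each step as you type, so a wrong or incomplete proof surfaces the error and the unsolved goal in the editor immediately.

    -- `rfl` closes this goal; replace it with a wrong tactic and the editor
    -- immediately shows the error together with the open goal.
    example (n : Nat) : n + 0 = n := rfl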
But. I’m also thoughtful about proving things — my own math experience was decades ago, but I spent a lot of ‘slow thought’ time mulling over whatever my assignments were, trying things on paper, soaking up second hand smoke and coffee, your basic math paradise stuff.
I wonder if using Lean here could lead to some flailing / random checking / spewing. I haven’t worked with Lean much, although I did do a few rounds with Coq five or ten years ago; my memory is that I mostly futzed with it and tried things.
Upshot - a solver might be great for a lot of things. But I wonder if it will cut out some of this slow thoughtful back-and-forth that leads to internalization, conceptualization, and sometimes new ideas. Any thoughts on this?
What surprised me is that students do learn some patterns of proof properly, but only if you make sure they are explicitly exposed by the proof assistant (so here, too, more automation means less learning).
You can find a lot of the work summarized in Jelle’s PhD thesis at https://research.tue.nl/nl/publications/waterproof-transform...
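A small contrast of the kind described above (my own illustration, not taken from the Waterproof work): an explicit proof keeps the induction pattern visible to the student, while an automated proof of the same statement hides it.

    -- Explicit proof: the induction pattern is exposed step by step.
    example (n : Nat) : 0 + n = n := by
      induction n with
      | zero => rfl
      | succ k ih => rw [Nat.add_succ, ih]

    -- Automated proof: same theorem, pattern hidden behind the simplifier.
    example (n : Nat) : 0 + n = n := by
      simp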
This is how Acorn works: when a proof fails but you are "close", you get suggestions in VS Code like:
Try this:

    reduce(r.num, r.denom) = reduce(a, b)
    cross_equals(a, b, r.num, r.denom)
    r.denom * a = r.num * b
It doesn't use LLMs, though, there's a small local model running inside the VS Code extension. One day hopefully that small local model can be superhumanly strong. For more info: https://acornprover.org/docs/tutorial/proving-a-theorem/

Why no HoTT, though?
"Should Type Theory (HoTT) Replace (ZFC) Set Theory as the Foundation of Math?" https://news.ycombinator.com/item?id=43196452
Additional Lean resources from HN this week:
"100 theorems in Lean" https://news.ycombinator.com/item?id=44075061
"Google-DeepMind/formal-conjectures: collection of formalized conjectures in lean" https://news.ycombinator.com/item?id=44119725
A lot less work has gone into making HoTT theorem provers ergonomic to use. Also, the documentation is a lot more sparse. The benefits of HoTT are also unclear - it seems to only save work when dealing with very esoteric constructs in category theory.
This is a bit like asking the developers of a JavaScript framework why they didn't write a framework for Elm or Haskell.
It's also a bit of a controversial topic in formalized mathematics. See the following relevant comments by Kevin Buzzard (an expert in Lean and also in the sort of abstract nonsense that might arguably benefit directly from HoTT) - Links courtesy of https://ncatlab.org/nlab/show/Kevin+Buzzard
* Is HoTT the way to do mathematics? (2020) https://www.cl.cam.ac.uk/events/owls/slides/buzzard.pdf ("Nobody knows because nobody tried")
* Grothendieck's use of equality (2024) https://arxiv.org/abs/2405.10387 - Discussed here https://news.ycombinator.com/item?id=40414404
Sort of a weird question to ask imo.
Terence Tao has a couple of analysis textbooks, and this is his Lean companion to the first of those books. He doesn’t have a type theory textbook, so that’s why no homotopy type theory - it’s not what he’s trying to do at all.
Also, there's a new textbook coming out later this year that's a more modern update to the original HoTT book [2] which also has an Agda formalization. [3]
[1] https://martinescardo.github.io/HoTT-UF-in-Agda-Lecture-Note...
[2] https://www.cambridge.org/core/books/introduction-to-homotop...
A couple of immediate worries:
1. Core analysis results in Mathlib work with limits in a general, unified way using the notion of filters. There are nevertheless some ad-hoc specializations of these results to their epsilon-delta form. I presume that Tao's Analysis uses a more traditional epsilon-delta approach.
2. Mathlib moves fast and breaks things. Things get renamed and refactored all the time, so downstream repos need continual maintenance.
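Regarding the first worry, here is roughly what the two styles look like (a sketch assuming a reasonably current Mathlib; exact lemma names drift as the library is refactored): the general statement is phrased with filters, and an ad-hoc bridge lemma such as Metric.tendsto_atTop recovers the epsilon-N form.

    import Mathlib
    open Filter Topology

    -- Mathlib's filter-based statement of "u n → L as n → ∞", specialized
    -- back to the traditional ε-N form by a bridge lemma.
    example (u : ℕ → ℝ) (L : ℝ) :
        Tendsto u atTop (𝓝 L) ↔ ∀ ε > 0, ∃ N, ∀ n ≥ N, dist (u n) L < ε :=
      Metric.tendsto_atTop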
https://link.springer.com/chapter/10.1007/978-981-19-7261-4_...
I already posted the following in the private Discord server for the book, but this seems like a good public space to share it as well, in case anyone here finds it useful:
- https://github.com/cruhland/lean4-analysis which pulls from https://github.com/cruhland/lean4-axiomatic
- https://github.com/Shaunticlair/tao-analysis-lean-practice
- https://github.com/vltanh/lean4-analysis-tao
- https://github.com/gabriel128/analysis_in_lean
- https://github.com/mk12/analysis-i
- https://github.com/melembroucarlitos/Tao_Analysis-LEAN
- https://github.com/leanprover-community/NNG4/ (this one does not follow Tao's book, but it's the Lean4 version of the natural numbers game, so has very similar content as Chapter 2)
- https://github.com/djvelleman/STG4/ (similar to the previous one, this is the Lean4 set theory game, so it's possibly similar content to Chapter 3; however, in https://github.com/djvelleman/STG4/blob/main/Game/Metadata.l... I see "import Mathlib.Data.Set.Basic", so this seems to just import sets from Mathlib rather than defining them anew and setting down axioms; this approach might mean that Lean will "know" too much about set theory, which is not good for our purposes)
- https://gist.github.com/kbuzzard/35bf66993e99cbcd8c9edc4914c... -- for constructing the integers
- https://github.com/ImperialCollegeLondon/IUM/blob/main/IUM/2... -- possibly the same file as above
- https://github.com/ImperialCollegeLondon/IUM/blob/main/IUM/2... -- for constructing the rationals
- https://lean-lang.org/theorem_proving_in_lean4/axioms_and_co... -- shows one way of defining a custom Set type
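A minimal sketch in that spirit (my own toy version, not the code from that page): a set over α is just a predicate on α, and the usual operations are defined directly on predicates.

    -- Toy custom Set type: a set is a predicate, membership is application.
    def MySet (α : Type) := α → Prop

    namespace MySet

    def Mem (s : MySet α) (x : α) : Prop := s x
    def union (s t : MySet α) : MySet α := fun x => s x ∨ t x
    def inter (s t : MySet α) : MySet α := fun x => s x ∧ t x

    end MySet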
Now as to why he picked Lean a few years ago as his go-to theorem prover, I do think that would be an interesting backstory. As far as I know he hasn't written anything explicit about it.
[1] E.g., https://terrytao.wordpress.com/2023/11/18/formalizing-the-pr...
https://leanprover-community.github.io/mathematics_in_lean/i...
It doesn't get to linear algebra till chapter 9, and it tends to be cumulative, but you could try the first 4 chapters to get the basics and jump ahead to see how you get on
I feel our current system's focus on endless manual calculations that seem so useless is boring and tedious and makes people hate math.