Libraries of theorems and mathematical objects, with well defined abstractions that are ergonomic to apply in target use cases. Accompanied by good documentation, focused less on how the theorems are proven (how the functions are implemented), and more on what to use them for and how. With proper version control and package management.
I believe that all these practices could vastly improve collaboration and research velocity in maths, as much as or more than AI, although they are highly complementary. If maths is coding, AI will be much better at it and more applicable to it.
They both can be useful or harmful, due to their respective strengths and trade-offs.
PAC/statistical learning is good at needle-in-the-haystack problems, assuming the tail losses, simplicity bias, and corpus-representation issues are acceptable, you understand that it is fundamentally existential quantification, and you control for automation bias, etc.
Lean is a wonderful collection of concepts and heuristics, but due to Rice, Gödel, etc., it will not solve all problems with software development.
How Gödel's second incompleteness theorem shows that you can prove anything, without that proof being meaningful, is a lens into that.
It is horses for courses, and remember that even in sub-TC total functional programming, proving things about an arbitrary function is very hard, while constructing one is far more tractable.
Even those proofs don’t demonstrate semantic correctness. History is riddled with examples of people using powerful tools that elegantly explain flawed beliefs.
The 2009 crash and the Gaussian copula are an example.
Get all the value you can out of these tools, but use caution, especially in math, where superficially similar notions often have conflicting conventions, constraints, and assumptions.
Obviously, if your problem is ergodic with the Markov property, both will help, but automated theorem proving and PAC learning will never be a meta-theory of the other, IMHO.
What has Gödel incompleteness to do with that? We can just take any sentence φ as an axiom, and we’ve a trivial proof thereof.
That is not at all what it says.
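To spell out the "any sentence as an axiom" point, a minimal sketch in Lean itself (`φ` is just a placeholder proposition, the names are mine):

```lean
-- `φ` is a placeholder proposition taken as an axiom.
axiom φ : Prop
axiom φ_holds : φ                      -- assume φ itself ...

theorem φ_is_provable : φ := φ_holds   -- ... and a "proof" of it is trivial
```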
> They both can be useful or harmful,
If a proof is admitted into Lean, there is no doubt as to its truth. There is no way in which Lean can be construed as harmful.
> The 2009 crash and the Gaussian copula are an example.
There is nothing mathematical about the economics behind the 2009 crash. Such things are statistical measurements, which admit the possibility of failure, not mathematical conclusions that are demonstrably true.
He is also a serious research mathematician at the top of his game, considered by many to be one of the best mathematicians alive. This might be biased by the fact that he is such a good communicator that he is more visible than other similarly good mathematicians, but he is a Fields medallist all the same.
This is a recent talk where he discusses putting it together with LLMs (he's somewhat sceptical it'll be revolutionary for producing new mathematics any time soon).
> Libraries of theorems and mathematical objects, with well defined abstractions that are ergonomic to apply in target use cases. Accompanied by good documentation, focused less on how the theorems are proven (how the functions are implemented), and more on what to use them for and how.
How is any of that different from what we had in math before Lean?
It provides tools for discovering relevant proofs, theorems, etc. Toying around with Lean has actively taught me math that I didn't know before. It catches me any time I happen to fall into informal thinking and start making assumptions that aren't actually valid.
I don't know of any way to extract the abbreviation engine that Lean plugins use in the relevant editors for use in other contexts, but man, I'd honestly love it if I could type \all or \ne to get access to all of the mathematical Unicode characters trivially. Or even extend it to support other Unicode characters that I might find useful to type.
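Here's a made-up illustration of both points (the ∀ and ≠ below come from exactly those abbreviations):

```lean
-- A classic case of Lean catching an informal assumption: over `Nat`,
-- subtraction truncates at zero, so the "obvious" identity is actually false.
example : ¬ (∀ n m : Nat, n - m + m = n) :=
  fun h => absurd (h 0 1) (by decide)   -- counterexample: n = 0, m = 1
```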
I am curious to try out Lean to understand how definitions in Lean are able to operationally capture meaning in an unambiguous manner.
[1] https://www.amazon.com/Mathematica-Secret-World-Intuition-Cu...
No formalism is ontologically neutral in the sense that there is always an implied ontology or range of possible ontologies. And it is always important to make a distinction between the abstractions proper to the formalism and the object of study. A common fallacy involves reifying those abstractions into objects of the theory, at least implicitly.
Read Andrej Bauer on the many foundations of math, for example. Clearly he is a believer in "no one true ontology".
> Clearly he is a believer in "no one true ontology".
My point wasn't that you should aim for some kind of fictitious absence of ontological commitments, only that whatever language you use will have ontological commitments. Even the type judgement e:t has ontological implications, i.e., for the term e to be of type t presupposes that the world is such that this judgement is possible.
You can still operate under Fregean/Russellian presuppositions without sets. For example, consider the problem of bare particulars or the modeling of predicates on relations.
Similarly, in mathematics, formalization was driven by this concern, so that we wouldn't rely on potentially wrong intuition.
I am now in favor of formalizing all serious human discourse (probably in some form of rich fuzzy and modal logic). I understand the concern for definition, but in communication it's better to agree on a definition (which could be fuzzy) than to use two random definitions and hope they match. (I am reminded of the koan about Sussman and Minsky: http://www.catb.org/jargon/html/koans.html)
For example, we could formally define an airplane as a machine that usually has wings, usually flies. This would be translated into a formula in fuzzy logic which would take, for a given object, our belief this object is a machine, has wings and flies, and would return how much it is an airplane under some notion of usually.
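Concretely, a toy executable sketch of that scheme (the weights, the product-based "and", and all names here are my own invention):

```lean
-- Belief degrees live in [0, 1]; "and" is modeled by multiplication
-- (the product t-norm), "usually" by a weighted discount.

-- With weight w = 1 this behaves like plain P; with w = 0 the property
-- is ignored entirely.
def usually (w belief : Float) : Float := w * belief + (1.0 - w)

-- "An airplane is a machine that usually has wings and usually flies."
def airplaneDegree (isMachine hasWings flies : Float) : Float :=
  isMachine * usually 0.75 hasWings * usually 0.75 flies

#eval airplaneDegree 1.0 1.0 1.0   -- winged flying machine: 1.0
#eval airplaneDegree 1.0 0.0 1.0   -- wingless flying machine (a rocket?): 0.25
```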
I freely admit this approach wouldn't work for dadaist literary writers, but I don't want lawyers or politicians or scientists to be that.
https://metarationality.com/sort-of-truth
Formalism isn't the right tool for a lot of semi-factual fields like journalism or law. Even in business, numbers are of course used in accounting, but much of it depends on arbitrary definitions and estimates. (Consider depreciation.)
I agree 100% and feel like I have seen a lot of people in physics kind of fall into this trap. The model is not the thing itself.
> powering various math tools
I don't think going through math proofs as if they were computer programs is a good way to approach mathematics. In mathematics I think the important thing is developing a good intuition and mental model of the material. It's not a huge problem if the proof isn't 100% complete or correct, as long as the general approach is good. That's unlike programming, where you need a program to work 99.9% of the time and have to pay close attention to all the minute details.
> analyzing meta-math trends
I'm highly skeptical of the usefulness of this approach in identifying non-trivial trends. In mathematics the same kinds of principles can appear in many different forms, and you won't necessarily use the same language or cite the same theorems even though the parallels are clear to those who understand them. Perhaps LLMs with their impressive reasoning abilities can identify parallels but I doubt a simple program would yield useful insights.
> Basically, the process of doing math will become more efficient and hopefully more pleasant.
I don't see how his points make things more efficient. It seems like it's adding a bunch more work. It definitely doesn't sound more pleasant.
From https://news.ycombinator.com/item?id=44214804 sort of re: Tao's Real Analysis formalisms:
> So, Lean isn't proven with HoTT either.
Ultimately we want all of the math literature to become training material, but that would likely require automated techniques for converting it to formalized proofs. This would be a back-and-forth thing that would build on itself.
The reason to formalize mathematics is to automate mathematical proofs and the production of mathematical theory.
I know nothing of mathematics but found it fascinating, especially the idea that if outside information changes that affects your proof, you can have the Lean compiler figure out which lines of your proof need updating (instead of having to go over every line, which can take days or more).
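A toy illustration of that (a sketch with made-up names, not from the talk): if `threshold` below is later changed, only the proof that actually depends on it stops compiling, and the error points at that exact line.

```lean
def threshold : Nat := 3

-- Breaks (and gets flagged) if `threshold` is ever changed to something ≤ 2.
theorem below_threshold : 2 < threshold := by decide

-- Untouched by such a change; Lean doesn't make you revisit it.
theorem unrelated_fact : 1 + 1 = 2 := rfl
```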
This is kinda like asking, why write Clang when we already had GCC? Or, why make Python if we already have Perl?
It's good to have some competition for these things; Rocq, I believe, felt the heat and has also been doing some good things in recent years.
`The Rocq Prover (formerly named Coq) [...] `
Coq has always focused on proving program correctness, so it sees lots of use by computer scientists. It also does code extraction, so after you prove a program correct in Coq you can generate a fast version of that program without the proof overhead.
Say I'm wanting to formalize a proof. How do I know that what I'm writing is actually a correct formulation?
If it gets more complicated, this problem gets worse. How do I know the thing it is checking is actually what I thought it was supposed to check?
I guess this is a bit like when you write a program and you want to know if it's correct, so you write some tests. But often you realize your tests don't check what you thought.
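A made-up Lean example of that failure mode: the hypothesis `n < 0` can never hold for a natural number, so the theorem below is vacuously true. Lean happily accepts it, but it tells you nothing about `n * n`.

```lean
-- Vacuously true: no natural number is below zero, so this "checks"
-- nothing about the conclusion.
theorem looks_impressive (n : Nat) (h : n < 0) : n * n = 42 :=
  absurd h (Nat.not_lt_zero n)
```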
I don't see why they would.
If anyone is curious about the phenomenon, the second problem in session 7 at https://incredible.pm/ [ ∀x.(r(x)→⊥)→r(f(x)) ⟹ ∃x.r(x)∧r(f(f(x))) ] is one where the proof is straightforward, but you're unlikely to get to it by just fooling around in the prover.
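For the curious, here is one way that proof can go in Lean 4 (my own sketch; the theorem name is made up, and it assumes a nonempty domain, as is standard in first-order logic):

```lean
theorem session7_puzzle {α : Type} [Nonempty α] (f : α → α) (r : α → Prop)
    (h : ∀ x, (r x → False) → r (f x)) :
    ∃ x, r x ∧ r (f (f x)) := by
  -- Step 1: any `b` with `r b` already yields a witness.
  have key : ∀ b, r b → ∃ x, r x ∧ r (f (f x)) := by
    intro b hb
    cases Classical.em (r (f (f b))) with
    | inl hffb => exact ⟨b, hb, hffb⟩
    | inr hffb =>
      -- `h` applied at `f (f b)` gives `r (f (f (f b)))`.
      have h3 : r (f (f (f b))) := h _ hffb
      cases Classical.em (r (f b)) with
      | inl hfb => exact ⟨f b, hfb, h3⟩
      | inr hfb => exact absurd (h _ hfb) hffb
  -- Step 2: some element satisfies `r`: either `a` itself or, failing that, `f a`.
  have a : α := Classical.choice ‹Nonempty α›
  cases Classical.em (r a) with
  | inl ha => exact key a ha
  | inr ha => exact key (f a) (h a ha)
```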
I love the analogy in David Bessis's wonderful book Mathematica (nothing to do with Wolfram). We all know how to tie our shoes. Now write, in words and symbols, instructions that teach someone how you tie your shoes. That is what a proof is.
Often even people with STEM degrees confuse what mathematicians do with the visible product of it - symbols and words on a page. While the formalism of mathematics has immense value for precision, and provides a "serialization language" (to borrow a CS analogy), confusing the two would be akin to confusing a toaster with the toaster's manual, or shoelaces with the instructions for tying them.
Personally, at times, I struggled with the dual nature of mathematics; its extreme precision in meaning combined with vague and inconsistent use of symbols is challenging... Especially frustrating when learning something new and some symbols that you think you understand turn out to mean something else; it creates distrust towards maths itself.
The idea is that there's a small AI built into the VS Code extension that will fill in the details of proofs for you. Check it out if you're interested in this sort of thing!
I just released my treatise yesterday, at https://leanpub.com/elementsofprogramming
Elements of Programming presents programming as a mathematical discipline built on structure, logic, and proof. Written in the style of Euclid’s Elements, it defines computation through clear axioms, postulates, and propositions. Each book develops one aspect of programming as a coherent system of reasoning.
Book I establishes identity, transformation, and composition as the foundations of computation.
Book II introduces algebraic structures such as categories, functors, and monads.
Book III unites operational and denotational semantics to show that correctness means equivalence of meaning.
Book IV formalizes capability-based security and verification through invariants and confinement.
Book V connects type theory with formal assurance, explaining how types embody proofs.
Book VI extends these ideas into philosophy and ethics, arguing that software expresses human intention and responsibility.
Computerized mathematics is just another step in that direction.
They had a whole lot of computers, actually. But back then the "computers" were actual people whose job was to do computations with pen and paper (and a few very primitive machines).
The idea actually goes back to Leibniz, who was very much overoptimistic about computability, but already conceived of the idea of a logic machine, which could determine the truth value of any statement.