Treating complex types, structure, arrangement, entanglement, etc. as if they have the same realness or priority as primitive types is a mistake.
It's like pretending recursion is physically real (it isn't) when it smuggles in the call stack.
But even Grothendieck topology (total and co-total categories) requires sets with a least upper bound (join) and a greatest lower bound (meet).
The problem is that you get semi-algorithms, which will only halt if the answer is true.
IMHO it is better to think of recursion as enabling realizability and forcing fixed points. The Y combinator can be used for recursion because it is a fixed-point combinator.
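A minimal Haskell sketch of that idea, using the standard fix from Data.Function (Haskell's types rule out the literal untyped Y combinator, but fix plays the same role):

    import Data.Function (fix)

    -- fix f is a fixed point of f: fix f = f (fix f).
    -- Recursion falls out of taking the fixed point of a non-recursive "step" function.
    factStep :: (Integer -> Integer) -> (Integer -> Integer)
    factStep recurse n = if n <= 1 then 1 else n * recurse (n - 1)

    factorial :: Integer -> Integer
    factorial = fix factStep

    main :: IO ()
    main = print (factorial 10)  -- 3628800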
From descriptive complexity, FO+LFP = SO-Horn = P (SO-Krom captures NL, not P), while FO by itself is AC^0 IIRC.
Using modal logic + relations is another path.
The problem I have found is that those paths tend to require someone to at least entertain intuitionistic concepts, and that is a huge barrier.
Mostly people tend to use type theory to convert semantic properties into trivial semantic properties for convenience.
Blackburn/Rijke/Venema's “Modal Logic” is the most CS friendly book for that if you want to try.
But yes: semantic, run-time, extensional, etc., from Rice/Gödel/etc.
And that, if you think about it a bit, might help you understand what recursion is, because your current understanding seems a bit off.
The only time you need a "call stack" with recursive calls is if the calls need to ultimately return to the call site and continue executing. It's still recursive.
By contrast, the model in which every function call pushes a return address, whether or not it's really needed, is a kind of mistake that comes from not understanding recursion.
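To make the distinction concrete, here's a small Haskell sketch (function names are mine): the first version still has work to do after each recursive call and so needs a frame to return to, while the second is tail recursive and can run in constant stack space, yet is no less recursive.

    -- Stack-using recursion: the addition happens after the inner call returns,
    -- so each call needs somewhere to come back to.
    sumTo :: Integer -> Integer
    sumTo 0 = 0
    sumTo n = n + sumTo (n - 1)

    -- Tail recursion: nothing is left to do after the recursive call,
    -- so no return address is needed and the current frame can be reused.
    sumTo' :: Integer -> Integer
    sumTo' = go 0
      where
        go acc 0 = acc
        go acc n = go (acc + n) (n - 1)

    main :: IO ()
    main = print (sumTo 100, sumTo' 100)  -- (5050, 5050)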
To give a non-cynical answer: a good reason to look at type theory - if you’ve been exposed to computer science - is that it treats syntax and syntactic manipulation as foundational and can be viewed as the branch of mathematics that ties syntax together with the notion of computation.
And anyone who thinks their own area doesn't have jargon has just forgotten how to recognize it.
I could get answers about the national anthem, about Camden Yards, about good restaurants and bars. Those are "tourist" answers.
Or I could get answers about job availability and cost of housing. Those are "move there" answers.
It seems to me that this article (if you can call it that) gives mostly "tourist" answers, not "move there" answers. Well, that's all right in a way - for an academic topic, you probably have to get tourists before you get people moving there. But the problem is that mathematicians have a large number of areas that they could visit as tourists, and nothing here tells them why type theory is better to visit than, say, algebraic topology.
And that doesn't solve the original complaint. Why are so few people working in type theory? Because more of the action is in set theory and category theory, and type theory offers too little that the others don't, so nobody's going to switch.
I mean, most mathematicians aren't really in foundations anyway. They're in differential equations or prime numbers or complex analysis or algebraic topology or something like that, and making your foundation type theory instead of set theory makes no difference whatsoever to those people. Only the people working on foundations care, and as I said, most of those people already work in set theory or category theory, and type theory doesn't give them a good enough reason to uproot and move.
Type theory’s stomping ground is the foundations of computation. I’m guessing that if you sampled a theoretical comp. sci. audience, type theory would “win” the popularity contest.
Somewhat similar to intuitionism and constructivism - interesting to people interested in computability but considered almost silly or at least niche by “mainstream” math theoreticians.
This was a source of minor culture clash for me: I spent some time over the past couple of years working through some Coq-based educational materials (mainly Software Foundations, led by Pierce) and talking to one of the grad student coauthors about his (PL theory/type theory) research. So I rather strongly internalized from this material that "we" don't automatically assume the Law of the Excluded Middle, but would at least warn people if we had to use it (as it makes proofs nonconstructive: you no longer have a method for calculating the objects you are talking about). After this, I commented on something on Scott Aaronson's blog and happened to say something like "wait, but doesn't that require excluded middle, and you didn't explicitly say so?", and Aaronson was rather confused, basically saying something like "we normal mathematicians assume excluded middle all the time, and we don't say so".
So yeah, there is a strong trend in favor of those things in some parts of type theory, but other mathematicians are then surprised that anyone would care. (And Aaronson in particular is someone very interested in and expert about computability, though one might say from a more traditional core computer science perspective.)
It's an intuitionistic logic theorem that the excluded middle is "irrefutable", that is, that you can never exhibit a counterexample. You can prove this constructively! This leads to something called the double negation translation, where all theorems that could be proved using excluded middle can be proven constructively to be "irrefutable" (rather than "true"), that is, that it would be impossible for an exception or counterexample to the theorem to exist. So you will get ~~P where classical logic could establish P (and deducing P from ~~P is equivalent to excluded middle).
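As a concrete illustration, here's that irrefutability result in Lean 4, proved with no classical axioms (a minimal sketch; the theorem name is mine):

    -- ¬¬(P ∨ ¬P): a counterexample to excluded middle can never exist.
    theorem em_irrefutable (P : Prop) : ¬¬(P ∨ ¬P) :=
      fun h => h (Or.inr (fun p => h (Or.inl p)))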
One argument for not assuming excluded middle in proofs is then that the deductive system automatically shows you exactly which conclusions are constructive and which are not constructive, and you can interpret your deductive system in such a way that you're getting the same answers, just with this additional information or metadata about whether the answer was obtained constructively or nonconstructively. If you prove P, you did it constructively (you can exhibit the objects that P talks about, or extract an algorithm for locating or identifying them); if you prove ~~P, you did it nonconstructively, if "it" is considered as "proving P itself" (your argument shows that it would be impossible for the objects in question definitively not to exist, but you don't necessarily know how to find or calculate them).
So one can say, again, that in working this way you're learning the same stuff about your underlying subject matter, but with additional information attached at the end!
This is described in a more formal way at
I actually find that category theory is remarkably simple, at least at the level where it applies to day-to-day programming concepts. The one major benefit I got from it was that it helped me reason about high-level/abstract structure better (which helps my architectural designs/thinking).
[1] https://youtube.com/playlist?list=PLbgaMIhjbmEnaH_LTkxLI7FMa...
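To make the "day-to-day programming" connection concrete, here's a tiny Haskell sketch using only the standard Prelude: the Functor class is the programming-level face of a categorical functor, and the two laws below are exactly preservation of identity and composition.

    -- Functor laws (expected to hold, though the compiler doesn't check them):
    --   fmap id      == id
    --   fmap (g . f) == fmap g . fmap f

    doubleAll :: Functor f => f Int -> f Int
    doubleAll = fmap (* 2)

    main :: IO ()
    main = do
      print (doubleAll [1, 2, 3])  -- [2,4,6]   (the list functor)
      print (doubleAll (Just 10))  -- Just 20   (the Maybe functor)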
"Type Theory and Functional Programming" by Simon Thompson (PDF available via quick search)
First 120 pages of "Type Theory and Formal Proof" by Rob Nederpelt / Herman Geuvers
For some early stuff on dependent types (also covered in the above), check out "Intuitionistic Type Theory" by Per Martin-Löf
"Computation and Reasoning: A Type Theory for Computer Science" by Zhaouhui Luo
"Type-Driven Development with Idris" by Edwin Brady
"Gentle Introduction to Dependent Types with Idris" by Boro Sitnikovski
"Types and Programming Languages" by Pierce
"Practical Foundations for Programming Languages" by Robert Harper (or any of his YouTube videos re/Type Theory)
"The Little Typer" by Friedman and Christiansen
"Modal Homotopy Type Theory: The Prospect of a New Logic for Philosophy" by David Corfield (just took this one to OBX North Carolina to enjoy on my beach vacation last week! Got some salt water in those pages. Also speaks to Category Theory.)
For Category Theory:
"The Joy of Abstraction" (2023) by Euginia Cheng
"Basic Category Theory for Computer Scientists" by Benjamin C. Pierce
"Conceptual Mathematics: A first introduction to categories" by Lawvere and Schanuel
"Categories for the Working Philosopher" by Landry
I think the answer by (perhaps infamous) Paul Taylor in this mathoverflow post[1] is a clearer expression of the positions of those who want to jettison traditional set theory as the foundation of mathematics.
"The crucial issue is whether you are taken in by the Great Set-Theoretic Swindle that mathematics depends on collections (completed infinities)"
Yes, it's important to remember that everything labeled "infinity" in math doesn't "exist-exist" but rather is some combination of unending process and arbitrary labeling (see Skolem's paradox etc). But that doesn't mean such nominally completed infinities aren't useful or even necessary for a human being to grasp mathematics.
Anyway, for this debate it's worth reading all of Taylor's comment (which isn't to say I share his position, but it's good to understand where the "finitists" are coming from).
[1] https://mathoverflow.net/questions/8731/categorical-foundati...
At around 2000, I studied applied mathematics. I was curious about automated theorem proving, but I didn't understand the language (being, like most mathematicians, trained in proving with classical logic rather than type theory). Years later I discovered Haskell, which became a "gateway drug" to type theory. I wish something like Lean had existed 25 years ago, when I was a student.
I think fully formal methods in mathematics are the future, and every math student should learn a bit of Lean (or a similar tool, but IMHO Lean has the best tutorials, aimed at people with a traditional education). Hence, everybody should learn the basics of type theory.
I suggest you look at the simply typed lambda calculus; it gives a basic rationalization of types in functional programming.
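To give a feel for it, here's a minimal Haskell sketch of a type checker for the simply typed lambda calculus (the datatype and function names are mine, not from any particular library):

    -- Types: booleans and function types.
    data Ty = TBool | TArr Ty Ty deriving (Eq, Show)

    -- Terms: variables, typed lambdas, application, and boolean constants.
    data Term
      = Var String
      | Lam String Ty Term
      | App Term Term
      | TTrue
      | TFalse
      deriving Show

    type Ctx = [(String, Ty)]

    -- typeOf returns the type of a term, or Nothing if the term is ill-typed.
    typeOf :: Ctx -> Term -> Maybe Ty
    typeOf ctx (Var x)     = lookup x ctx
    typeOf ctx (Lam x t b) = TArr t <$> typeOf ((x, t) : ctx) b
    typeOf ctx (App f a)   = do
      TArr dom cod <- typeOf ctx f      -- the function must have an arrow type
      arg <- typeOf ctx a
      if arg == dom then Just cod else Nothing
    typeOf _ TTrue  = Just TBool
    typeOf _ TFalse = Just TBool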
In OOP, a ‘class’ is really two things:
- a factory for objects that hides some internal state of the object;
- a static construct that gives a name to some (‘public’) subset of the behaviour of those objects, and which a type-checker can then use to ensure that the use of the objects adheres to the intended contract.
They're both mechanisms for encapsulation of internal behaviour (that may change at any time) while providing a blessed public API that you promise will stay the same (or at least only change in certain ways, under a strict discipline like semver or ‘we'll tell you on the mailing list’). The difference is that the former is runtime (dynamic) behaviour, while the latter is compile-time (static) behaviour. The former maps to functions (or, with state, closures), not types, in functional languages (see ‘objects are a poor man's closures, and vice versa’ [1]), and the latter… is types :) The same typing concept that ensures correct interaction with objects can also be used to ensure correct interaction with functions or other data types, though since functional languages are easier to type than OO languages, strictly-typed functional languages tend to have stronger type systems that capture more properties, like effects.
[1]: https://people.csail.mit.edu/gregs/ll1-discuss-archive-html/...
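In that spirit, here's a tiny Haskell sketch of the ‘objects are a poor man's closures’ point from [1] (the names are mine): the hidden IORef plays the role of a private field, and the two returned actions are the blessed public API.

    import Data.IORef

    -- A "counter object" built from a closure over hidden mutable state.
    makeCounter :: IO (IO Int, IO ())  -- (read current value, increment)
    makeCounter = do
      ref <- newIORef 0
      let get  = readIORef ref
          incr = modifyIORef ref (+ 1)
      pure (get, incr)

    main :: IO ()
    main = do
      (get, incr) <- makeCounter
      incr
      incr
      get >>= print  -- prints 2; the state is reachable only through the closure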
From one perspective, there's a spectrum imperative <-> procedural <-> OOP <-> functional that is about how tightly scoped state changes are. In that view, functional languages are just the natural next step: if the whole point of OOP is to scope state, a functional language is just an OOP language with _even more_ tightly scoped state. The philosophy (uncontrolled state changes are scary and hard to reason about: add language constructs that allow the reader to see at a glance how far they have to look for the knock-on effects of a given state change) is the same, just the implementation is different.
What I can't tell you is if data types in computer programs are related to type theory.
When I hear “type theory” - I think Martin-Löf, Girard’s Proofs and Types, Curry-Howard, etc.
Which is not a “theory of programming language types” - although the subjects are not completely isolated from each other.
Then again, it’s been decades since I actually studied this stuff and haven’t really followed either subject since.
As far as the latter is concerned (programming language type systems), as an engineer, I’ve seen nothing in mainstream programming languages that advances the state-of-the-art compared to what was known back in the 1980s - when Hindley-Milner became mainstream and Luca Cardelli wrote the seminal survey papers on the subject.
I'm far from an expert on this, but my understanding is that there is some relation between the two notions. But I couldn't even begin to quantify the strength of that relation, or explain any of the details. Which is sort of what led me down this path in the first place. Some random thing I read got me thinking that deeper knowledge of type theory could be useful from an AI perspective, and that led me down the rabbit hole of looking for info on type theory. And from what I've read so far, it seems like type theory (more so than, say, set theory, or maybe even category theory) is "closer" in some sense to computer science and does in some sense relate to how types (as used in programming languages) can be constructed and reasoned about.
I have a lot more to learn, needless to say. :-)
Non-OOP statically typed languages are basically just saying "ok that's fine, but there's no need to attach state or behaviour to types".
- Basic compound data structures (product types). Most imperative languages don't have sum types, which is the root of a lot of problems in that style (see the sketch after this list)
- Interfaces (some more enlightened imperative languages separate these out) and the adapter/impl/typeclass/what-have-you that ties together an implementation of that interface for that data structure
- Mutable references - some way of associating an identity or name with a value that can be switched for a different value accessed via the same identity/name
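A minimal Haskell sketch of those three ingredients (the names are illustrative, not from any particular library):

    import Data.IORef

    -- 1. Compound data: Shape is a sum of two products.
    data Shape
      = Circle Double       -- radius
      | Rect Double Double  -- width, height

    -- 2. An interface (typeclass) and the instance that ties it to Shape.
    class HasArea a where
      area :: a -> Double

    instance HasArea Shape where
      area (Circle r) = pi * r * r
      area (Rect w h) = w * h

    -- 3. A mutable reference: one name, different values over time.
    main :: IO ()
    main = do
      current <- newIORef (Circle 1.0)
      writeIORef current (Rect 2.0 3.0)
      shape <- readIORef current
      print (area shape)  -- 6.0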
Also, there's always some kind of minimal size required for spaces to be interesting for people to participate in. And network effects.