The main author Michael Clarkson also started a similar lecture series on Software Foundations using Rocq (Coq)[1]. Not sure if that's still updated though.
Zero evidence that this is the case.
I can tell you that debugging a compiler written in ML is a dumpster fire compared to debugging a compiler written in C++. I'd take C++ over any FP language for compilers any day of the week.
It’s all feels
People who like rejecting this kind of stuff as 'feels' are ironically also being guided by their 'feels'.
There's no scientific experiment you could run that proves, or disproves, that type systems lead to more correctness. It's a thing you cannot possibly know.
This is why I say that people who keep denying this are vibing based on their feels. Instead of asking for the evidence they just keep saying there can't possibly be any evidence.
Better spend some time having fun with std::variant, visit, and ranges transformers.
Depends on the correctness requirements in question. But overall you are 100% correct about this.
FP, among other aspects, enables and promotes certain ways of reasoning, for instance when mutation is avoided, that can make it relatively easy to verify certain kinds of properties, e.g. via induction proofs and other kinds of mathematical proofs.

However, for other kinds of correctness properties, imperative programming can be easier to reason about than FP. One possible example is the implementation of algorithms, where for instance an implementation of quicksort in C is likely to be more concise and clearer than a "true" quicksort implementation in Haskell.

Another possible example (if one assumes that FP requires garbage collection) is hard real-time systems, for instance some types of medical devices, where even though some kinds of garbage collection may be viable, approaches like forgoing dynamic memory allocation entirely (no malloc, no reference counting, no garbage collection, etc.) may be easier to reason about with regard to achieving the correctness requirement of hard real-time.
Overall, I definitely agree that ML and FP are not the best for achieving correctness in all cases.
I personally like to pick and choose between FP and other approaches, and mix them in different ways dependent on the project or task at hand. Like, a purely functional interface with internal mutation in the implementation for optimization. Or, some mutable API that uses FP for some aspects of the implementation where FP is easier to reason about.
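A minimal sketch of that first style, assuming a hypothetical memoized `fib`: the function looks pure to its callers, but hides a mutable cache inside for optimization.

```ocaml
(* Hypothetical sketch: pure interface, mutable internals.
   fib looks like an ordinary int -> int function from the outside,
   but memoizes results in a hidden Hashtbl for speed. *)
let fib : int -> int =
  let cache = Hashtbl.create 16 in
  let rec go n =
    if n < 2 then n
    else
      match Hashtbl.find_opt cache n with
      | Some v -> v
      | None ->
          let v = go (n - 1) + go (n - 2) in
          Hashtbl.add cache n v;
          v
  in
  go

let () = assert (fib 40 = 102334155)
```

Callers never see the `Hashtbl`, so they can reason about `fib` as if it were pure.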
> I can tell you that debugging a compiler written in ML is a dumpster fire compared to debugging a compiler written in C++. I'd take C++ over any FP language for compilers any day of the week.
For larger compilers for some requirements, I could for some projects imagine that this is true. But, for smaller compilers with relatively few requirements, the pattern matching and tagged union features of modern ML languages are very popular, and I like having access to those features when writing smaller compilers. If you do not mind spending the time to expound on this topic, I would very much like to know more. Or, maybe some links, like blog posts, that discuss this topic. I am genuinely curious. A blog post could also be shared elsewhere, instead of just lost in some Hacker News discussion. (Maybe I should start blogging myself).
Also, Hacker News/Ycombinator is censorship-infested.
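To make concrete why those features are popular for small compilers, here is a hypothetical toy example: a tiny expression AST as a tagged union, with a constant-folding pass written by pattern matching (all names invented for illustration).

```ocaml
(* Hypothetical toy AST for a small compiler: a tagged union plus
   a constant-folding pass via pattern matching. *)
type expr =
  | Int of int
  | Add of expr * expr
  | Mul of expr * expr

let rec fold (e : expr) : expr =
  match e with
  | Int _ -> e
  | Add (a, b) -> (
      match (fold a, fold b) with
      | Int x, Int y -> Int (x + y)
      | a', b' -> Add (a', b'))
  | Mul (a, b) -> (
      match (fold a, fold b) with
      | Int x, Int y -> Int (x * y)
      | a', b' -> Mul (a', b'))

(* (1 + 2) * 4 folds to 12 *)
let () = assert (fold (Mul (Add (Int 1, Int 2), Int 4)) = Int 12)
```

The compiler's exhaustiveness check also warns if a pass forgets a constructor, which is a big part of the appeal.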
But larger compilers tend to converge on two architectural features, due to the sense we compiler folk have that they lead to better maintainability. I think there is weak empirical evidence to support these choices (weak because you can't run a repeatable experiment to prove it; it's just the experience we have).
1. Drop the AST as soon as possible and convert to an IR like a CFG with SSA or something like that. This allows for easier transforms, easier pattern matching, and easier analysis. It’s easier to get the compiler right in CFG than AST and it takes less code to do it.
2. Mutation. Compilers mutate the IR in place. This makes it easier to decouple transforms from one another and encourages writing finer grained passes that just do one thing well.
So you end up with the heart of the compiler being a CFG+SSA IR that is mutable. FP doesn’t help you do that, but OOP does help a lot. And you need state/mutation, ie imperative programming.
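For what it's worth, even OCaml supports this imperative style via mutable record fields. A minimal sketch of what an in-place IR pass can look like (all names hypothetical):

```ocaml
(* Hypothetical sketch of an in-place IR pass: a basic block holds a
   mutable list of instructions, and a peephole pass rewrites it
   directly instead of rebuilding the structure. *)
type instr =
  | Const of int  (* load a constant *)
  | AddI of int   (* add an immediate *)

type block = { mutable instrs : instr list }

(* Peephole pass: delete no-op "add 0" instructions, mutating in place. *)
let remove_add_zero (b : block) =
  b.instrs <- List.filter (fun i -> i <> AddI 0) b.instrs

let () =
  let b = { instrs = [ Const 1; AddI 0; AddI 2 ] } in
  remove_add_zero b;
  assert (b.instrs = [ Const 1; AddI 2 ])
```

Because the pass mutates the block rather than returning a new one, passes stay decoupled: each takes the IR, fixes one thing, and leaves everything else untouched.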
Do you simply dislike the OCaml syntax or is it some particular quirk?
> the variable scope thing
The what thing? Variables are just lexically scoped, are you referring to shadowing?
(When you look at the blog, you'll see that the last blog update was in 2018, and you might conclude that the project is dead. But it's actually not -- their Github repo is still getting new commits!)
a = 10; { a = 20; } print(a)
This would print 10. Something like that. I just remembered that the first time I encountered this, I thought "this is going to be one of those things where I will unnecessarily trip over" and closed the page.
let a = 10;
{
let a = 20;
}
print(a);
And even clearer in OCaml:

let a = 10 in
begin
let a = 20 in
()
end;
print a
BTW you should also close the page on Rust then, which also has OCaml-like variable shadowing (and many other languages use very similar forms of shadowing, such as local variables shadowing object fields in Java/C#).
teiferer•6mo ago
skybrian•6mo ago
Edit: but in this case, apparently the book was written for a course at Cornell where they teach both functional and imperative programming using the same language.
derriz•6mo ago
Having worked on a code-base written in a lazy (Haskell-like) FPL whose architecture relied heavily on laziness, the least pleasant issue to deal with was sudden and obscure explosions in memory consumption.
Which would of course have a big impact on performance so maybe this was your point also.
hermanhermitage•6mo ago
OneDeuxTriSeiGo•6mo ago
So my guess would be less that Haskell is not these things (nor couldn't it be) but rather that OCaml has had the external forces necessary to optimise for these things above all else.
AnimalMuppet•6mo ago
sestep•6mo ago
KiranRao0•6mo ago
src: https://signalsandthreads.com/future-of-programming/
teiferer•6mo ago
dev_hugepages•6mo ago
See: https://wiki.haskell.org/index.php?title=Thunk https://wiki.haskell.org/index.php?title=Performance/Strictn...
shawn_w•6mo ago
Plus, though both languages allow defining new infix operators, ocaml coders are much more restrained about it than haskellers, and I hate having to figure out what >>>>$=<<< does when it shows up in code.
Quekid5•6mo ago
:)
faldor20•6mo ago
- Pragmatism: The OCaml community has a stronger focus on practical real projects. This is shown by the kind of packages available in the ecosystem and the way those packages are presented. (A number of Haskell packages I've tried to use often seem to be primarily intellectual pursuits with little documentation on actually using them.)
- Simplicity: Haskell does have some amazing features, but it has so many different ways to do things, so many compiler flags, that code can look vastly different from one codebase to another. It's kind of the C++ of FP. OCaml is definitely more cohesive in its approach.
- Tooling: Last I used Haskell, the whole ecosystem was pretty rough, with two package managers and a lot of complexity in just setting up a project. OCaml is not at the level of Rust or other modern languages, but it's definitely a step above, I'd say.
Syzygies•6mo ago
Everyone talks a good line about more powerful, expressive programming languages till they need to put in the work. Ten years' effort to program like ten people for the rest of your life? I'm 69 now; I can see how such an investment pays off.
I moved to OCaml. Studying their library source code is the best intro ever to functional programming, but I felt your "all the way" pull and switched to Haskell. (Monads make explicit what a C program is doing anyway. Explicit means the compiler can better help. What it comes down to is making programming feel like thinking about algebra. This is only an advantage if one is receptive to the experience.)
I'm about to commit to Lean 4 as "going all the way". Early in AI pair programming, I tested a dozen languages including these with a challenging parallel test project, and concluded that AI couldn't handle Lean 4. It keeps trying to write proofs, despite Lean's excellence as a general purpose programming language, better than Haskell. That would be like asking for help with Ruby, and AI assuming you want to write a web server.
I now pay $200 a month for Anthropic Max access to Claude Code Opus 4 (regularly hitting limits) having committed to Swift (C meets Ruby, again not just for macOS apps, same category error) so I could have first class access to macOS graphics for my 3-manifold topology research. Alas, you can only build so high a building with stone, I need the abstraction leverage of best-in-category functional languages.
It turns out that Opus 4 can program in Lean 4, which I find more beautiful than any of the dozens of languages I've tried over a lifetime. Even Scheme with parentheses removal done right, and with far more powerful abstractions.
pmarreck•6mo ago
I'm 53, impressed that you're still going at it at 69!
Syzygies•6mo ago
Lean 4 is a better supported effort, with traction among mathematicians because of the math formalization goal.
I have more reasons to want to learn Lean 4. Peel away their syntax, and Lean 4 proofs are the word problem for typed trees with recursion. I find the reliance of AI on artificial neurons as arbitrary as so many advanced life forms on Earth sharing the same paltry code base for eyes, a nose, a mouth, and GI tracts. Just as many physicists see life as inevitable, in a billion runs of our simulation I'm sure AI would arise based on many foundations. Our AI fakes recursion effectively using many layers, but staring at the elementary particles that make up Lean proofs one sees a reification of thought itself, with recursion a native feature. I have to believe this would make a stronger foundation for AI.
I don't get that same rush looking at Idris. Using Lean 4 for general purpose programming? It must be good training.
pmarreck•6mo ago
The simulation hypothesis has a flaw IMHO: if it is modelable and therefore computable, it may be subject to the halting problem.
yodsanklai•6mo ago
That being said, Haskell is pretty nice as well but I'd pick OCaml for real world stuff.
One thing that bothered me with both these languages is that people not fluent with FP could write code that isn't idiomatic at all. It's probably a bit harder to do in Haskell.
mbac32768•6mo ago
OCaml is eager (strict) by default instead of lazy, and allows imperative code with side effects. Both are escape hatches from the pure FP world.
So, performance is easier to reason about and you can interact with your side-effecty real world stuff without having to reorganize your whole program around the correct monad.
Most of the time you want your loops to be higher-order functions, but once in a while you want to just build a vector from a for loop. OCaml lets you do it without it being a whole intervention.
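A small sketch of both styles side by side (function names hypothetical): the same table of squares built with `Array.init` and with a plain imperative for loop over a mutable array.

```ocaml
(* Hypothetical example: the same squares table built two ways. *)

(* Higher-order style: Array.init calls the function for each index. *)
let squares_hof n = Array.init n (fun i -> i * i)

(* Imperative style: allocate, then fill in place with a for loop. *)
let squares_loop n =
  let a = Array.make n 0 in
  for i = 0 to n - 1 do
    a.(i) <- i * i
  done;
  a

let () = assert (squares_hof 5 = squares_loop 5)
```

Both are ordinary OCaml; neither requires a monad or any restructuring of the surrounding program.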
droideqa•6mo ago
kcsrk•6mo ago
nextos•6mo ago
djtango•6mo ago
Laziness by default was definitely an opinionated design choice, and one you feel when using the language in production.
tome•6mo ago
Mk2000•6mo ago
mbac32768•6mo ago
anyway, it's a fairly trivial wrapper to handle the odd annoying thing that raises
Option.try_with (fun () -> thing_that_exns ())
Result.try_with (fun () -> thing_that_exns ())
(it would be nice if you could tell if something raises by the type signature somehow though)
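For anyone not using Core, a stdlib-only sketch of a similar wrapper is easy to write (the name `try_with` here mirrors Core's but is otherwise hypothetical):

```ocaml
(* Hypothetical stdlib-only wrapper: run a thunk, and reflect any
   exception in the return type as a Result instead of raising. *)
let try_with f =
  match f () with
  | v -> Ok v
  | exception e -> Error e

let () =
  assert (try_with (fun () -> 6 / 2) = Ok 3);
  assert (try_with (fun () -> 1 / 0) = Error Division_by_zero)
```

The `match ... with exception` form (OCaml 4.02+) makes this a one-liner, though it still doesn't give you what the parent wants: a type signature that tells you a function raises in the first place.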
KevinMS•6mo ago
aguluman•6mo ago
jganetsk•6mo ago
sideeffffect•6mo ago
https://www.youtube.com/playlist?list=PL0DsGHMPLUWVy9PjI9jOS...