You can use Bun to compile to native binaries without jumping through hoops. It's not mature, but it works well enough that we use it at work.
Rust allows low-level programming and static compilation, while still providing abstraction and safety. A good ecosystem and stable build tools help massively as well.
It is one of the few languages which managed to address a real-life need in novel ways, rather than incrementing on existing solutions and introducing new trade-offs.
It's a detail, but this is a little bit off. RAM latency is roughly ~100ns; CPUs average a couple of instructions per cycle and a few cycles per ns.
Then in the analogy, a stall on RAM is about a 10-minute wait; not quite as bad as losing entire days.
Take Apple's latest laptops. They have 16 CPU cores: 12 of those clock at 4.5 GHz and can decode/dispatch up to 10 instructions per cycle; 4 of those clock at 2.6 GHz, and I'm not sure about their decode/dispatch width, but let's assume 10. Those decode widths don't translate to that many instructions-per-cycle in practice, but let's roll with it because the order of magnitude is close enough.
If the instructions are just right, that's 644 instructions per nanosecond (12 × 4.5 GHz × 10, plus 4 × 2.6 GHz × 10). Or, roughly a million times faster than the 6502 in the Apple II! Computers really have got faster, and we haven't even counted all the cores yet.
Scaling those to one per second, a RAM fetch taking 100ns would scale to 64,400 seconds, which is 17.9 hours, most of a day.
Fine, but we forgot about the 40 GPU cores and the 16 ANE cores! More instructions per ns!
Now we're definitely into "days".
For the purpose of the metaphor, perhaps we should also count the multiple lanes of each vector instruction on the CPU, and the lanes on the GPU cores, as if they were separate instructions.
One way to measure that, which seems fair and useful to me, is to look at TOPS instead - tera operations per second. How many floating-point calculations can the processor complex do per second? I wasn't able to find good figures for the Apple M4 Max as a whole, only the ANE component, for which 38 TOPS is claimed. For various reasons it's reasonable to estimate the GPU is the same order of magnitude in TOPS on those chips.
If you count 38 TOPS as equivalent to "CPU instructions" in the metaphor, then scale those to 1 per second, a RAM fetch taking 100ns scales to a whopping 43.9 days on a current laptop!
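If you want to check the arithmetic, here's a minimal sketch (taking the claimed 38 TOPS and 100ns figures at face value):

    fn main() {
        let ops_per_sec = 38.0e12; // claimed ANE throughput: 38 TOPS
        let fetch_sec = 100.0e-9; // RAM fetch latency: 100 ns
        let ops_per_fetch = ops_per_sec * fetch_sec; // ~3.8 million ops per fetch
        let days = ops_per_fetch / 86_400.0; // scale ops to 1 per second
        println!("{days:.1} days"); // prints "44.0 days" (43.98 unrounded)
    }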
This scenario where all your 16 cores are doing 10 instructions per clock assumes everything is running without waiting, at full instruction-level and CPU-level parallelism. It's a measure of the maximum on-paper throughput when you're not blocked waiting on memory.
You could compare that to the maximum throughput of the RAM and the memory subsystem, and that would give you meaningful numbers (for instance, how many bytes/cycle can my cores handle? How many GB/s can my whole system process?).
Trying to add up the combined throughput of everything you can on one side and the latency of a single fetch on the other side will give you a really big number, but as a metaphor it will be more confusing than anything.
For the other 10% software that is performance-sensitive or where I need to ship some binary, I haven't found a language that I'm "happy" with. Just like the author talks about, I basically bounce between Go and Rust depending on what it is. Go is too simple almost to a fault (give me type unions please). Rust is too expressive; I find myself debugging my knowledge of Rust rather than the program (also I think metaprogramming/macros are a mistake).
I think there's space in the programming language world for a slightly higher level Go-like language with more expressiveness.
Too bad the binaries are 60MB at a minimum :(
You can't synthesize ad hoc union types (https://www.typescriptlang.org/docs/handbook/2/everyday-type...)
There's no notion of literal types (https://www.typescriptlang.org/docs/handbook/2/everyday-type...)
There's no type narrowing (which gets you a kind of sum type) (https://www.typescriptlang.org/docs/handbook/2/narrowing.htm...)
Most of the type manipulation features (keyof, indexed access types, conditional types, template literal types...) are missing (https://www.typescriptlang.org/docs/handbook/2/types-from-ty...)
All the niche languages have a chicken and egg problem.
Only way around that is to be able to piggyback on C or JavaScript or Java.
The worst part is the `no-floating-promises` catch-22. Without it, Knex (some ORM toolkit in this codebase) can crash (segfault equivalent) the entire runtime on a codebase that compiles. With it, Knex's query builders will fail the lint.
It was confusing. The type system was sophisticated enough that I could generate a CamelCaseToSnakeCase<T> type but somehow too weak to ensure object borrow semantics. Programmers on the codebase would frequently forget to use `await` on something causing a later hidden crash until I added the `no-floating-promises` lint, at which point they had to suppress it on all their query builders.
One could argue that they should just have been writing SQL queries, and I did, but it didn't take. So the entire experience was fairly nightmarish.
And then gave up in disgust.
Look, I'm no genius, not by a long shot. But I am both competent and experienced. If I can't make these things work just by messing with it and googling around, it's too damned hard.
The author doesn't want manual memory management, but still decides to go with Rust.
But it requires me to manage memory and lifetimes, which I think is something the compiler should do for me.
The author wants the compiler to do memory management? How does Rust achieve this? You're not often manually managing memory in the same sense as in other low-level languages. You're not invoking malloc and free directly. Thanks to ownership, when you do allocate, Rust will call free for you.
It's somewhere in between fully manual and fully automatic. It can feel more like one or the other based on what you're doing, but most of the time, on average, it feels automatic.
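A minimal sketch of what that looks like in practice:

    fn main() {
        let s = String::from("hello"); // heap allocation, owned by `s`
        let t = s; // ownership moves to `t`: no copy, no GC
        println!("{t}");
    } // `t` goes out of scope here and the compiler inserts the free for you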
> Rust... But it requires me to manage memory and lifetimes, which I think is something the compiler should do for me.
I really wanted to like Rust and I wrote a few different small toy projects in it. At some point, knowledge of the language becomes a blocker rather than knowledge of the problem space, but this is a skill issue that I'm sure would lessen the more I used it.
What really set me off was how every project turned into a grocery list of crates that you need to pull in in order to do anything. It started to feel embarrassing to say that I was doing systems programming when any topic I would google in rust would lead me to a stack overflow saying to install a crate and use that. There seemed to be an anti-DIY approach in the community that finally drew me away.
If that creator's vibe happens to match yours this could be beautiful, at least for personal projects. It's hard to imagine this scaling. A triple A studio hiring panel: "You've applied for a job but we write only Jai here. We notice you haven't submitted any obsessive fan art about Jonathan Blow. Maybe talk us through the moment you realised he was right about everything?"
It's a byte string.
> rune is the set of all Unicode code points.
We copied the awful name from Go … and the docs are wrong.
Five different boolean types?
Zero values. (Every value has some default value, like in Go.)
Odin also includes the Billion Dollar Mistake.
> There seemed to be an anti-DIY approach in the community that finally drew me away.
It's a "let a thousand flowers bloom" approach, at least until the community knows which design stands a good chance of not being a regretted addition to the standard library.
There are some things that feel a little weird, like the fact that often when you want a more complex data structure you end up putting everything in a flat array/map and using indices as pointers. But I think I've gotten used to them, and I've come up with a few tricks to make it better (like creating a separate integer type for each "pointer" type I use, so that I can't accidentally index an object array with the wrong kind of index).
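A minimal sketch of that separate-index-type trick (names made up for illustration):

    // one distinct index type per object array
    struct NodeId(usize);
    struct EdgeId(usize);

    struct Graph {
        nodes: Vec<String>,
        edges: Vec<(NodeId, NodeId)>,
    }

    impl Graph {
        fn node(&self, id: NodeId) -> &str {
            &self.nodes[id.0] // passing an EdgeId here is a compile error
        }
        fn edge(&self, id: EdgeId) -> &(NodeId, NodeId) {
            &self.edges[id.0]
        }
    }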
Rust is one of those languages that change how you think, like Haskell or Lisp or Forth. It won't be easy, but it's worth it.
I am a fan of Rust but it’s definitely a terse language.
However, there are definitely signs that they have thought about making it as readable as possible (by omitting implicit things unless they're overridden, like lifetimes).
I'm reminded also of a passage in a programming book I once read about "the right level of abstraction". The best level of abstraction is the one that cuts to the meat of your problem the quickest - spending a significant amount of time rebuilding the same abstractions over and over (which is unfortunately often the case in C/C++) is not actually simpler, even if the language specifications themselves are simpler.
C codebases in particular, to me, are nearly inscrutable unless I spend a good amount of time unpicking the layers of abstractions that people need to write to make something functional.
I still agree that Rust is a complex language, but I think that largely just means it's frontloading a lot of the understanding about certain abstractions.
Adding features in particular is a breeze, and the compiler/language will automatically track for you the places that still use the old set of traits.
Tooling is still newer, though, and needs polish. Generics handling is interesting at times, and there are related missing features in the language, vis-à-vis specialization in particular.
Basic concurrency handling is also quite different in Rust than in other languages, but it is thus usually safer.
But it does! To quote my top-level comment:
> What about race conditions, null pointers indirectly propagated into functions that don't expect null, aliased pointers indirectly propagated into `restrict` functions, and the other non-local UB causes?
In other words: you set some pointer to NULL, this is OK in that part of your program, but then the value travels across layers, you've skipped a NULL check somewhere in one of those layers, NULL crosses that boundary and causes UB in a function that doesn't expect NULL. And then that UB itself also manifests in weird non-local effects!
Rust fixes this by making nullability (and many other things, such as thread-safety) an explicit type property that's visible and force-checked on every layer.
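A minimal sketch of the difference (function names are made up):

    // the signature admits absence, and the compiler makes you handle it
    fn greet(user: Option<&str>) {
        match user {
            Some(name) => println!("hello, {name}"),
            None => println!("hello, stranger"),
        }
    }

    // this one can never receive "null": no check needed, on any layer
    fn shout(user: &str) {
        println!("HELLO, {user}");
    }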
Although, I agree that things like macros and trait resolution ("overloading") can sometimes be hard to reason about. But this is offset by the fact that they are still deterministic and knowable (albeit complex).
> in fact it helps because it allows compilers to turn it a trap without requiring it on weak platforms
The "shared xor mutable" rule in Rust also helps the compiler a lot. It basically allows it to automatically insert `restrict` everywhere. The resulting IR is easier to auto-vectorize and you don't need to micro-optimize so often (although sometimes you do, when it comes to eliminating bounds checks or stack copies).
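A minimal sketch of the rule in action:

    fn add_from(dst: &mut i32, src: &i32) {
        // `&mut` guarantees `dst` aliases nothing else, much like a C
        // `restrict` annotation, so the compiler can optimize freely
        *dst += *src;
    }

    fn main() {
        let mut x = 1;
        let y = 2;
        add_from(&mut x, &y); // fine
        // add_from(&mut x, &x); // rejected: `x` can't be borrowed as
        //                       // shared and mutable at the same time
    }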
> Restrict is certainly dangerous, but also rarely used and a clear warning sign, compare it to "unsafe".
It's NOT a clear warning sign, compared to `unsafe`. To call an unsafe function, the caller needs to explicitly enter an `unsafe` block. But calling a `restrict` function looks just like any normal function call. It's easy to miss in a code review or when upgrading the library that provides the function. That's the problem with C and C++, really. The `unsafe` distinction is too useful to omit.
That's exactly because it's too dangerous and the developers quickly learn to avoid it instead of using it where appropriate! Same with multithreading. C leaves a lot of optimization on the table by making the available tools too dangerous and forcing people to avoid them altogether.
That's how you get stuff like memory-safe Rust PNG decoders being 1.5x faster than established C alternatives that had much more effort put into them (https://www.reddit.com/r/rust/comments/1ha7uyi/memorysafe_pn...). Or the first parallel CSS engine being written in Rust after numerous failed attempts in C++ (https://www.reddit.com/r/rust/comments/7dczj9/can_stylo_be_i...). Read those threads in full, there are some good explanations there.
> you need to exaggerate the practical problems in C
I thought the famous "70% of vulnerabilities" report settled this once and for all.
Care to elaborate why?
> And cherry picking individual benchmarks too.
Do you have any general, comprehensive benchmarks or statistics that would indicate the opposite? I would include one if I had one at hand, because that would be a stronger argument! But I'm not aware of such benchmarks. I have to cherry pick individual projects. I don't want to.
I still claim that, as a general trend, Rust replacements are faster while also being less bug-prone and taking much less time to write. Another such example is ripgrep.
C can't match that. In C, you're basically acting as a human compiler, writing lots of code that could be generated if you used a more expressive language. Plus, as has been mentioned, Rust supports easy and safe refactoring better than any language outside of the Haskell/ML space.
The advantages of Rust are a package which includes safety, expressiveness, refactoring support. You don't need to exaggerate anything for that package to make sense.
The fact that you, as an individual, prefer to use an unsafe, weakly typed language isn't very relevant to that.
The issue is not that it's not possible to write secure programs in C, the issue is that in practice, on average, people don't.
Pushing the use of memory-safe languages will reduce the number of security vulnerabilities in the entire software ecosystem.
This is a line of thinking I used to see commonly when dynamic typing was all the rage. I think the difference comes from people who primarily work on projects where they are the sole engineer vs. ones where they work with n+1 other engineers.
"just add assertions" only works if you can also sit on the shoulder of everyone else who is touching the code, otherwise all it takes is for someone to come back from vacation, missing the refactor, to push some code that causes a NULL pointer dereference in an esoteric branch in a month. I'd rather the compiler just catch it.
Furthermore, expressive type systems are about communication. The contracts between functions. Your CPU doesn't care about types - types are for humans. IMO you have simply moved the complexity from the language into my brain.
You mean obsolete and subtly wrong comments buried among preprocessor directives in some far-upstream header files?
This happens a lot in discussion about programming complexity. What you are doing is changing the original problem to a much simpler one.
Consider a parsing function parse(string) -> Option<Object>
This is the original problem, "Write a parsing function that may or may not return Object"
What a lot of people do is they sidetrack this problem and solve a much "simpler problem". They instead write parse(string) -> Object
Which "appears" to be simpler, but when you probe further, they handwave the "Option" part with just, "well, it just crashes and dies".
This is the same problem with exceptions, a function "appears" to be simple: parse(string) -> Object but you don't see the myriads of exceptions that will get thrown by the function.
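In Rust terms, the honest signature forces every caller to confront the missing case; a minimal sketch (with `Object` as a stand-in for whatever the parser produces):

    struct Object; // stand-in for the parsed result

    fn parse(input: &str) -> Option<Object> {
        if input.is_empty() { None } else { Some(Object) }
    }

    fn main() {
        match parse("") {
            Some(_) => println!("parsed"),
            None => println!("bad input"), // this arm can't be silently skipped
        }
    }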
But in the end, you can write Option just fine in C. I agree though that C sometimes can not express things perfectly in the type system. But I do not agree that this is crucial for solving these problems. And then, also Rust can not express everything in the type system. (And finally, there are things C can express but Rust can't).
No, you can't. In the sense that the compiler doesn't have exhaustiveness checks and can't stop you from accessing the wrong variant of a union. An Option in C would be the same as manually written documentation that doesn't guarantee anything.
std::optional in C++ is the same too. Used operator* or operator-> on a null value? Too bad, instant UB for you. It's laughably bad, given that C++ has tools to express tagged unions in a more reliable way.
> And then, also Rust can not express everything in the type system. (And finally, there are things C can express but Rust can't).
That's true, but nobody claims otherwise. It's just that, in practice, checked tagged unions are a single simple feature that allows you to express most things that you care about. There's no excuse for not having those in a modern language.
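A minimal sketch of what "checked" buys you here (a toy example):

    enum Shape {
        Circle { r: f64 },
        Rect { w: f64, h: f64 },
    }

    fn area(s: &Shape) -> f64 {
        // the tag is checked for you: reading the wrong variant is
        // impossible, and a missing arm is a compile error
        match s {
            Shape::Circle { r } => std::f64::consts::PI * r * r,
            Shape::Rect { w, h } => w * h,
        }
    }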
And part of the problem is that tagged unions are very hard to retrofit into legacy languages that have null, exceptions, uninitialized memory, and so on. And wouldn't provide the full benefit even if they could be retrofitted without changing these things.
The solution has to scale linearly with the problem at hand; that is what it means to have a good solution.
I agree with the article that Rust is overkill for most use cases. For most projects, just use a GC and be done with it.
> But I do not agree that this is crucial for solving these problems. And then, also Rust can not express everything in the type system.
This can be taken as a feature. For example, is there a good reason this is representable?
struct S s = 10;
I LOVE the fact that Rust does not let me get away with half-ass things. Of course, this is just a preference. Half of my coding time is writing one-off Python scripts for analysis and data processing, I would not want to write those in Rust.
> But in the end, you can write Option just fine it C.
Even this question has a deeper question underneath it. What do you mean by "just fine"? Because to me, tagged enums or NULL are NOT the same thing as algebraic data types.
This is like saying floating point is just fine for integer calculations. Maybe, maybe for you it's fine to use floating point to calculate pointers, but for others it is not.
You won't understand it unless you refactor some Rust programs.
Bunny summed it up rather well. He said in most languages, when you pull on some thread, you end up disappearing into a knot, and your changes just create a bigger knot. In Rust, when you pull on a thread, the language tells you where it leads. Creating a bigger knot generally leads to compile errors.
Actually he didn't say that, but I can't find the quote. I hope it was something like that. Nonetheless he was 100% spot on. That complexity you bemoan about the language is certainly there - but it's different to what you have experienced before.
In most languages, complex features tend to lead to complex code. That's what made me give up on Python in the end. When you start learning Python, it seems a delightfully simple yet powerful language. But then you discover metaclasses, monkey patching, and decorators, which all seem like powerful and useful tools, and you use them to do cool things. I twisted Python's syntax into grammar productions, so you could write normal-looking Python code that got turned into an LR(1) parser, for example. Then you discover other people's code that uses those features to produce some other cute syntax, and it has a bug, and when you look closely your brain explodes.
As you say, C doesn't have that problem, because it's such a simple language. C++ does have that problem, because it's a very complex language. I'm guessing you are making a deduction from those two examples that complex languages lead to hard-to-understand code. But Rust is the counterexample. Rust's complexity forces you to write simple code. Turns out it's the complexity of the code that matters, not the complexity of the language.
In C, you can make partial changes and accept a temporary inconsistency. This gives you a lot of flexibility that I find helpful.
Then we have the functions that might be re-entrant or not, in the presence of signals, threads,...
But you're willing to write many comments complaining that Rust is hard to refactor. Rust is the easiest language to refactor with I've ever worked in, and I've used a couple dozen or so. When you want to change something, you change it, and then fix compiler errors until it stops complaining. Then you run it, and it works the first time you run it. It's an incredible experience to worry so little about unknown side-effects.
Their refactoring comments look focused on C versus C++ to me, with a bit of guessing Rust is like C++ in a way that is clearly labeled as speculation.
So I don't see the problem with anything they said about refactoring.
Oh? So how do you refactor a closure into a named function in Rust?
I have found this to be one of the most common failure modes that makes people want to punch the monitor.
(Context: in almost all programming languages, a closure is practically equivalent to an unnamed function--refactoring a closure to a named function tends to be pretty straightforward. This isn't true for Rust--closures pick up variable lifetime information that can be excruciatingly difficult to unwind to a named function.)
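A minimal sketch of the easy end of that refactor (the pain scales up once closures return borrows or get stored in structs):

    fn demo() {
        let prefix = String::from(">> ");
        // the closure silently borrows `prefix` with an inferred lifetime
        let annotate = |s: &str| format!("{prefix}{s}");
        println!("{}", annotate("hi"));
    }

    // the named version can't capture, so the borrow becomes a parameter;
    // in hairier cases, the inferred lifetimes must be spelled out by hand
    fn annotate(prefix: &str, s: &str) -> String {
        format!("{prefix}{s}")
    }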
Really? How confident are you to change a data structure that uses an array with linear search lookup to a dictionary? Or a pointer that now is nullable (or is now never null)?
Unless you have rigorous tests or the code is something trivial, this would be a project of its own.
I am pretty sure I can swap out the implementation of the dictionary in the Rust compiler, and by the time the compilation issues are worked out, the code would be correct (even before running the tests).
Refactoring Rust projects is clearly the easiest because the compiler and type system ensure the program is correct at least in terms of memory access and shared resource access. It doesn't protect me from memory leaks and logical errors. But since Rust has a built-in testing framework, it's quite easy to prepare tests for logical errors before refactoring.
C/C++ refactoring is a nightmare - especially in older projects without modern smart pointers. Every change in ownership or object lifetime is a potential disaster. In multi-threaded applications it's even worse - race conditions and use-after-free bugs only manifest at runtime, often only in production. You have to rely on external tools like Valgrind or AddressSanitizer.
Python has the opposite problem - too much flexibility. You can refactor an entire class, run tests, everything passes, but then in production you discover that some code was dynamically accessing an attribute you renamed. Type hints help, but they're not enforced at runtime.
Rust forces you to solve all these problems at compile time. When you change a lifetime or ownership, the compiler tells you exactly where you need to fix it. This is especially noticeable in async code - in C++ you can easily create a dangling reference to a stack variable that an async function uses. In Rust, it simply won't compile.
The only thing where Rust lags a bit is compilation speed during large refactors. But that's a small price to pay for the certainty that your code is memory-safe.
Another area where Rust absolutely excels is when using AI agents like Claude Code. It seems to me that LLMs can work excellently with Rust programs, and thanks to the support of the type system and compiler, you can get to functional code quickly. For example, Claude Code can analyze Rust programs very well and generate documentation and tests.
I think Rust with an AI agent has the following advantages:
Explicit contract - the type system enforces clear function interfaces. The AI agent knows exactly what a function expects and what it returns.
Compiler as collaborator - when AI generates code with an error, it gets a specific error message with the exact location and often a suggested solution. This creates an efficient feedback loop.
Ownership is explicit - AI doesn't have to guess who owns data and how long it lives. In C++ you often need to know project conventions ("here we return a raw pointer, but the caller must not deallocate it").
Fewer implicit assumptions - in Python, AI can generate code that works for specific input but fails on another type. Rust catches these cases at compile time.
Fair, but this is relative. C++ has 50 years of baggage it needs to support--and IMO the real complexity of C++ isn't the language, it's the ecosystem around it.
Another way of putting it is, if you didn't care about backwards compatibility, you could greatly simplify C++ without losing anything. You can't say the same about Rust; the complexity of Rust is high-entropy, C++'s is low-entropy.
For Rust, I think it is a bit of a different story and it is harder to point to specific features. The language is clearly much better designed (because it was more designed and did not evolve so much) and because of its roots in functional programming. Just overall, the complexity is too high in my opinion, and it's a bit too idealistic and not pragmatic enough.
This is an ideal programming language for certain types of people. It also gives the programming language certain properties that make it useful when provable correctness is a concern (see Ferrocene).
I haven't been following their work, though. It seems they are working on stacked borrows.
They didn’t do the best with what they had. Sure, some problems were caused by C backwards compatibility.
But so much of the complexity and silliness of the language was invented by the committee themselves.
Those folks eventually move to something else and adopt "C++ the good parts" instead.
The big weak spot really is lack of community outside of Apple platforms.
Rust is certainly not the simplest language you'll run into, but C++ is incredibly baroque, they're not really comparable on this axis.
One difference which is already important and I think will grow only more important over time is that Rust's Editions give it permission to go back and fix things, so it does - where in C++ it's like venturing into a hoarder's home when you trip over things which are abandoned in favour of a newer shinier alternative.
If you're trying to remember what the language is where there's no immediately obvious straightforward way to get the length of an array, it's not Rust or C++; you must have been thinking of C.
In Rust the arrays aren't stunted left over primitive types which weren't gifted modern features, array.len() works because all Rust's types get features like this, not just the latest and greatest stuff.
(C++ arrays are different from arrays in many other programming languages, though not necessarily Rust, in that their type specifies their length, so in a way this is something you already "have" but certainly there are cases where it is convenient to "get" this information from an object.)
Perl is so bad about this that I once worked on a very old codebase in which I could tell approximately when it was written based on which features were being used.
That's one difference. And the other important differences are:
- Rust apps can depend on library "headers" written in other editions. That's the whole deal with editions! Breaking changes are local to your own code and don't fracture the ecosystem.
- Rust has a built-in tool (`cargo fix --edition`) that automatically migrates your code to the next edition while preserving its behavior. In C++, upgrading to the next standard is left as an exercise for the reader (just like everything else). And that's why it's done so rarely and so slowly.
Also it requires everything to be compiled with the same compiler, from source code.
There are migration tools available in some C++ compilers, like clang. Note the difference between ISO languages with multiple implementations and one driven by its reference compiler.
Not sure what you're talking about. Any specific examples?
> Also it requires everything to be compiled with the same compiler, from source code.
It's not related to editions at all. It's related to not having an implicit stable ABI.
It's possible to have a dynamic Rust library that exposes a repr(C) interface, compile it into an .so using one version of the compiler, and then compile the dependent "pure Rust" crates using another compiler that's just going to read the metadata ("headers") of that library and link the final binary together. Same as in C and C++. You just can't compile any Rust code into a stable dynamic library by default. (You can still always compile into a dylib that needs a specific compiler version.)
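A minimal sketch of such a boundary (assuming `crate-type = ["cdylib"]` in Cargo.toml):

    // the exported symbol has a plain C ABI, so a different compiler
    // version (or another language entirely) can link against the .so
    #[no_mangle]
    pub extern "C" fn add(a: u32, b: u32) -> u32 {
        a + b
    }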
You're correct that the possible changes in editions are very limited. But editions don't hinder interoperability in any way. They are designed not to. Today, there are no interoperability problems caused by editions specifically.
> compromises will be required, specially regarding possible incompatible semantic differences across editions.
That's just an assumption in your head. 4 editions later, it still hasn't manifested in any way.
Furthermore, some programmers really like complicated languages like Rust, Haskell, etc., while others like straightforward languages like Go, Python, etc.
Python belongs to the complicated languages section, people that think Python is straightforward never bothered reading all the manuals, nor had to put up with all the breaking changes throughout its history, it wasn't only 2 => 3, that was the major event, every release breaks something, even if little.
Python is absolutely not straightforward; it's a huge language with many moving parts and gotchas.
Although, I admit, both are very easy to start programming in and to start shipping broken projects that appear working at first. They are good for learning programming, but terrible for production.
> Rust didn’t really solve memory safety, it just pushed all the complexity into the type system.
Yes, that's what it did, and that's the right tradeoff in most cases. Where the compiler can't make a reasonable default choice, it should force the programmer to make a choice, and then sanity-check it.
> If one was struggling with memory errors in C++ that’s nice. If one was using Java, that still sucks.
It's nice for those struggling with uncaught exceptions, null pointer bugs, mutithreading bugs, and confusing stateful object graphs in Java.
Yes, Golang and Python and Java are very easy to start programming in. And unless we’re dealing with some really complex problem, like next-gen cryptocurrencies ;), by the time the Rust teams have gotten their code working and nice and proper as any Rust code needs to be, the Golang/Python/Java teams have already released to customers.
If one wants to be super-cautious and move accordingly slower to be really, really sure they have no memory errors, then that's fine. There's a market for that stuff. But selling this approach as a general solution is disingenuous.
...have already released a broken prototype that appears to be working for now.
I've yet to see a case where manually "hardening" your software is faster than writing a similarly good program in Rust. That's just anti-automation. Why repeat the same work in every project and bloat your codebase, when the compiler can carry that burden for you? In my experience, Rust makes you write production-grade software faster than other languages do.
> But selling this approach as a general solution is disingenuous.
I agree! There are legitimate cases where releasing a broken prototype as quickly as possible is important. There's room for that.
But I argue that it's not the case for most "serious" production software that will be maintained for any period of time. And that Rust is the preferable option for writing such production software.
Maybe Rust isn't optimized for throwaway projects and that's fine.
Additionally, given its ML influence, too many people enjoy doing Haskell level FP programming in Rust, which puts off those not yet skilled in the FP arts.
Also the borrow checker is the Rust version of Haskell burrito blogs with monads, it is hard to get how to design with it in mind, and when one gets it, it isn't that easy to explain to others still trying to figure it out.
Hence why from the outside people get this opinion over Rust.
Naturally those of us with experience in compilers, type systems theory and such, see it differently, we are at another level of understanding.
Eh. Haskell monads are a math-centric way of performing operations on wrapped types (stuff inside a monad). Rust borrow checker is way more pragmatic-centric, i.e. how can we prevent certain behaviors.
The difference being you don't see Monads being replaced by Tree-Monads without impacting the code.
> and when one gets it, it isn't that easy to explain to others still trying to figure it out.
So is going from 0-based to X-based arrays (where X is an integer); Or learning a new keyboard layout. Just because it's hard (unfamiliar) doesn't mean it's impossible.
(1) The "intimidating syntax". Hey, you do not even need to be using <$> never mind the rest of those operators. Perl and Haskell can be baroque, but stay away from that part of the language until it is useful.
(2) "Changes are not localized". I'm not sure what this means. Haskell's use of functions is very similar to other languages. I would instead suggest referring to the difficulty of predicting the (time|space) complexity due to the default lazy evaluation.
FTA:
> In contrast, Haskell is not a simple language. The non-simplicity is at play both in the language itself, as evidenced by its intimidating syntax, but also in the source code artifacts written in it. Changes are not localized, the entire Haskell program is one whole — a giant equation that will spit out the answer you want, unlike a C program which is asked to plod there step by step.
Edited to make the critique more objective.
Rust is a dead simple language in comparison.
Rust doesn’t have a standard, it has a book, so you should refer to the initialization section from Stroustrup’s C++ book to keep things fair.
In Rust, for most users, the main source of complexity is struggling with the borrow checker, especially because you're likely to go through a phase where you're yelling at the borrow checker for complaining that your code violates lifetime rules when it clearly doesn't (only to work it out yourself and realize that, in fact, the compiler was right and you were wrong) [1]. Beyond this, the main issues I run into are Rust's auto-Deref seeming to kick in somewhat at random making me unsure of where I need to be explicit (but at least the error messages basically always tell you what the right answer is when you get it wrong) and to a much lesser degree issues around getting dyn traits working correctly.
By contrast C++ has just so much weird stuff. There's three or four subtly different kinds of initialization going on, and three or four subtly different kinds of type inference going on. You get things like `friend X;` and `friend class X;` having different meanings. Move semantics via rvalue references are clearly bolted on after the fact, and it's somewhat hard to reason about the right things to do. It has things like most-vexing parse. Understanding C++ better doesn't give you more confidence that things are correct; it gives you more trepidation as you know better how things can go awry.
[1] And the commonality of people going through this phase makes me skeptical of people who argue that you don't need the compiler bonking you on the head because the rules are easy to follow.
† This phrase would have been idiomatic many years ago but it is still used with the same intent today even though its meaning is no longer obvious, the idea is that a farmer at market told you this sack you can't see inside ("poke") has a piglet in it, so you purchase the item for a good price, but it turns out there was only a kitten in the bag, which (compared to the piglet) is worthless.
Move by default is the thing which complicates Rust so much.
I've heard this story be accounted to Gauss, not Euler.
The earliest reference is a biography of Gauss published a year after his death by a professor at Gauss' own university (Gottingen). The professor claims that the story was "often related in old age with amusement and relish" by Gauss. However, it describes the problem simply as "the summing of an arithmetic series", without mention of specific numbers (like 1-100). Also, it was posed to the entire classroom - presumably as a way to keep them busy for a couple of hours - rather than as an attempt to humiliate a precocious individual.
Over the last year I've started to write every new project using it. On Windows, on Linux, and on Mac.
It is honestly a wonderful language to work with. It's mature, well designed, and has a lot of similarities to Rust. It has incredible interop with C, C++, Objective-C, and even Java as of this year, which feels fairly insane. It is also ergonomic as hell and well understood by LLMs, so it is easy to get into from a 0 starting point.
Also, how is its type system and metaprogramming? Does it have type polymorphism, typeclasses, macros, etc?
In terms of its language features it has all of those and more, sometimes too many in my opinion.
I personally favour languages that are clear in their vision. However, at its core Swift is a highly performant language that is designed really well, has beautiful syntax, and in the last 5 or so years I have been impressed with its direction. It is being developed by a good team who listen to their community, but not at the expense of the language's vision.
My favourite aspect of using it, though, is its versatility. Whether you're working on embedded systems, a game, a web server, or even a static site generator, it always feels like the language is there to support your vision, but still give you the fine-grained control you need to optimise for performance.
I also collaborate with a friend who is a Rust developer, and he's always super happy to work on a Swift project with me, so I feel like that's enough praise when you can pull a Rust dev away from their beloved.
Function calling is irksome, with implicit parameters and mandatory parameters vaguely mixing. And the typing is appalling - there are multiple bottom types with implicit narrowing casts, one of them being NSObject, so if you're doing any work with the Apple APIs you end up with a mess.
We got it right with Java and Rust; C++ does a passable job; why Swift had to be as incomprehensible as TypeScript, I cannot fathom.
Regarding redefining functions, what could the author mean? Using global function pointers that get redefined? Otherwise, redefining a function wouldn't affect other modules that are compiled into separate object files. Confusing.
C is simple in that it does not have a lot of features to learn, but because of e.g. undefined behavior, I find its very hard to call it a simple language. When a simple bug can cause your entire function to be UB'd out of existence, C doesn't feel very simple.
In Haskell, side effects actually _happen_ when the pile of function applications evaluates to IO data type values, but you can think about it very locally; that's what makes it so great. You could get those nice properties with a simpler model (i.e. don't make the language lazy, but still have explicit effects), but, yeah.
The main thing that makes Haskell not simple IMO is that it just has such a vast set of things to learn. Normal language feature stuff (types, typeclasses/etc, functions, libraries), but then you also have a ton of other special Haskell stuff: more advanced type system tomfoolery, various language extensions, some of which are deprecated now or have better alternatives nowadays (like type families vs functional dependencies), hierarchies of unfamiliar math terms that are essentially required to actually do anything, etc, and then laziness/call-by-name/non-strict eval, which is its own set of problems (space leaks!). And yes, unfamiliar syntax is another stumbling block.
IME, Rust is actually more difficult than Haskell in a lot of ways. I imagine that once you learn all of the things you need to learn it is different. The way I've heard to make it "easier" is to just clone/copy data any time you have a need for it, but, what's the point of using Rust, then?
I wonder if the author considered OCaml or its kin. I haven't kept track of what's all available, but I've heard that better tooling and better/more familiar syntax exist now. OCaml is a good language and a good gateway into many other areas.
There are some other langs that might fit, like Nim, or Zig, or Swift. I'd still like to do more with Swift; the language is interesting.
I think the author means that the language constructs themselves have well-defined meanings, not that the semantics don't allow surprising things to happen at runtime. Small changes don't affect the meaning of the entire program. (I'm not sure I agree that this isn't the case for e.g. Haskell as well, I'm just commenting on what I think the author means.)
> IME, Rust is actually more difficult than Haskell in a lot of ways. I imagine that once you learn all of the things you need to learn it is different.
Having written code in both, Rust is quite a lot easier than Haskell for a programmer familiar with "normal" languages like C, C++, Python, whatever. The pure-functional nature of Haskell is quite a big deal that ends up contorting my programs into weird poses; e.g., once you run into the need to compose monads, the complexity ramps way up.
> The way I've heard to make it "easier" is to just clone/copy data any time you have a need for it, but, what's the point of using Rust, then?
Memory safety. And the fact that this is the example of Rust complexity just goes to show what a higher level Haskell's difficulty is.
Composing monads is another one of those painful parts of Haskell. I remember being so frustrated while learning Haskell that there was all of this "stuff" to learn to "use monads", but it seemed to not have anything to _do_ with `Monad`, and people told me what I needed to know was `Monad`. Someday I wanna write up all that advice I wish I had received when learning Haskell. A _lot_ of it will be about dealing with general monad "stuff".
The thing that frustrated me in Rust coming from something like Ruby was how frequently I could not _do_ a very straightforward thing, because, for example, some function is an FnOnce instead of an FnMut/Fn, or the other way around, or whatever. Here's some of the experience from that time: https://joelmccracken.github.io/entries/a-simple-web-app-in-.... It became clear to me eventually that some very minor changes in requirements could necessitate massive changes in how the whole data model is structured. Maybe eventually I'd get good enough at Rust that this wouldn't be a huge issue, but I had no way of seeing how to get to that point from where I was.
In contrast, I can generally predict when some requirement is going to necessitate a big change in haskell: does it require a new side effect? if so, it may need a big change. If not, then it probably doesn't. But, I've found it surprisingly easy to make big changes from the nice type system.
I really don't get when rust folks claim "memory safety" like this; we've had garbage collection since 1959. Rust gives you memory safety with tight control over resource usage; memory safety is an advantage that Rust has over C or C++, but not over basically every other language people still talk about.
If you just clone/copy every data structure left and right, then you're in a _worse_ spot than with garbage collection/reference counting when it comes to memory usage. I _guess_ you are getting the ability to avoid GC pauses, but why not use a reference-counted language if that's the problem? Copying/cloning data all of the time can't be faster than the overhead of reference counting, can it?
In haskell, I did find that once I understood the various pieces I needed to work with, actually solving problems (e.g. composing monads) is much easier. I don't generally have a hard time actually programming Haskell. All that effort is front-loaded though, and it can be hard to know exactly what you need to learn in order to understand some new unfamiliar thing.
Your preferring Rust over Haskell is totally fine BTW, I'm just trying to draw a distinction between something that's hard to _use_ vs something that's hard to _learn_. Many common languages are much harder to use IME; I feel like I have to think so hard all of the time about every line of code to make sure I'm not missing something, some important side effect that I don't know about that is happening at some function call. With Haskell, I can generally skim the code and find what's important quite quickly because of the type system.
I do plan to learn Rust at some point still whenever the planets align and I need to know something like it. Until then, there are so many other things that interest me, and not enough hours in the day. I still wonder if I have really missed out on some benefit from learning to think more about data ownership in programs.
I understand your frustration, and Rust does get too low-level sometimes (see https://without.boats/blog/notes-on-a-smaller-rust/). But the semantic difference between FnOnce and Fn is actually important. FnOnce consumes its environment and makes it unavailable later. This is an important property. When you don't want that, "just" use Fn, wrap everything in an Arc and clone everything (I understand that this is more ceremony than in other languages, and that can be unjustified).
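A minimal sketch of the distinction:

    fn main() {
        let s = String::from("hello");
        let consume = move || s; // FnOnce: hands out its captured String
        let _owned = consume();
        // consume(); // error: already called, its environment is gone

        let n = 1;
        let read = || n + 1; // Fn: only reads its environment
        assert_eq!(read(), 2);
        assert_eq!(read(), 2); // callable any number of times
    }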
> I really don't get when rust folks claim "memory safety" like this; we've had garbage collection since 1959.
Agree 100%! What Rust actually gives you is predictability, reliability and compile time checks, while still allowing to write relatively ergonomic imperative and "impure" code. And a sane ecosystem of tools that are designed to be reliable and helpful. I'm currently writing a post about this.
It also gives compile-time data race protection, which is still missing from some other memory-safe languages.
> I still wonder if I have really missed out on some benefit from learning to think more about data ownership in programs.
Yeah :) Affine types + RAII (ownership) allow you to express some really cool things, such as "Mutex<T> forces you to lock the mutex before accessing the T and automatically unlocks it when you're done", or "commits and rollbacks destroy the DatabaseTransaction and make it statically impossible to interact with", or "you'll never forget to run cleanup code for objects from external C libraries" (https://www.reddit.com/r/programming/comments/1l1nhwz/why_us...)
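The Mutex point as a minimal sketch:

    use std::sync::Mutex;

    fn main() {
        let counter = Mutex::new(0);
        {
            // the only way to reach the data is through the lock
            let mut guard = counter.lock().unwrap();
            *guard += 1;
        } // guard dropped here: unlocked automatically, impossible to forget
        println!("{}", counter.lock().unwrap());
    }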
The whole "memory management" thing is mostly a historical accident. We could have a "smaller Rust" that auto-boxes values, has a runtime that handles reference cycles for you, and doesn't guarantee anything about the stack vs heap: https://without.boats/blog/notes-on-a-smaller-rust/
Lol, wut? What about race conditions, null pointers indirectly propagated into functions that don't expect null, aliased pointers indirectly propagated into `restrict` functions, and the other non-local UB causes? Sadly, C's explicit control flow isn't enough to actually enable local reasoning in the way that Rust (and some other functional languages) do.
I agree that Go is decent at this. But it's still not perfect, due to "downcast from interface{}", implicit nullability, and similar fragile runtime business.
I largely agree with the rest of the post! Although Rust enables better local reasoning, it definitely has more complexity and a steeper learning curve. I don't need its manual memory management most of the time, either.
Related post about a "higher-level Rust" with less memory management: https://without.boats/blog/notes-on-a-smaller-rust/
But that doesn't mean it's a good idea to use such style for PRs, lol.
That's just because you got used to it ;) (same as with modern C++ really, if you've used C++ long enough you become blind to its problems)
"Regular" Rust and C++ is fairly readable, but a quick Google for "Higher Kinded Types in Rust" ends up with [0]:
fn ap<A, B, F: Fn(&A) -> B>(x: &<Self as Plug<F>>::R, arg: &<Self as Plug<A>>::R) -> <Self as Plug<B>>::R
where Self: Plug<A> + Plug<B> + Plug<F>;
[0] https://hugopeters.me/posts/14/
There's so many good high-level languages to choose from, but when you need to go low-level, there's essentially only C, C++, Rust. Maybe Zig once it reaches 1.0.
What we need isn't Rust without the borrow checker. It's C with a borrow checker, and without all the usual footguns.
Would modules be needed, or can preprocessing still work? How much more advanced will the type system need to be? And how will pointers change to fix all the footguns and allow static borrow checking?
I started designing one (C-with-borrow-checker) way back in 2018; never got around to finishing the design or making a MVP, but I believe you can solve maybe 90% of memory problems (use after free/double free, data racing, leaking, etc) with nothing more than some additional syntax[1] to type declarations and support for only fat arrays.
IIRC, I thought that the above (coupled with an escape hatch for self-referential and/or unsafe values) was all that was needed to prevent 90% of memory errors.
Somewhere along the way scope-creep (objects would be nice, can't leave out generics, operator overloads necessary for expressing vector math, etc) turned what would have been a small project into a very large one.
-------------------------------
[1] By additional syntax, I mean using `&` and `&&` in type declarations as a qualifier similar to how `static`, `const`, `volatile`, etc qualifiers are used.
We have many popular high-level languages, but I disagree that they are good. Most of them are fragile piles of crap unsuitable for writing anything larger than a throwaway script.
(In my subjective and biased assessment, which is however based on professional experience.)
I use Rust. From what I've read, Swift seems better than the others, with Typescript and Go following closely. All four happen to be widely praised in this thread.
To clarify, I don't have "professional experience" with every popular language, including Swift and Typescript too.
Zig is basically Modula-2 in C syntax clothing.
Which would look a lot like... Rust!
I want Rust, but without panics, without generics (aside from lifetimes), without traits and bounds (aside from lifetimes), without operator overloading, without methods on structs, without RAII, without iterators, etc.
I remember the first time I was using gettext and wondered, "wait, why do I have to switch the language for my whole program if I need it for just this request?", and realized that's because GNU gettext was made like that.
And had GNU/FSF not made C the official main language for FOSS software in their manifesto (by which time C++ was already the main userspace language across Windows, OS/2, Mac OS, and BeOS), that "it is the reason for C's endurance" would hold much less than it already does nowadays, where C is mostly UNIX/POSIX, embedded, and some OS ABIs.
What does that mean, and what is it about native programs (i.e. programs AOT-compiled to machine code) that makes them feel solid? BTW, such programs are often more, not less, sensitive to OS changes.
> realizing that I was just spawning complexity that is unrelated to the problem at hand
Wait till you use Rust for a while, then (you should try, though, if the language interests you).
For me, the benefit of languages with manual memory management is the significantly lower memory footprint (speed is no longer an issue; if you think Haskell and Go are good enough, try Java, which is faster). But this comes at a price. Manual memory management means, by necessity, a lower level of abstraction (i.e. the same abstraction can cover fewer implementations). The price is usually paid not when writing the first version, but when evolving the codebase over years. Sometimes this price is worth it, but it's there, and it's not small. That's why I only reach for low level languages when I absolutely must.
You may be technically correct that they are more sensitive to the kernel interface changes. But the point is that native, static binaries depend only on the kernel interface, while the other programs also depend on the language runtime that's installed on that OS. Typical Python programs even depend on the libraries being installed separately (in source form!)
Many binaries also depend on shared libraries.
> while the other programs also depend on the language runtime that's installed on that OS
You can (and probably should) embed the runtime and all dependencies in the program (as is easily done in Java). The runtime then makes responding to OS selection/changes easier (e.g. musl vs glibc), or avoids less stable OS APIs to begin with.
Yeah, and those are also the opposite of "solid" :) That's why I qualified with "static". I'm so glad that Go and Rust promote static linking as the default (ignoring glibc).
> You can (and probably should) embed the runtime and all dependencies in the program (as is easily done in Java).
Congrats to the Java team and users, then. That makes it similar to the Go approach to binaries and the runtime, which I approve of.
So if that's what the author meant by "solid", i.e. few environmental dependencies, then it's not really about "native" or not, but about how the language/runtime is designed. Languages that started out as "scripting" languages often do rely on the environment a lot, but that's not how, say, Java or .NET work.
> I'm so glad that Go and Rust promote static linking as the default (ignoring glibc).
That doesn't work so well (and so usually not done) once you have a GUI, although I guess you consider the GUI to be part of the kernel.
A language runtime remains one, independently of how it was linked into the binary.
A language runtime is the set of operations that support the language semantics, which in C's case is everything that happens before main(), threading support (since C11), floating-point emulation (if needed), execution hooks for running code before and after main(), delayed linking, and possibly more, depending on the compiler-specific extensions.
I can give other non-GNU/Linux examples.
... what? Speed is no longer an issue? Haskell and Go? ??? How'd we go from manual memory management languages to Haskell and Go and then somehow to Java? Gotta plug that <my favorite language> somehow I guess...
It seems to me you have a deep misunderstanding of performance. If one program is 5% faster than another but at 100x memory cost, that program is not actually more performant. It just traded all possible memory for any and all speed gain. What a horrible tradeoff.
This thinking is typical in Java land [1]. You see: 8% better performance. I see: 28x the memory usage. In other words, had the Rust program been designed with the same insane memory allowance in mind as the Java program, it'd wipe the floor with it.
[1]: https://old.reddit.com/r/java/comments/n75pa0/java_beats_out...
TFA also concludes
Since I want native code ...
I think by "solid" they mean as close to metal as possible, because, as you suggest, one can go "native" with AOT. With JS/TS (languages TFA prefers), I'm not sure how far WASM's AOT will take you ... Go (the other language TFA prefers) even has PGO now on top of "AOT". (Of course they actually do want Haskell, but they probably need to get there gradually.)
> Even rust with all its hype lacks in this area imo.
This is surprising to me! I find that Rust has pretty excellent tooling, and Cargo is substantially better than the package manager in most other languages...
If I were to write such a list, the answer would probably come down to "because I wanted to pick ONE and be able to stick with it, and Rust seems solid and not going anywhere." As much as Clojure and Ocaml are, from what I've heard, right up my alley, learning all these different languages has definitely taken time away from getting crap done, like I used to be able to do perfectly well with Java 2 or PHP 5, even though those are horrible languages.
short-circuited reading this
Rust seems low-level too, but it isn't the same. It allows building powerful high-level interfaces that hide the complexity from you. E.g., RAII eliminates the need for an explicit `defer` that can be forgotten.
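A minimal sketch of that (hypothetical cleanup):

    struct Connection;

    impl Drop for Connection {
        fn drop(&mut self) {
            println!("closed"); // cleanup lives with the type, not each call site
        }
    }

    fn handler(fail: bool) -> Result<(), ()> {
        let _conn = Connection;
        if fail {
            return Err(()); // nothing to forget: drop runs on every exit path
        }
        Ok(())
    }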
But then I want to chime in and argue that the repetitive syntax isn't even close to being the main problem with Go: https://home.expurple.me/posts/go-did-not-get-error-handling...
However, errors do not seem to be commonly wrapped, tagged, or contextualized as is the case in Rust. This might weigh lower verbosity as more important than extremely structured error handling, which definitely constitutes an interesting approach.
This was brilliant performance art. Bless your heart Dear Author, I adore you.
I use Rust now for everything from CLIs to APIs and feel more productive in it end to end than python even.
My personal memorable one was bit shifting 32bit values by varying amounts, and our test vectors all failing after a compiler update, because some of the shifts were by 32. Undefined behaviour.
Maybe, if you're willing to skip over all the off-by-1 errors, double frees, overflows, underflows, and wrong API usage, if you don't need to maintain a multiplatform build environment, and if you don't support multiple architectures.
I mean, in this sense, assembly is even easier than C. Its syntax is trivial, and if that would be the only thing that matters, people should write assembly.
But they don't write assembly, because it's not the only thing that matters. So please stop considering C only in terms of easy syntax. Because syntax is the only thing that's easy in C.
Eh.... yeah? I suppose technically? But not _really_. Rust gives you the option to do that. But most programs outside of "I'm building an operating system" don't really require thinking too hard about it.
It's not like C where you're feeding memory manually, or like C++ where you have to think about RAII just right.
The Rust compiler does manage memory and lifetimes. It just manages them statically at compile-time. If your code can’t be guaranteed to be memory-safe under Rust’s rules, it won’t compile and you need to change it.
I think that of all those options, Typescript and Zig feel closest related. Zig has that same 'lightness' when writing code as Typescript and the syntax is actually close enough that a Typescript syntax highlighter mostly works fine for Zig too ;)
Good luck to the author with trying Rust. I hope he writes an honest experience report.
It's fast, compiles to native code AND javascript, and has garbage collection (so no manual memory management).
As an added bonus, you can mix Haskell-like functional code and imperative code in a single function.