What's changed since 2015 is that we ironed out some of the wrinkles in the language (non-lexical lifetimes, async) but the fundamental mental model shift required to think in terms of ownership is still a hurdle that trips up newcomers.
A good way to get people comfortable with the semantics of the language before the borrow checker is to encourage them to clone() strings and structs for a bit, even if the resulting code is not performant.
Once they dip their toes into threading and async, Arc<Mutex<T>> is their friend, and interior mutability gives them some fun distractions while they absorb the more difficult concepts.
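To make that concrete, here's a minimal sketch (identifiers are my own, not from the thread) of the clone-first style, plus sharing a counter across threads with Arc<Mutex<T>>:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Share a counter across threads: Arc gives shared ownership,
// Mutex gives safe mutation.
fn parallel_count(threads: usize) -> i32 {
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || *counter.lock().unwrap() += 1)
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let n = *counter.lock().unwrap();
    n
}

fn main() {
    // Beginner-friendly: clone() keeps the original usable, no borrow fights.
    let name = String::from("rustacean");
    let shouted = name.clone().to_uppercase();
    println!("{name} -> {shouted}");

    assert_eq!(parallel_count(4), 4);
}
```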
Great post! It's got a ton of advice for being productive, and it should be especially useful for beginners.
Languages I liked, I liked immediately. I didn’t need to climb a mountain first.
To each his own, I guess….
Almost 90% of the Rust I write these days is async. I avoid non-async / blocking libraries where possible.
I think this whole issue is overblown.
When it came time for me to undo all the async-trait library hack stuff I wrote after the feature landed in stable, I realized I wasn't really held back by not having it.
I very rarely have to care about future pinning, mostly just to call the pin macro when working with streams sometimes.
[0]: https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...
I don't know about C#, but at least in Rust, one reason is that normal (non-async) functions have the property that they will run until they return, they panic, or the program terminates. I.e. once you enter a function it will run to completion unless it runs "forever" or something unusual happens. This is not the case with async functions -- the code calling the async function can just drop the future it corresponds to, causing it to disappear into the ether and never be polled again.
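You can demonstrate this without any async runtime at all: an async block is inert until polled, so dropping the future means its body never executes. A std-only sketch (names are illustrative):

```rust
use std::cell::Cell;

// Returns whether the async block's body ever ran.
fn drop_unpolled() -> bool {
    let ran = Cell::new(false);
    let fut = async {
        ran.set(true);
    };
    // The future was never polled; dropping it means the body above
    // simply disappears into the ether, exactly as described.
    drop(fut);
    ran.get()
}

fn main() {
    assert!(!drop_unpolled());
    println!("the async body never ran");
}
```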
One reason why async-await is trivial in .NET is the garbage collector. C# rewrites async functions into a state machine, typically heap allocated. The garbage collector automagically manages lifetimes of method arguments and local variables. When awaiting async functions from other async functions, the runtime does that for multiple async frames at once, but it's fine with that: it's just a normal object graph. Another reason is that the runtime support for all that stuff is integrated into the language, the standard library, and most other parts of the ecosystem.
Rust is very different. The concurrency runtime is not part of the language; the standard library defines only the bare minimum, essentially just the APIs. The concurrency runtime is implemented by external libraries such as Tokio. Rust doesn't have a GC; instead, it has a borrow checker that insists on exactly one owner of every object at all times, makes all memory allocations explicit, and exposes all these details to the programmer in the type system.
These factors make async Rust even harder to use than normal Rust.
I’m not calling this the pinnacle of async design, but it’s extremely familiar and is pretty good now. I also prefer to write as much async as possible.
A flat learning curve means you never learn anything :-\
In point of fact, I think the intended chart of the idiom is effort (y axis) to reach a given degree of mastery (x axis)
- another think coming -> another thing coming
- couldn't care less -> could care less
- the proof of the pudding is in the eating -> the proof is in the pudding
It's usually not useful to try to determine the meaning of the phrases on the right because they don't have any. What does it mean for proof to be in a pudding for example?
The idiom itself is fine, it's just a black box that compares learning something hard to climbing a mountain. But learning curves are real things that are still used daily so I just thought it was funny to talk as if a flat one was desirable.
“The common English usage aligns with a metaphorical interpretation of the learning curve as a hill to climb.”
Followed by a graph plotting x “experience” against y “learning.”
People (colloquially) use phrases like "steep learning curve" because they imagine learning curve is something you climb up, a.k.a. a hill.
Most explanations of ownership in Rust are far too wordy. See [1]. The core concepts are mostly there, but hidden under all the examples.
- Each data object in Rust has exactly one owner.
- Ownership can be transferred in ways that preserve the one-owner rule.
- If you need multiple ownership, the real owner has to be a reference-counted cell.
Those cells can be cloned (duplicated).
- If the owner goes away, so do the things it owns.
- You can borrow access to a data object using a reference.
- There's a big distinction between owning and referencing.
- References can be passed around and stored, but cannot outlive the object.
(That would be a "dangling pointer" error).
- This is strictly enforced at compile time by the borrow checker.
That explains the model. Once that's understood, all the details can be tied back to those rules.

[1] https://doc.rust-lang.org/book/ch04-01-what-is-ownership.htm...
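A hedged sketch tying those rules to code (identifiers are illustrative, not from the book):

```rust
use std::rc::Rc;

fn strlen(s: &str) -> usize {
    s.len() // a borrow: reads the data without taking ownership
}

fn main() {
    // Exactly one owner: ownership of the String moves from `s` to `t`.
    let s = String::from("hello");
    let t = s; // using `s` after this line would not compile

    // Borrowing: a reference grants temporary access; the owner stays `t`.
    assert_eq!(strlen(&t), 5);

    // Multiple ownership: the real owner is a reference-counted cell.
    let shared = Rc::new(String::from("shared"));
    let also = Rc::clone(&shared); // clones the cell, not the String
    assert_eq!(Rc::strong_count(&also), 2);
} // owners go out of scope here, and the values they own are dropped
```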
But if you come from Javascript or Python or Go, where all this is automated, it's very strange.
All the jargon definitely distracted me from grasping that simple core concept.
Rust also has the “single mutable reference” rule. If you have a mutable reference to a variable, you can be sure nobody else has one at the same time, and that the value won’t be mutated out from under you.
Mechanically, every variable can be in one of 3 modes:
1. Directly editable (x = 5)
2. Have a single mutable reference (let y = &mut x)
3. Have an arbitrary number of immutable references (let y = &x; let z = &x).
The compiler can always tell which mode any particular variable is in, so it can prove you aren’t violating this constraint.
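The three modes fit in a few lines (thanks to non-lexical lifetimes, each borrow ends at its last use, which is what lets these coexist in sequence):

```rust
fn demo_modes() -> i32 {
    let mut x = 5;

    // Mode 1: directly editable.
    x = 6;

    // Mode 2: a single mutable reference; `x` itself is untouchable meanwhile.
    let y = &mut x;
    *y += 1;

    // Mode 3: any number of immutable references; no mutation allowed.
    let a = &x;
    let b = &x;
    // *a += 1;        // would not compile: cannot mutate through `&`
    // let c = &mut x; // would not compile while `a` and `b` are live
    *a + *b
}

fn main() {
    assert_eq!(demo_modes(), 14); // x ended up as 7; 7 + 7 = 14
}
```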
If you think in terms of C, the “single mutable reference” rule is rust’s way to make sure it can slap noalias on every variable in your program.
This is something that would be great to see in rust IDEs. Wherever my cursor is, it’d be nice to color code all variables in scope based on what mode they’re in at that point in time.
C++ does that too with RAII. Go ahead and use whatever STL containers you like, emplace objects into them, and everything will be safely single-owned with you never having to manually new or delete any of it.
The difference is that C++'s guarantees in this regard derive from a) a bunch of implementation magic that exists to hide the fact that those supposedly stack-allocated containers are in fact allocating heap objects behind your back, and b) you cooperating with the restrictions given in the API docs, agreeing not to hold pointers to the member objects or do weird things with casting. You can use scoped_ptr/unique_ptr but the whole time you'll be painfully aware of how it's been bolted onto the language later and whenever you want you can call get() on it for the "raw" underlying pointer and use it to shoot yourself in the foot.
Rust formalizes this protection and puts it into the compiler so that you're prevented from doing it "wrong".
It's really too bad rust went the RAII route.
Frankly most of the complexity you're complaining about stems from attempts to specify exactly what magic the borrow checker can prove correct and which incantations it can't.
I'm not acquainted with Rust, so I don't really know, but I wonder if the wording plays a role in the difficulty of concept acquisition here. Analogies are often double edged tools.
Maybe sticking to a more straight memory related vocabulary as an alternative presentation perspective might help?
From that angle, it indeed doesn’t seem to make sense.
I think, but might be completely wrong, that viewing these actions from their usual meaning is more helpful: you own a toy, it’s yours to do with as you please. You borrow a toy, it’s not yours; you can’t do whatever you want with it, so you can’t hold on to it if the owner doesn’t allow it, and you can’t modify it for the same reasons.
Ownership is easy, borrowing is easy, what makes the language super hard to learn is that functions must have signatures and uses that together prove that references don't outlive the object.
Also: it's better not to store a referenced object in a type unless it's really, really needed, as it makes the proof much, much more complex.
Bonus: do it with no heap allocation. This actually makes it easier because you basically don’t deal with lifetimes. You just have a state object that you pass to your input system, then your guest cpu system, then your renderer, and repeat.
And I mean… look just how incredibly well a match expression works for opcode handling: https://github.com/ablakey/chip8/blob/15ce094a1d9de314862abb...
My second (and final) rust project was a gameboy emulator that basically worked the same way.
But one of the best things about learning by writing an emulator is that there’s enough repetition you begin looking for abstractions and learn about macros and such, all out of self discovery and necessity.
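For a flavor of that style, a hedged sketch (not the linked project's actual code) of CHIP-8-style opcode dispatch with a match on the masked opcode:

```rust
// Decode and execute a CHIP-8 opcode against the V registers.
// Only two opcode families shown; real emulators match many more arms.
fn execute(opcode: u16, v: &mut [u8; 16]) {
    let x = ((opcode & 0x0F00) >> 8) as usize; // register index nibble
    let nn = (opcode & 0x00FF) as u8;          // immediate byte
    match opcode & 0xF000 {
        0x6000 => v[x] = nn,                    // 6XNN: set VX = NN
        0x7000 => v[x] = v[x].wrapping_add(nn), // 7XNN: add NN to VX
        _ => { /* remaining opcode families elided */ }
    }
}

fn main() {
    let mut v = [0u8; 16];
    execute(0x6A05, &mut v); // set VA = 0x05
    execute(0x7A01, &mut v); // VA += 1
    assert_eq!(v[0xA], 6);
}
```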
If you’re going to write an emulator in this style, why even use an imperative language when something like Haskell is designed for this sort of thing?
It has a built in coach: the borrow checker!
Borrow checker wouldn't get off my damn case - errors after errors - so I gave in. I allowed it to teach me - compile error by compile error - the proper way to do a threadsafe shared-memory ringbuffer. I was convinced I knew. I didn't. C and C++ lack ownership semantics so their compilers can't coach you.
Everyone should learn Rust. You never know what you'll discover about yourself.
It's an abstraction and convenience to avoid fiddling with registers and memory and the like at the lowest level.
Everyone might enjoy their computation platform of their choice in their own way. No need to require one way nor another. You might feel all fired up about a particular high level language that you think abstracts and deploys in a way you think is right. Not everyone does.
You don't need a programming language to discover yourself. If you become fixated on a particular language or paradigm then there is a good chance you have lost sight of how to deal with what needs dealing with.
You are simply stroking your tools, instead of using them properly.
My gut feeling says that there's a fair bit of Stockholm Syndrome involved in the attachments people form with Rust.
You could see similar behavioral issues with C++ back in the days, but Rust takes it to another level.
I think that it's happened to some degree for almost every programming language for a while now - first it was the C guys enamoured with their NOT Pascal/Fortran/ASM, then came the C++ guys, then Java, Perl, PHP, Python, Ruby, Javascript/Node, Go, and now Rust.
The vibe coding people seem to be the ones that are usurping Rust's fan boi noise at the moment - every other blog is telling people how great the tool is, or how terrible it is.
I don’t specifically like Rust itself. And one doesn’t need a programming language to discover themselves.
My experience learning Rust has been that it imposes enough constraints to teach me important lessons about correctness. Lots of people can learn more about correctness!
I’ll concede- “everyone” was too strong; I erred on the side of overly provocative.
I know this feels like a positive vibe post and I don’t want to yuck anyone’s yum, but speaking for myself when someone tells me “everyone should” do anything, alarm bells sound off in my mind, especially when it comes to programming languages.
Side note: Stack allocation is faster to execute as there's a higher probability of it being cached.
Here is a free book for a C++ to Rust explanation. https://vnduongthanhtung.gitbooks.io/migrate-from-c-to-rust/...
Why RAII then?
> C++ to Rust explanation
I've seen this one. It is very newbie oriented, filled with trivial examples and doesn't even have Rust refs to C++ smart pointers comparison table.
>Why RAII then?
Their quote is probably better rephrased as _being explicit and making the programmer make decisions when the compiler's decision might impact safety_
Implicit conversion between primitives may impact the safety of your application. Implicit memory management and initialization is something the compiler can do safely and is central to Rust's safety story.
However, for high-performance systems software specifically, objects often have intrinsically ambiguous ownership and lifetimes that are only resolvable at runtime. Rust has a pretty rigid view of such things. In these cases C++ is much more ergonomic because objects with these properties are essentially outside the Rust model.
In my own mental model, Rust is what Java maybe should have been. It makes too many compromises for low-level systems code such that it has poor ergonomics for that use case.
What is the evidence for this? Plenty of high-performance systems software (browsers, kernels, web servers, you name it) has been written in Rust. Also Rust does support runtime borrow-checking with Rc<RefCell<_>>. It's just less ergonomic than references, but it works just fine.
The near impossibility of building a competitive high-performance I/O scheduler in safe Rust is almost a trope at this point in serious performance-engineering circles.
To be clear, C++ is not exactly comfortable with this either but it acknowledges that these cases exist and provides tools to manage it. Rust, not so much.
A trivial example is multiplication of large square matrices. An implementation needs to leverage all available CPU cores, and a traditional way to do that, found in many BLAS libraries, is to compute different tiles of the output matrix on different CPU cores. A tile is not a contiguous slice of memory; it's a rectangular segment of a dense 2D array. Writing different tiles of the same matrix in parallel is trivial in C++, very hard in Rust.
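For contrast, parallel writes to contiguous row bands are easy in safe Rust via `chunks_mut` and scoped threads; it's precisely the non-contiguous tiles that this pattern can't express without unsafe code or an external crate. An illustrative sketch (a naive kernel, not BLAS-quality):

```rust
use std::thread;

// Parallel matmul over *rows* of the output: chunks_mut hands each thread a
// disjoint contiguous &mut slice, which the borrow checker accepts.
// A rectangular tile is not contiguous, so it can't be sliced out this way.
fn matmul_rows(a: &[f64], b: &[f64], c: &mut [f64], n: usize) {
    thread::scope(|s| {
        for (i, row) in c.chunks_mut(n).enumerate() {
            s.spawn(move || {
                for j in 0..n {
                    row[j] = (0..n).map(|k| a[i * n + k] * b[k * n + j]).sum();
                }
            });
        }
    });
}

fn main() {
    // [[1,2],[3,4]] x [[5,6],[7,8]] = [[19,22],[43,50]]
    let a = [1.0, 2.0, 3.0, 4.0];
    let b = [5.0, 6.0, 7.0, 8.0];
    let mut c = [0.0; 4];
    matmul_rows(&a, &b, &mut c, 2);
    assert_eq!(c, [19.0, 22.0, 43.0, 50.0]);
}
```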
Most of my applications are written in C#.
C# provides memory safety guarantees very comparable to Rust, other safety guarantees are better (an example is compiler option to convert integer overflows into runtime exceptions), is a higher level language, great and feature-rich standard library, even large projects compile in a few seconds, usable async IO, good quality GUI frameworks… Replacing C# with Rust would not be a benefit.
Thankfully C# has mostly caught up with those languages, being the other language I enjoy using.
After that, it's the usual human factor in programming language adoption.
The compiler knows the returned reference must be tied to one of the incoming references (since you cannot return a reference to something created within the function, and all inputs are references, the output must therefore be referencing the input). But the compiler can’t know which reference the result comes from unless you tell it.
Theoretically it could tell by introspecting the function body, but the compiler only works on signatures, so the annotation must be added to the function signature to let it determine the expected lifetime of the returned reference.
Note that this is an intentional choice rather than a limitation, because if the compiler analyzed the function body to determine lifetimes of parameters and return values, then changing the body of a function could be a non-obvious breaking API change. If lifetimes are only dependent on the signature, then it's explicit what promises you are or are not making to callers of a function about object lifetimes, and changing those promises must be done intentionally by changing the signature rather than implicitly.
This. Many trivial changes would break the API. That's not ideal for library developers.
You can argue it is broken already, but this forces the breakage onto every API caller, not just some broken callers.
To make a compiler automatically handle all cases like that, you would need extensive static analysis, which would make compiling take forever.
Maybe autofix as we type, or autofix on saving the document / advancing to the next line.
If `longest` were defined without explicit lifetimes, it would be treated as if the lifetime of the return value were the same as that of the first argument. That's the "lifetime elision" rule, which lets you skip writing lifetimes explicitly in most cases.
But `longest` can also return the second reference. With the lifetimes added, the function signature says exactly that: the lifetime of the return value is the minimum of the lifetimes of the arguments, not the lifetime of the first one.
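For concreteness, a sketch of the `longest` example (as in the Rust book) with the explicit lifetime annotation:

```rust
// The annotated signature says: the returned reference is valid only as long
// as the shorter-lived ('a) of the two input borrows.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() > y.len() { x } else { y }
}

fn main() {
    let s1 = String::from("longer string");
    {
        let s2 = String::from("short");
        let result = longest(s1.as_str(), s2.as_str());
        // `result` is usable only while BOTH s1 and s2 are alive.
        assert_eq!(result, "longer string");
    }
    // A `result` escaping that inner scope would not compile: the compiler
    // can't prove it doesn't borrow from the already-dropped `s2`.
}
```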
Disclaimer: I haven't taken the time to learn Rust so maybe don't take this too seriously..
Note I’m not being critical of the author here. I think it’s lovely to turn your passion into trying to help others learn.
I think it is a very good example of why "design by committee" is good. The "Rust Committee" has done a fantastic job
Thank you
They say a camel is a horse designed by a committee (https://en.wiktionary.org/wiki/a_camel_is_a_horse_designed_b...)
Yes:
* Goes twice as far as a horse
* On half the food and a quarter the water of a horse
* Carries twice as much as a horse
Yes, I like design by committee. I have been on some very good, and some very bad committees, but there is nothing like the power of a good committee
Thank you Rust!
I’m not sure “it doesn’t have enough features” has ever been anyone’s complaint about C++.
I find it relatively simple. Much simpler than C++ (obviously). For someone who can write C++ and has some experience with OCaml/Haskell/F#, it's not a hard language.
Complex is the wrong word. Baffling is a better word. Or counterintuitive, or cumbersome. If “easy enough for someone with experience in C++, OCaml, Haskell, and F#” were the same thing as “not hard” then I don’t think this debate would come up so frequently.
Would rather have that than all the issues that JavaScript or any other weakly typed and dynamically typed language.
Before Rust I was hearing the same argument from Haskell or Scala developers trying to justify their language of choice.
I know Rust is here to stay, but I think it’s mostly because it has a viable ecosystem and quality developer tools. Its popularity is _in spite of_ many of its language features that trade that extra 1% of safety for 90% extra learning curve.
I remember both MS and goog giving talks where something in the range of 50% of real-world safety issues were caused by things that safe Rust doesn't allow (use after free, dangling pointers, double free, etc). The fact that even goog uses it, while also developing go (another great language with great practical applications), is telling imo.
I'm trying to phrase this as delicately as I can but I am really puzzled.
If someone wrote an article about how playing the harp is difficult, just stick with it... would you also say that playing the harp is a terrible hobby?
I started to learn Rust, but I was put off by the heavy restrictions the language imposes and the attitude that this is the only safe way. There's a lack of acknowledgement, at least in beginner materials, that by choosing to write safe Rust you're sacrificing many perfectly good patterns that the compiler can't understand in exchange for safety. Eventually I decided to stop because I didn't like that tradeoff (and I didn't need it for my job or anything)
why
That doesn't mean you should though. Imagine how much energy is being wasted globally on bad Python code... The difference is of course that anyone can write it, and not everyone can write Rust.

I'm not personally a big fan of Rust, I'd choose Zig any day of the week... but then I'd also choose C over C++, and I frankly do when I optimise Python code that falls in those last 5%. From that perspective... of someone who really has to understand how Python works under the hood and when to do what, I'd argue that Rust is a much easier language to learn with a lot less "design smell".

I suppose Python isn't the greatest example, as even those of us who love it know that it's a horrible language. But I think it has quite clearly become the language of "everyone", and even more so in the age of LLMs. Since our AI friends will not write optimised Python unless you specifically tell them to use things like generators and where to use them, and since you (not you personally) won't, because you've never heard about a generator before, our AI overlords won't actually help.
The problem with articles like this is that they don't really get to the heart of the problem:
There are programs that Rust will simply not let you write.
Rust has good reasons for this. However, this is fundamentally different from practically every programming language that people have likely used before where you can write the most egregious glop and get it to compile and sometimes even kinda-sorta run. You, as a programmer, have to make peace with not being able to write certain types of programs, or Rust is not your huckleberry.
Weather the ferocious storm
You will find, true bliss
At that point you might as well be writing Java or Go or whatever though. GC runtimes tend actually to be significantly faster for this kind of code, since they can avoid all those copies by sharing the underlying resource. By the same logic, you can always refactor the performance-critical stuff via your FFI of choice.
Yes the borrow checker is central to Rust, but there are other features to the language that people _also_ need to learn and explore to be productive. Some of these features may attract them to Rust (like pattern matching / traits / etc.)
In my experience, hobbyist Rust projects end up using unwrap and panic all over the place, and it’s a giant mess that nobody will ever refactor.
Cloning small objects is lightning fast, turns out in a lot of these cases it makes sense to just do the clone, especially when it's a first pass. The nice thing is that at least rust makes you explicitly clone() so you're aware when it's happening, vs other languages where it's easy to lose track of what is and isn't costing you memory. So you can see that it's happening, you can reason about it, and once the bones of the algorithm are in place, you can say "okay, yes, this is what should ultimately own this data, and here's the path it's going to take to get there, and these other usages will be references or clones."
It's really not, it's the way python works. Heap allocations are "fast" on modern CPUs that are too fast to measure for most stuff, but they're much (much) slower than the function call and code you're going to use to operate on whatever the thing it was you cloned.
Code that needs memory safety and can handle performance requirements like this has many options for source language, almost none of which require blog posts to "flatten the learning curve".
(And to repeat: it's much slower than a GC which doesn't have to make the clone at all. Writing Rust that is "Slower Than Java" is IMHO completely missing the point. Java is boring as dirt, but super easy!)
After all this ordeal, I can confidently say that learning Rust was one of the best decisions I’ve made in my programming career. Declaring types, structs, and enums beforehand, then writing functions to work with immutable data and pattern matching, has become the approach I apply even when coding in other languages.
It's so hard for me to take Rust seriously when I have to find out the answers to unintuitive questions like this
I have a hard time understanding why people have such a hard time accepting that you need to convert between different text representations when it's perfectly accepted for numbers.
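A small sketch of that symmetry (identifiers are illustrative): text conversions are explicit in the same way numeric ones are.

```rust
// &str -> bytes -> String is an explicit, fallible conversion,
// much like a checked numeric conversion would be.
fn roundtrip(s: &str) -> String {
    String::from_utf8(s.as_bytes().to_vec()).unwrap()
}

fn main() {
    // Numbers: nobody objects to explicit conversions here.
    let n: i32 = 42;
    let f = n as f64;
    assert_eq!(f, 42.0);

    // Text: same idea, several representations, explicit conversions between them.
    let owned: String = String::from("héllo");
    let borrowed: &str = &owned; // cheap view, no copy
    assert_eq!(roundtrip(borrowed), owned);
}
```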
dmitrygr•7h ago
Why would I pair-program with someone who doesn’t understand doubly-linked lists?
dwattttt•7h ago
mre•6h ago
It is doable, just not as easy as in other languages: a production-grade linked list ends up using unsafe, because Rust's ownership model fundamentally conflicts with the doubly-linked structure. Each node in a doubly-linked list needs to point to both its next and previous nodes, but Rust's ownership rules don't easily allow for multiple owners of the same data or circular references.
You can implement one in safe Rust using Rc<RefCell<Node>> (reference counting with interior mutability), but that adds runtime overhead and isn't as performant. Or you can use raw pointers with unsafe code, which is what most production implementations do, including the standard library's LinkedList.
https://rust-unofficial.github.io/too-many-lists/
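A minimal safe-Rust sketch of such a node (illustrative, not production code): strong `Rc` pointers forward, `Weak` pointers back, so the cycle doesn't leak.

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Strong pointer forward, weak pointer backward: breaking the strong cycle
// is what keeps the reference counts able to reach zero.
struct Node {
    value: i32,
    next: Option<Rc<RefCell<Node>>>,
    prev: Option<Weak<RefCell<Node>>>,
}

fn new_node(value: i32) -> Rc<RefCell<Node>> {
    Rc::new(RefCell::new(Node { value, next: None, prev: None }))
}

fn link(first: &Rc<RefCell<Node>>, second: &Rc<RefCell<Node>>) {
    first.borrow_mut().next = Some(Rc::clone(second));
    second.borrow_mut().prev = Some(Rc::downgrade(first));
}

fn main() {
    let (a, b) = (new_node(1), new_node(2));
    link(&a, &b);

    // Walk forward, then backward through the weak link.
    assert_eq!(a.borrow().next.as_ref().unwrap().borrow().value, 2);
    let back = b.borrow().prev.as_ref().unwrap().upgrade().unwrap();
    assert_eq!(back.borrow().value, 1);
}
```

The `RefCell` moves the borrow rules to runtime, which is the overhead the comment mentions.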
Animats•6h ago
I've discussed this with some of the Rust devs. The trouble is traits. You'd need to know if a trait function could borrow one of its parameters, or something referenced by one of its parameters. This requires analysis that can't be done until after generics have been expanded. Or a lot more attributes on trait parameters. This is a lot of heavy machinery to solve a minor problem.
umanwizard•4h ago
In practice, it really doesn't. The difficulty of implementing doubly linked lists has not stopped people from productively writing millions of lines of Rust in the real world. Most programmers spend less than 0.1% of their time reimplementing linked data structures; rust is pretty useful for the other 99.9%.
Animats•2h ago
bigstrat2003•1h ago
It has one: use raw pointers and unsafe. People are way too afraid of unsafe, it's there specifically to be used when needed.
worik•3h ago
Stop!
If you are using a doubly linked list you (probably) do not have to, or want to.
There is almost no case where you need to traverse a list in both directions (do you want a tree?)
A doubly linked list wastes memory with the back links that you do not need.
A singly linked list is trivial to reason about: There is this node and the rest. A doubly linked list more than doubles that cognitive load.
Think! Spend time carefully reasoning about the data structures you are using. You will not need that complicated, wasteful, doubly linked list
dmitrygr•3h ago
But you might need to remove a given element that you have a pointer to in O(1), which a singly linked list will not do
dwattttt•3h ago
Whether it's more efficient to carry a second pointer around when manipulating the list, or store a second pointer in every list node (aka double linked list) is up to your problem space.
Or whether an O(n) removal is acceptable.
MeetingsBrowser•3h ago
Linked lists are perfect for inserting/deleting nodes, as long as you never need to traverse the list or access any specific node.
sbrother•3h ago
khuey•2h ago
pornel•6h ago
Trying to construct permanent data structures using non-owning references is a very common novice mistake in Rust. It's similar to how users coming from GC languages may expect pointers to local variables to stay valid forever, even after leaving the scope/function.
Just like in C you need to know when malloc is necessary, in Rust you need to know when self-contained/owning types are necessary.
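A hedged sketch of that distinction (the types are my own invention): a borrowing view versus a self-contained owning type.

```rust
// A borrowing "view" type: cheap, but tied to the source buffer's lifetime.
struct UserView<'a> {
    name: &'a str,
}

// A self-contained owning type: can be stored and returned freely.
struct User {
    name: String,
}

fn promote(view: &UserView<'_>) -> User {
    User { name: view.name.to_owned() } // copy out before the buffer dies
}

fn main() {
    let owned: User;
    {
        let buffer = String::from("alice");
        let view = UserView { name: &buffer }; // fine inside this scope
        owned = promote(&view);
    } // `buffer` dropped here; a surviving `UserView` would not compile
    assert_eq!(owned.name, "alice");
}
```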
mplanchard•5h ago
An example: parsing a cookie header to get cookie names and values.
In that case, I settled on storing indexes indicating the ranges of each key and value instead of string slices, but it’s obviously a bit more error prone and hard to read. Benchmarking showed this to be almost twice as fast as cloning the values out into owned strings, so it was worth it, given it is in a hot path.
I do wish it were easier though. I know there are ways around this with Pin, but it’s very confusing IMO, and still you have to work with pointers rather than just having a &str.
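A simplified sketch of the index-range approach (not the commenter's actual code, and nowhere near a spec-complete cookie parser):

```rust
use std::ops::Range;

// Store ranges into the original header instead of owned Strings,
// avoiding a per-cookie allocation on the hot path.
struct CookiePair {
    name: Range<usize>,
    value: Range<usize>,
}

fn parse_cookies(header: &str) -> Vec<CookiePair> {
    let mut out = Vec::new();
    let mut offset = 0;
    for part in header.split("; ") {
        if let Some(eq) = part.find('=') {
            out.push(CookiePair {
                name: offset..offset + eq,
                value: offset + eq + 1..offset + part.len(),
            });
        }
        offset += part.len() + 2; // skip past the "; " separator
    }
    out
}

fn main() {
    let header = "session=abc123; theme=dark";
    let cookies = parse_cookies(header);
    assert_eq!(&header[cookies[0].name.clone()], "session");
    assert_eq!(&header[cookies[0].value.clone()], "abc123");
    assert_eq!(&header[cookies[1].value.clone()], "dark");
}
```

The ranges stay valid because they index the caller's buffer, so the struct itself carries no lifetime parameter; the trade-off is exactly the readability cost mentioned above.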
lmm•4h ago
Ar-Curunir•3h ago