The words of every C programmer who created a CVE.
Also, https://github.com/ghostty-org/ghostty/issues?q=segfault
All jokes aside, it doesn’t actually take much discipline to write a small utility that stays memory safe. If you keep allocations simple, check your returns, and clean up properly, you can avoid most pitfalls. The real challenge shows up when the code grows, when inputs are hostile, or when the software has to run for years under every possible edge case. That’s where “just be careful” stops working, and why tools, fuzzing, and safer languages exist.
- Every C programmer I've talked to
No it's not; if it were that easy, C wouldn't have this many memory-related issues...
Avoiding all memory management mistakes is not easy, and the chance of disaster grows exponentially as the codebase gets bigger.
C and Zig aren't the same. I would wager that syntax differences between languages can help you see things in one language that are much harder to see in another. I'm not saying that Zig or C are good or bad for this, or that one is better than the other in terms of the ease of seeing memory problems with your eyes, I'm just saying that I would bet that there's some syntax that could be employed which make memory usage much more clear to the developer, instead of requiring that the developer keep track of these things in their mind.
Even something where you must manually annotate each function so that a metaprogram running at compile time can check that nothing is out of place could help detect memory leaks, I would think. Or something; that's just an idea. There's a whole world of metaprogramming possibilities here that Zig allows and C simply doesn't. I think there's a lot of room for tooling like this to detect problems without forcing you to contort yourself into strange shapes simply to make the compiler happy.
Probably both. They're words of hubris.
C and Zig give the appearance of practicality because they allow you to take shortcuts under the assumption that you know what you're doing, whereas Rust does not; it forces you to confront the edge cases in terms of ownership and provenance and lifetime and even some aspects of concurrency right away, and won't compile until you've handled them all.
And it's VERY frustrating when you're first starting because it can feel so needlessly bureaucratic.
But then after a while it clicks: Ownership is HARD. Lifetimes are HARD. And suddenly, when going back to C and friends, you find yourself thinking about these things at the design phase rather than at the debugging phase - and you write better, safer code because of it.
And then when you go back to Rust again, you breathe a sigh of relief because you know that these insidious things are impossible to screw up.
On both your average days and your bad days.
Over the 40 to 50 years that your career lasts.
I guess those kind of developers exist, but I know that I'm not one of them.
The question is, then, what price in language complexity are you willing to pay to completely avoid the 8th most dangerous cause of vulnerabilities as opposed to reducing them but not eliminating them? Zig makes it easier to find UAF than in C, and not only that, but the danger of UAF exploitability can be reduced even further in the general case rather easily (https://www.cl.cam.ac.uk/~tmj32/papers/docs/ainsworth20-sp.p...). So it is certainly true that memory unsafety is a cause of dangerous vulnerabilities, but it is the spatial unsafety that's the dominant factor here, and Zig eliminates that. So if you believe (rightly, IMO) that a language should make sure to reduce common causes of dangerous vulnerabilities (as long as the price is right), then Zig does exactly that!
I don't think it's unreasonable to find the cost of Rust justified to eliminate the 8th most dangerous cause of vulnerabilities, but I think it's also not unreasonable to prefer not to pay it.
[1]: https://cwe.mitre.org/top25/archive/2024/2024_cwe_top25.html
Second, I don't care if my bank card details leak because of CSRF or because of a bug in Chromium. Now, to be fair, the list of dangerous vulnerabilities weighs things by number of incidents and not by number of users affected, and it is certainly true that more people use Chrome than those who use a particular website vulnerable to CSRF. But things aren't so simple, there, too. For example, I work on the JVM, which is largely written in C++, and I can guarantee that many more people are affected by non-memory-safety vulnerabilities in Java programs than by memory-safety vulnerabilities in the JVM.
Anyway, the point is that the overall danger and incidence of vulnerabilities - and therefore the justified cost in addressing all the different factors involved - is much more complicated than "memory unsafety bad". Yes, it's bad, but different kinds of memory unsafety are bad to different degrees, and the badness can be controlled separately from the cause.
Now, I think it's obvious that even Rust fans understand there's a complex cost/benefit game here, because most software today is already written in memory-safe languages, and the very reason someone would want to use a language like Rust in the first place is because they recognise that sometimes the cost of other memory-safe languages isn't worth it, despite the importance of memory safety. If both spatial and temporal safety were always justified at any reasonable cost (that is happily paid by most software already), then there would be no reason for Rust to exist. Once you recognise that, you have to also recognise that what Rust offers must be subject to the same cost/benefit analysis that is used to justify it in the first place. And it shouldn't be surprising that the outcome would be similar: sometimes the cost may be justified, sometimes it may not be.
Languages like Modula-3 or Oberon would have taken over the world of systems programming.
Unfortunately there are too many non-believers for systems programming languages with automatic resource management to take off as they should.
Despite everything, kudos to Apple for pushing Swift no matter what, as it seems to be the only way for adoption.
Or those languages had other (possibly unrelated) problems that made them less attractive.
I think that in a high-economic-value, competitive activity such as software, it is tenuous to claim that something delivers a significant positive gain and at the same time that that gain is discarded for irrational reasons. I think at least one of these is likely to be false, i.e. either the gain wasn't so substantial or there were other, rational reasons to reject it.
Rust has a strict model that effectively prevents certain kinds of logic errors/bugs. So that's good (if you don't mind the price). But it doesn't address all kinds of other logic errors/bugs. It's like closing one door to the barn while six more are still wide open.
I see rust as an incremental improvement over C, which comes at quite a hefty price. Something like zig is also an incremental improvement over C, which also comes at a price, but it looks like a significantly smaller one.
(Anyway, I'm not sure zig is even the right comp for rust. There are various languages that provide memory safety, if that's your priority, which also generally allow dropping into "unsafe" -- typically C -- where performance is needed.)
I feel like I am most interested in Nim, given how easy it was to pick up and how interoperable it is with C. It has a garbage collector and you can change it, which seems great for someone like me who doesn't want to worry about manual memory management right now; if it becomes a bottleneck later, I can at least fix it without worrying too much...
Out of all of them, from what little I know and my very superficial knowledge, Odin seems the most appealing to me. Its primary use case, from what I know, is game development, and I feel like that could easily pivot into native desktop application development. I was tempted to make a couple of those in Odin in the past but never found the time.
Nim, I like the concept and the idea of, but the Python-like syntax just irks me. Haha, I can't seem to get into languages where indentation replaces brackets.
But the GC part of it is pretty neat. Have you checked out Go yet?
I haven't really looked into odin except joining their discord and asking them some questions.
It seems that, aside from some similar surface syntax, it is sort of different from Golang under the hood, as compared to V-lang, which is massively inspired by Golang.
After reading the HN post about SQLite which recommended using SQLite as an odt or some alternative, which I agreed with, I thought of creating an app in Flutter similar to LocalSend. Except Flutter only supports C-esque interop, and it would've been weird to take Golang, pass it through C and then through Flutter or something, so I gave up...
I thought that Odin could compile to C and I could use that, but it turns out that Odin doesn't really compile to C, as compared to Nim and V-lang, which do compile to C.
I think that Nim and V-lang are the best ways to write an app like that with Flutter, though. I am now somewhat curious what you guys think would be the best way of writing highly portable apps, personally with a dev-ex similar to Golang...
I have actually thought about using something like godot for this project too and seeing if godot supports something like golang or typescript or anything really. Idk I was just messing around and having a bit of fun lol i think.
But I like Nim in the sense that with Golang I sometimes feel that I can't change its GC, although I do know that for most things that wouldn't be a deal-breaker.
But still, I sometimes feel like I should have some freedom to add memory management later without restarting from scratch or something, y'know?
Golang is absolutely goated. This was why I also recommended V-lang; V-lang is really similar to Golang except it can have manual memory management...
They themselves say on the website that, IIRC, if you know Golang, you know 70% of V-lang.
I genuinely prefer Golang over everything, but I still like Nim/V-lang too as fun languages. I feel like their ecosystems aren't that good, even though I know that, yes, they can interop with C, but still...
We don't need yet another language with manual memory management in the 21st century, and V doesn't look like it would ever be that relevant.
I think people prefer what's familiar to them, and Swift definitely looks closer to existing C++ to me, and I believe has multiple people from the C++ WG working on it now as well, supposedly after getting fed up with the lack of language progress on C++.
The most recent versions gained a lot in the way of cross-platform availability, but the lack of a native UI framework and its association with Apple seem to put off a lot of people from even trying it.
I wish it was a lot more popular outside of the Apple ecosystem.
https://docs.swift.org/swift-book/documentation/the-swift-pr...
Edits mine.
I like to keep the spacetime topologies complete.
Constant = time atom of value.
Register = time sequence of values.
Stack = time hierarchy of values.
Heap = time graph of values.
Seasoned Rust coders don’t spend time fighting the borrow checker - their code is already written in a way that just works. Once you’ve been using Rust for a while, you don’t have to “restructure” your code to please the borrow checker, because you’ve already thought about “oh, these two variables need to be mutated concurrently, so I’ll store them separately”.
The “object soup” is a particular approach that won’t work well in Rust, but it’s not a fundamentally easier approach than the alternatives, outside of familiarity.
My experience is that what makes your statement true is that _seasoned_ Rust developers just sprinkle `Arc` all over the place, thus effectively switching to automatic garbage collection. Because 1) statically checked memory management is too restrictive for most kinds of non-trivial data structures, and 2) the lifetime hoops you have to jump through to please the static checker whenever you start doing anything non-trivial are just above human comprehension level.
Do you tend to use a lot of Arenas?
The first is a fairly generic input -> transform -> output. This is your generic request handler for instance. You receive a payload, run some transform on that (and maybe a DB request) and then produce a response.
In this model, Arc is very fitting for some shared (im)mutable state. Like DB connections, configuration and so on.
The second pattern is something like: state + input -> transform -> new state. E.g. you're mutating your app state based on some input. This fits stuff like games, but also retained UIs, programming language interpreters and so on.
Using Arcs here muddles the ownership. The gamedev ecosystem has found a way to manage this by employing ECS, and while it can be overkill, the base DOD principles can still be very helpful.
Treat your data as what it is; data. Use indices/keys instead of pointers to represent relations. Keep it simple.
Arenas can definitely be a part of that solution.
Even then, while I'd agree that Arc is used in lots of places in work-stealing runtimes, I disagree that it's used everywhere, or that you can really do anything else if you want to leverage all your cores with minimum effort without building your application specially to deal with that.
I don't care that they have a good work-stealing event loop, I care that it's the default and their APIs all expect the work-stealing implementation and unnecessarily constrain cases where you don't use that implementation. It's frustrating and I go out of my way to avoid Tokio because of it.
Edit: the issues are in Axum, not the core Tokio API. Other libs have this problem too due to aforementioned defaults.
At $dayjob we have built a large codebase (high-throughput message broker) using the thread-per-core model with tokio (ie one worker thread per CPU, pinned to that CPU, driving a single-threaded tokio Runtime) and have not had any problems. Much of our async code is !Send or !Sync (Rc, RefCell, etc) precisely because we want it to benefit from not needing to run under the default tokio multi-threaded runtime.
We don't use many external libs for async though, which is what seems to be the source of your problems. Mostly just tokio and futures-* crates.
But in this case, the data hiding behind the Arc is almost never mutable. It's typically some shared, read-only information that needs to live until all the concurrent workers are done using it. So this is very easy to reason about: Stick a single chunk of read-only data behind the reference count, and let it get reclaimed when the final worker disappears.
There are some cases where someone new to Rust will try to use Arc as a solution to every problem, but I haven't seen much code like this outside of reviewing very junior Rust developers' code.
In some application architectures Arc is a common feature and it's fine. Saying that seasoned Rust developers rarely use Arc isn't true, because some types of code require shared references with Arc. There is nothing wrong with Arc when used properly.
I think this is less confusing to people who came from modern C++ and understand how modern C++ features like shared_ptr work and when to use them. For people coming from garbage collected languages it's more tempting to reach for the Arc types to try to write code as if it was garbage collected.
That doesn’t mean there aren’t other legitimate use cases, but “all the time” is not representative of the code I read or write, personally.
No, this couldn't be further from the truth.
If you use Rust for web server backend code then yes, you see `Arc`s everywhere. Otherwise their use is pretty rare, even in large projects. Rust is somewhat unique in that regard, because most Rust code that is written is not really a web backend code.
To some extent this is unavoidable. Non-'static lifetimes correspond (roughly) to a location on the program stack. Since a Future that suspends can't reasonably stay on the stack it can't have a lifetime other than 'static. Once it has to be 'static, it can't borrow anything (that's not itself 'static), so you either have to Copy your data or Rc/Arc it. This, btw, is why even tokio's spawn_local has a 'static bound on the Future.
It would be nice if it were ergonomic for library authors to push the decision about whether to use Rc<RefCell<T>> or Arc<Mutex<T>> (which are non-threadsafe and threadsafe variants of the same underlying concept) to the library consumer.
- 151 instances of "Arc<" in Servo: https://github.com/search?q=repo%3Aservo%2Fservo+Arc%3C&type...
- 5 instances of "Arc<" in AWS SDK for Rust https://github.com/search?q=repo%3Arusoto%2Frusoto%20Arc%3C&...
- 0 instances for "Arc<" in LOC https://github.com/search?q=repo%3Acgag%2Floc%20Arc%3C&type=...
- 6 instances of "Rc<" in AWS SDK for Rust: https://github.com/search?q=repo%3Arusoto%2Frusoto+Rc%3C&typ...
- 0 instances of "Rc<" in LOC: https://github.com/search?q=repo%3Acgag%2Floc+Rc%3C&type=cod...
(Disclaimer: I don't know what these repos are except Servo).
Plus the html processing needs to be Arc as well, so that tracks.
Arc isn't really garbage collection. It's a reference-counted smart pointer, like C++'s shared_ptr.
If you drop an Arc and it's the last reference to the underlying object, it gets dropped deterministically.
Garbage collection generally refers to more complex systems that periodically identify and free unused objects in a less deterministic manner.
In C++ land this is very often called garbage collection too.
Large scale teams always get pointer ownership wrong.
Project Zero has enough examples.
No, this is a subset of garbage collection called tracing garbage collection. "Garbage collection" absolutely includes refcounting.
If you need a reference counted garbage collector for more than a tiny minority of your code, then Rust was probably the wrong choice of language - use something that has a better (mark and sweep) garbage collector. Rust is good for places where you can almost always find a single owner, and you can use reference counting for the rare exception.
However, the difference between Arc and a Garbage Collector is that the Arc does the cleanup at a deterministic point (when the last Arc is dropped) whereas a Garbage Collector is a separate thing that comes along and collects garbage later.
> If you need a reference counted garbage collector for more than a tiny minority of your code
The purpose of Arc isn't to have a garbage collector. It's to provide shared ownership.
There is no reason to avoid Rust if you have an architecture that requires shared ownership of something. These reductionist generalizations are not accurate.
I think a lot of new Rust developers are taught that Arc shouldn't be abused, but they internalize it as "Arc is bad and must be avoided", which isn't true.
As a rough approximation, if you're very heavy-handed with Arc then you probably shouldn't be using Rust for that project.
[0] The term "leak" can be a bit hard to pin down, but here I mean something like space which is allocated and which an ordinary developer would prefer to not have allocated.
However, I disagree with generalizations that you can judge the quality of code based on whether or not it uses a lot of Arc. You need to understand the architecture and what's being accomplished.
Reference counting is a valid form of garbage collection. It is arguably the simplest form. https://en.wikipedia.org/wiki/Garbage_collection_(computer_s...
The other forms of GC are tracing followed by either sweeping or copying.
> If you drop an Arc and it's the last reference to the underlying object, it gets dropped deterministically.
Unless you have cycles, in which case the objects are not dropped. And then scanning for cyclic objects almost certainly takes place at a non-deterministic time, or never at all (and the memory is just leaked).
> Garbage collection generally refers to more complex systems that periodically identify and free unused objects in a less deterministic manner.
No. That's like saying "a car is a car; a vehicle is anything other than a car". No, GC encompasses reference counting, and GC can be deterministic or non-deterministic (asynchronous).
I do find myself running into lifetime and borrow-checker issues much less these days when writing larger programs in rust. And while your comment is a bit cheeky, I think it gets at something real.
One of the implicit design mentalities that develops once you write rust for a while is a good understanding of where to apply the `UnsafeCell`-related types, which includes `Arc` but also `Rc` and `RefCell` and `Cell`. These all relate to inner mutability, and there are many situations where plopping in the right one of these effectively resolves some design requirement.
The other idiomatic thing that happens is that you implicitly begin structuring your abstract data layouts in terms of chunks of raw structured data and connections between them. This usually involves an indirection - i.e. you index into an array of things instead of holding a pointer to the thing.
Lastly, where lifetimes do get involved, you tend to have a prior idea of what thing they annotate. The example in the article is a good case study of that. The author is parsing a `.notes` file and building some index of it. The text of the `.notes` file is the obvious lifetime anchor here.
You would write your indexing logic with one lifetime 'src: `fn build_index<'src>(src: &'src str)`
Internally to the indexing code, references to 'src-annotated things can generally be passed around freely, since their lifetimes all converge on 'src.
Externally to the indexing code, you'd build a string of the notes text and pass a reference to that to the `build_index` function.
For simple CLI programs, you tend not to really need anything more than this.
It gets more hairy if you're looking at constructing complex object graphs with complex intermediate state, partial construction of sub-states, etc. Keeping track of state that's valid at some level, while temporarily broken at another level, is where it gets really annoying with multiple nested lifetimes and careful annotation required.
But it was definitely a bit of a hair-pulling journey to get to my state of quasi-peace with Rust's borrow checker.
No true scotsman would ever be confused by the borrow checker.
I've seen plenty of Rust projects, open source and otherwise, that utilise Arc heavily or use clone and/or copy all over the place.
> No true scotsman would ever be confused by the borrow checker.
I'd take that No true scotsman over the "Real C programmers write code without CVE" for $5000.
Also, you are strawmanning the argument. GP said "As a seasoned veteran of Rust you learn to think like the borrow checker", not "Real Rust programmers were born with knowledge of the borrow checker".
They are clearly just saying as you become more proficient with X, Y is less of a problem. Not that if the borrow checker is blocking you that you aren't a real Rust programmer.
Let's say you're trying to get into running. You express that you can't breathe well during the exercise and it's a miserable experience. One of your friends tells you that as an experienced runner they don't encounter that in the same way anymore, and running is thus more enjoyable. Do you start screeching No True Scotsman!! at them? I think not.
My beef is sometimes with the way traits are implemented, or how AWS implemented Errors for their library, which is just pure madness.
Here is one piece of the problem:
    while let Some(page) = object_stream.next().await {
        match page {
            // ListObjectsV2Output
            Ok(p) => {
                if let Some(contents) = p.contents {
                    all_objects.extend(contents);
                }
            }
            // SdkError<ListObjectsV2Error, Response>
            Err(err) => {
                let raw_response = &err.raw_response();
                let service_error = &err.as_service_error();
                error!("ListObjectsV2Error: {:?} {:?}", &service_error, &raw_response);
                return Err(S3Error::Error(format!("ListObjectsV2Error: {:?}", err)));
            }
        }
    }
I would have written it this way:

    while let Some(page) = object_stream.next().await {
        let p: ListObjectsV2Output = page.map_err(|err| {
            // SdkError<ListObjectsV2Error, Response>
            let raw_response = err.raw_response();
            let service_error = err.as_service_error();
            error!("ListObjectsV2Error: {service_error:?} {raw_response:?}");
            S3Error::Error(format!("ListObjectsV2Error: {err:?}"))
        })?;
        if let Some(contents) = p.contents {
            all_objects.extend(contents);
        }
    }
although if your crate defines `S3Error`, then I would prefer to write:

    while let Some(page) = object_stream.next().await {
        if let Some(contents) = page?.contents {
            all_objects.extend(contents);
        }
    }
by implementing `From`:

    impl From<SdkError<ListObjectsV2Error, Response>> for S3Error {
        fn from(err: SdkError<ListObjectsV2Error, Response>) -> S3Error {
            let raw_response = err.raw_response();
            let service_error = err.as_service_error();
            error!("ListObjectsV2Error: {service_error:?} {raw_response:?}");
            S3Error::Error(format!("ListObjectsV2Error: {err:?}"))
        }
    }
I really hope it’s an Rc/Arc that you’re cloning. Just deep cloning the value to get ownership is dangerous when you’re doing it blindly.
I have some issues with Zig's design, especially around the lack of explicit interface/trait, but I agree with the post that it is a more practical language, just because of how much simpler its adoption is.
Yes, they know when to give up.
I like the fact that "fighting the borrow checker" is an idea from the period when the borrowck only understood purely lexical lifetimes. So you had to fight to explain why the thing you wrote, which is obviously correct, is in fact correct.
That was already ancient history by the time I learned Rust in 2021. But this idea that Rust means "fighting the borrow checker" took off anyway, even though the actual thing it's about was solved.
Now for many people it really is a significant adjustment to learn Rust if your background is exclusively say, Python, or C, or Javascript. For me it came very naturally and most people will not have that experience. But even if you're a C programmer who has never had most of this [gestures expansively] before you likely are not often "fighting the borrow checker". That diagnostic saying you can't make a pointer via a spurious mutable reference? Not the borrow checker. The warning about failing to use the result of a function? Not the borrow checker.
Now, "In Rust I had to read all the diagnostics to make my software compile" does sound less heroic than "battling with the borrow checker" but if that's really the situation maybe we need to come up with a braver way to express this.
When I was learning rust (coming from python/java) it certainly felt like a battle because I "knew" the code was logically sound (at least in other languages) but it felt like I had to do all sorts of magic tricks to get it to compile. Since then I've adapted and understand better _why_ the compiler has those rules, but in the beginning it definitely felt like a fight and that the code _should_ work.
Even though Rust can end up with some ugly/crazy code, I love it overall because I can feel pretty safe that I'm not going to create hard-to-find memory errors.
Sure, I can (and do) write code that causes my (rust) app to crash, but so far they've all been super trivial errors to debug and fix.
I haven't tried Zig yet though. Does it give me all the same compile time memory usage guarantees?
"This chair is guaranteed not to collapse out from under you. It might be a little less comfortable and a little heavier, but most athletic people get used to that and don't even notice!"
Let's quote the article:
> I’d say as it currently stands Rust has poor developer ergonomics but produces memory safe software, whereas Zig has good developer ergonomics and allows me to produce memory safe software with a bit of discipline.
The Rust community should be upfront about this tradeoff - it's a universal tradeoff, that is: Safety is less ergonomic. It's true when you ride a skateboard with a helmet on, it's true when you program, it's true for sex.
Instead you see a lot of arguments with anecdotal or indeterminate language. "Most people [that I talk to] don't seem to have much trouble unless they're less experienced."
It's an amazing piece of rhetoric. In one sentence the ergonomic argument has been dismissed by denying subjectivity exists or matters and then implying that those who disagree are stupid.
"a bit of discipline" is doing a lot of work here.
"Just don't write (memory) bugs!" hasn't produced (memory) safe C, and they've been trying for 50yrs. The best practices have been to bolt on analyzers and strict "best practice" standards to enforce what should be part of the language.
You're either writing in Rust, or you're writing in something else + using extra tools to try and achieve the same result as Rust.
[1]: https://github.com/oven-sh/bun/issues?q=is%3Aissue%20state%3...
Sure, there is no borrow checker, but a lot of memory-safety issues in C and C++ come from the lack of good containers with sane interfaces (std::* in C++ is just bad from a memory safety point of view).
If C++ had gained proper sum types, error handling, and Zig-style templates 15 years ago, instead of the insanity that is modern C++, Rust might not exist or might be much more niche at this point.
You can argue that using C or C++ can get you 80% of the way, but most people don't actively think "okay, how do I REALLY mess up this program?" and fix all the various invariants that they forgot to handle. Even worse, this issue is endemic in higher level dynamic languages like Python too. Most people most of the time only think about the happy path.
Rust is not the helmet. It is not a safety net that only gives you a benefit in rare catastrophic events.
Rust is your lane assist. It relieves you from the burden of constant vigilance.
A C or C++ programmer that doesn't feel relief when writing Rust has never acquired the mindset that is required to produce safe, secure and reliable code.
Maybe yours is a more apt analogy, but as a very competent driver I can't tell you how often lane assist has driven me crazy.
If I could simply rely on it in all situations, then it would be fine. It's the death of a thousand cuts each and every time it behaves less than ideally that gets to me, and I've had to turn it off in every single car I've driven that has it.
It is a helmet, just accept it. Helmets are useful.
These modern approaches are not languages that result in constant memory-safety issues like you imply.
Interesting analogy. I love lane assist. When I love it. And hate it when it gets in the way. It can actively jerk the car in weird and surprising ways when presented with things it doesn't cope well with. So I manage when it's active very proactively. Rust of course has unsafe... but... to keep the analogy, that would be like driving in a peer group where everyone was always asking me if I had my lane assist on, where when I arrived at a destination, I was badgered with "did you do the whole drive with lane assist?", and if I didn't, I'd have explained to me the routes and techniques I could have used to arrive at my destination using lane assist the whole way.
Disclaimer, I have only dabbled a little with rust. It is the religion behind and around it that I struggle with, not the borrow checker.
The optimal way to write Python is to have your code properly structured, but you can just puke a bunch of syntax into a .py file and it'll still run. You can experiment with a file that consists entirely of "print('Hello World')" and go from there. Import a json file with `json.load(open(filename))` and boom.
Rust, meanwhile, will not let you do this. It requires you to write a lot of best-practice stuff from the start. Loading a JSON file in a function? That function owns that new data structure, you can't just keep it around. You want to keep it around? Okay, you need to do all this work. What's that? Now you need to specify a lifetime for the variable? What does that mean? How do I do that? What do I decide?
This makes Rust feel much less approachable and I think gives people a worse impression of it at the start when they start being told that they're doing it wrong - even though, from an objective memory-safety perspective, they are, it's still frustrating when you feel as though you have to learn everything to do anything. Especially in the context of the small programs you write when you're learning a language. I don't care about the 'lifetime' of this data structure if the program I'm writing is only going to run for 350ms.
As I've toiled a bit more with Rust on small projects (mine or others') I feel the negative impacts of the language's restrictions far more than I feel the positive impacts, but it is nice to know that my small "download a URL from the internet" tool isn't going to suffer from a memory safety bug and rootkit my laptop because of a maliciously crafted URL. I'm sure it has lots of other bugs waiting to be found, but at least it's not those ones.
The only problem is that the code would be littered with Rc<RefCell<Foo>>. If Rust had a compact notation for that, a lot of the pain related to fighting the borrow checker just to avoid the above would be eliminated.
I honestly don't even know what to respond to that, but it's kind of weird to me to honestly think that you'd need essentially a "PhD" in order to use a tool...
Turns out not wearing that helmet and continuously falling down at the skate park for 40 years has its price.
I'm not sure that that tradeoff is quite so universal. GC'd languages (or even GC'd implementations like Fil-C) are equally or even more memory-safe than Rust but aren't necessarily any less ergonomic. If anything, it's not an uncommon position that GC'd languages are more ergonomic since they don't forbid some useful patterns that are difficult or impossible to express in safe Rust.
That hasn't been my experience at all. At best, the first version of code pops out quickly and cleanly because the author knows the appropriate idiom to choose. Refactoring rust code to handle changes in that allocation idiom is extremely expensive, even for the most seasoned developers.
Case in point:
> Once you’ve been using Rust for a while, you don’t have to “restructure” your code to please the borrow checker, because you’ve already thought about “oh, these two variables need to be mutated concurrently, so I’ll store them separately”.
Which fails to handle "these two variables didn't need to be mutated concurrently, but now they do".
He seems to know what he's doing, from the author's Twitter:
Post something slightly mentioning rust in r/cpp, Rust evangelists show up, post something slightly mentioning rust in r/zig, Rust evangelists show up. How is this not a cult?
zig & rust have a somewhat thin middle area in the venn diagram.
As for the ads: even though it's my site, I'd urge you to turn on an adblocker, Pi-hole, or anything like that. I won't mind.
I have ads on there yes, but since I primarily write tech articles for a target audience of tech people you can imagine that most readers have some sort of adblocker either browser, network or otherwise.
So my grand total monthly income from ads basically covers hosting costs and so on.
Edit: The author seems to be in the community, and I'm mistaken.
> This means that basically the borrow checker can only catch issues at comptime but it will not fix the underlying issue that is developers misunderstanding memory lifetimes or overcomplicated ownership. The compiler can only enforce the rules you’re trying to follow; it can’t teach you good patterns, and it won’t save you from bad design choices.
In the short time that I wrote Rust, it never occurred to me that my lifetime annotations were incorrect. They felt like a bit of a chore, but I thought they said what I meant. I'm sure there's a lot of getting used to it--like static types--and it becomes second nature at some point. Regardless, code that doesn't use unsafe can't have two threads concurrently writing the same memory.
The full title is "Why Zig Feels More Practical Than Rust for Real-World CLI Tools". I don't see why CLI tools are special in any respect. The article does make some good points, but it doesn't invalidate the strength of Rust in preventing CVEs IMO. Rust or Zig may feel certain ways to use for certain people, time and data will tell.
Personally, there isn't much I do that needs the full speed of C/C++, Zig, Rust so there's plenty of GC languages. And when I do contribute to other projects, I don't get to choose the language and would be happy to use Rust, Zig, or C/C++.
Because they don't grow large or need a multi-person team. CLI tools tend to be one & done. In other words, it's saying "Zig, like C, doesn't scale well. Use something else for larger, longer lived codebases."
This really comes across in the article's push that Zig treats you like an adult while Rust is a babysitter. This is not unlike the sentiment for Java back in the day. But the reality is that most codebases don't need to be clever and they do need a babysitter.
Most of those are more memory safe than C. None of them have the borrow checker. This leaves me wondering why - other than proselytizing Zig - this article would make such a direct and narrow comparison between only Zig and Rust.
So is the error handling boilerplate.
Unix system programming in OCaml, from 1991
It's a bit messier than that. Basically the only concurrency-related bug I ever actually want help with from the compiler is memory ordering issues. Rust chose to make those particular racey memory writes safe instead of unsafe.
> Developers are not Idiots
I'm often distracted and AIs are idiots, so a stricter language can keep both me and AIs from doing extra dumb stuff.
> Rust’s borrow checker is a pretty powerful tool that helps ensure memory safety during compile time. It enforces a set of rules that govern how references to data can be used, preventing common programming memory safety errors such as null pointer dereferencing, dangling pointers and so on. However you may have noticed the word compile time in the previous sentence. Now if you got any experience at systems programming you will know that compile time and runtime are two very different things. Basically compile time is when your code is being translated into machine code that the computer can understand, while runtime is when the program is actually running and executing its instructions. The borrow checker operates during compile time, which means that it can only catch memory safety issues that can be determined statically, before the program is actually run.
>
> This means that basically the borrow checker can only catch issues at comptime but it will not fix the underlying issue that is developers misunderstanding memory lifetimes or overcomplicated ownership. The compiler can only enforce the rules you’re trying to follow; it can’t teach you good patterns, and it won’t save you from bad design choices.
This appears to be claiming that Rust's borrow checker is only useful for preventing a subset of memory safety errors, those which can be statically analysed. Implying the existence of a non-trivial quantity of memory safety errors that slip through the net.
> The borrow checker blocks you the moment you try to add a new note while also holding references to the existing ones. Mutability and borrowing collide, lifetimes show up, and suddenly you’re restructuring your code around the compiler instead of the actual problem.
Whereas this is only A Thing because Rust enforces rules so that memory safety errors can be statically analysed and therefore the first problem isn't really a problem. (Of course you can still have memory safety problems if you try hard enough, especially if you start using `unsafe`, but it does go out of its way to "save you from bad design choices" within that context.)
If you don't want that feature, then it's not a benefit. But if you do, it is. The downside is that there will be a proportion of all possible solutions that are almost certainly safe, but will be rejected by the compiler because it can't be 100% sure that it is safe.
The thing I wish we would remember, as developers, is that not all programs need to be so "safe". They really, truly don't. We all grew up loving lots of unsafe software. Star Fox 64, MS Paint, FruityLoops... the sad truth is that developers are so job-pilled and have pager-trauma, so they don't even remember why they got in the game.
I remember reading somewhere that Andrew Kelley wrote Zig because he didn't have a good language to write a DAW in, and I think it's so well suited to stuff like that! Make cool creative software you like in Zig, and people that get hella mad about memory bugs can stay mad.
Meanwhile, everyone knows that memory bugs made super mario world better, not worse.
I am fine with ignoring the problems that rust solves, but not because I'm smart and disciplined. It just fits my use-case of making fast _non-critical_ software. I don't think we should rewrite security and networking stacks in it.
I don't think you need the ritual and complexity that rust brings for small and simple scripts and CLI utilities...
Choose the tool that fits your usecase. You would never bring wasm unity to render a static html file. But if you make a browsergame, you might want to.
    self.last.as_ref().unwrap().borrow().next.as_ref().unwrap().clone()
I know it can be improved but that's what I think of
Yes, safety isn't correctness but if you can't even get safety then how are you supposed to get correctness?
For small apps Zig probably is more practical than Rust. Just like hiring an architect and structural engineers for a fence in your back yard is less practical than winging it.
https://play.rust-lang.org/?version=stable&mode=debug&editio...
I once joined a company with a large C/C++ codebase. There I worked with some genuinely expert developers - people who were undeniably smart and deeply experienced. I'm not exaggerating and mean it.
But when I enabled the compiler warnings they had disabled (which annoyed them) and ran a static analyzer over the codebase for the first time, hundreds of classic C bugs popped up: memory leaks, potential heap corruptions, out-of-bounds array accesses, you name it.
And yet, these same people pushed back when I introduced things like libfmt to replace printf, or suggested unique_ptr and vector instead of new and malloc.
I kept hearing:
"People just need to be disciplined allocations. std::unique_ptr has bad performance" "My implementation is more optimized than some std algorithm." "This printf is more readable than that libfmt stuff." etc.
The fact is, developers, especially the smart ones probably, need to be prevented from making avoidable mistakes. You're building software that processes medical data. Or steers a car. Your promise to "pay attention" and "be careful" cannot be the safeguard against catastrophe.
It's true, but devs are not infallible and that's the point of Rust. Not idiots, not infallible either.
IMO admitting that one can make mistakes even if they don't think they have is a sign of an experienced and trustworthy developer.
It's not that Rust compiler engineers think that devs are idiots; in fact you CAN have footguns in Rust, but a footgun should never be easy to use, because that's how you get security vulnerabilities.
Maybe we'll even get a tabs vs. spaces article next.
Apparently it isn't programming with a straightjacket any longer, like on Usenet discussions.
> Compile-time only: The borrow checker cannot fix logic bugs, prevent silent corruption, or make your CLI behave predictably. It only ensures memory rules are followed.
Also not really true from my experience. There have been plenty of times where the borrow checker is a MASSIVE help in multithreaded context.
The catgirls have no problems producing lots of great software in Rust. It seems more such software comes out every day, nya :3
I'd love to see the actual code here! When I imagine the Rust code for this, I don't really foresee complicated borrow-checker or reference issues. I imagine something like
    struct Note {
        filename: String,
        // maybe: contents: String
    }

    // newtype for indices into `notes`
    struct NoteIdx(usize);

    struct Notes {
        notes: Vec<Note>,
        tag_refs: HashMap<String, Vec<NoteIdx>>,
    }
You store indices instead of pointers. This is very unlikely to be slower: both a usize index and a pointer are most likely 64 bits on your hardware; there's arguably one extra memory deref, but because `notes` will probably be in cache, I'd argue it's very unlikely you'll see a real-life performance difference. It's not magic: you can still mess up the indices as you add and remove notes.
But it's safer: if you mess up the indices, you'll get an out-of-bounds error instead of writing to an unintended location in your process's memory.
Anyway, even if you don't care about safety, it's clear and easy to think about and reason about, and arguably easier to do printf debugging with: "this tag is mentioned in notes 3, 10 and 190, oh, let's print out what those ones are". That's better than reading raw pointers.
Maybe I'm missing something? This sort of task comes up all day, every day, while writing Rust code. It's just a pretty normal pattern in the language. You don't store raw references for ordinary logic like this. You do need them when writing allocators, async runtimes, etc. Famously, async needs self-referential structs to store stack-local state between calls to `.await`, and that's why the whole business with `Pin` exists.
I stole someone else's benchmark to use, and at one point I ran into seriously buggy behavior on strings (but not integers) that wasn't caught early, at the point where it happened, even with -ODebug.
Turns out the benchmark was freeing the strings before it finished performing all of the operations on the data structure. That's the sort of thing that Rust makes nearly impossible, but Zig didn't catch at all.
That being said, you've missed the point if you can't understand that safety comes at a real cost, not an abstract or 'by any means necessary' cost, but a cost as real as the safety issues.
So this. We currently spent about a month carefully instrumenting and coming to understand a subtle bug in our distributed radio network. This all runs on bare metal C (samd21 chips). Because timing, and hundreds of little processors, and radios were all involved, it was a pita to surface what the issue was. It was algorithmic. Not a memory problem. Writing this in rust or zig (instead of straight C) would not have fixed this problem.
I’d like to consider doing next generations of this product in zig or rust. I’m not opposed. I like the extra tools to make the product better. But they’re a small part of the picture in writing good software. The borrow checker may improve your code, it doesn’t guarantee successful software.
I agree the borrow checker can be a pain though, I wish there were something like Rust with a great GC. Go has loads of other bad design decisions (err != nil, etc.) and Cargo is fantastic.
(If you go no GC "because it's fun" then there's no need for the post in the first place --- just use what's fun!)
Not Go because of its anaemic type system.
"Cognitive overhead: You’re constantly thinking about lifetimes, ownership, and borrow scopes, even for simple tasks. A small CLI like my notes tool suddenly feels like juggling hot potatoes."
None of this goes away if you are using C or Zig, you just get less help from the compiler.
"Developers are not idiots"
Even intelligent people will make mistakes because they are tired or distracted. Not being an idiot is recognising your own fallibility and trying to guard against it.
What I will say, which the post fails to touch on, is: the Rust compiler's ability to reason about the subset of programs that are safe is currently not good enough; it too often rejects perfectly good programs. A good example of this is the inability to express that the following is actually fine:
    struct Foo {
        bar: String,
        baz: String,
    }

    impl Foo {
        fn barify(&mut self) -> &mut String {
            self.bar.push_str("!");
            &mut self.bar
        }

        fn bazify(&self) -> &str {
            &self.baz
        }
    }

    fn main() {
        let mut foo = Foo {
            bar: "hello".to_owned(),
            baz: "world".to_owned(),
        };
        let s = foo.barify();
        let a = foo.bazify();
        s.push_str("!!");
    }
which leads to awkward constructs like:

    fn barify(bar: &mut String) -> &mut String {
        bar.push_str("!");
        bar
    }

    // in main
    let s = barify(&mut foo.bar);
Especially considering the author's https://github.com/dayvster?tab=repositories
Weird that they don’t consider other options, in particular languages with reference counting or garbage collection. Those will not solve all ownership issues, but for immutable objects, they typically do. For short-running CLI tools, garbage collecting languages may even be faster than ones with manual memory management because they may be able to postpone all memory freeing until the program exits.