The words of every C programmer who created a CVE.
Also, https://github.com/ghostty-org/ghostty/issues?q=segfault
All jokes aside, it doesn’t actually take much discipline to write a small utility that stays memory safe. If you keep allocations simple, check your returns, and clean up properly, you can avoid most pitfalls. The real challenge shows up when the code grows, when inputs are hostile, or when the software has to run for years under every possible edge case. That’s where “just be careful” stops working, and why tools, fuzzing, and safer languages exist.
- Every C programmer I've talked to
No it's not; if it were that easy, C wouldn't have this many memory-related issues...
avoiding all memory management mistakes is not easy, and the chance of disaster grows exponentially as the codebase gets bigger
C and Zig aren't the same. I would wager that syntax differences between languages can help you see things in one language that are much harder to see in another. I'm not saying that Zig or C are good or bad for this, or that one is better than the other in terms of the ease of seeing memory problems with your eyes, I'm just saying that I would bet that there's some syntax that could be employed which make memory usage much more clear to the developer, instead of requiring that the developer keep track of these things in their mind.
Even a requirement to manually annotate each function, so that some metaprogram running at compile time can check that nothing is out of place, could help detect memory leaks, I would think. Or something; that's just an idea. There's a whole world of metaprogramming possibilities here that Zig allows and C simply doesn't. I think there's a lot of room for tooling like this to detect problems without forcing you to contort yourself into strange shapes simply to make the compiler happy.
Probably both. They're words of hubris.
C and Zig give the appearance of practicality because they allow you to take shortcuts under the assumption that you know what you're doing, whereas Rust does not; it forces you to confront the edge cases in terms of ownership and provenance and lifetime and even some aspects of concurrency right away, and won't compile until you've handled them all.
And it's VERY frustrating when you're first starting because it can feel so needlessly bureaucratic.
But then after awhile it clicks: Ownership is HARD. Lifetimes are HARD. And suddenly when going back to C and friends, you find yourself thinking about these things at the design phase rather than at the debugging phase - and write better, safer code because of it.
And then when you go back to Rust again, you breathe a sigh of relief because you know that these insidious things are impossible to screw up.
On both your average days and your bad days.
Over the 40 to 50 years that your career lasts.
I guess those kind of developers exist, but I know that I'm not one of them.
I am not a computer scientist (I have no degree in CS) but it sure seems like it would be possible to determine statically if a reference could be misused in code as written without requiring that you be the Rust Borrow Checker, if the language was designed with those kinds of things from the beginning.
The question is, then, what price in language complexity are you willing to pay to completely avoid the 8th most dangerous cause of vulnerabilities as opposed to reducing them but not eliminating them? Zig makes it easier to find UAF than in C, and not only that, but the danger of UAF exploitability can be reduced even further in the general case rather easily (https://www.cl.cam.ac.uk/~tmj32/papers/docs/ainsworth20-sp.p...). So it is certainly true that memory unsafety is a cause of dangerous vulnerabilities, but it is the spatial unsafety that's the dominant factor here, and Zig eliminates that. So if you believe (rightly, IMO) that a language should make sure to reduce common causes of dangerous vulnerabilities (as long as the price is right), then Zig does exactly that!
I don't think it's unreasonable to find the cost of Rust justified to eliminate the 8th most dangerous cause of vulnerabilities, but I think it's also not unreasonable to prefer not to pay it.
[1]: https://cwe.mitre.org/top25/archive/2024/2024_cwe_top25.html
Second, I don't care if my bank card details leak because of CSRF or because of a bug in Chromium. Now, to be fair, the list of dangerous vulnerabilities weighs things by number of incidents and not by number of users affected, and it is certainly true that more people use Chrome than those who use a particular website vulnerable to CSRF. But things aren't so simple, there, too. For example, I work on the JVM, which is largely written in C++, and I can guarantee that many more people are affected by non-memory-safety vulnerabilities in Java programs than by memory-safety vulnerabilities in the JVM.
Anyway, the point is that the overall danger and incidence of vulnerabilities - and therefore the justified cost in addressing all the different factors involved - is much more complicated than "memory unsafety bad". Yes, it's bad, but different kinds of memory unsafety are bad to different degrees, and the harm can be controlled separately from the cause.
Now, I think it's obvious that even Rust fans understand there's a complex cost/benefit game here, because most software today is already written in memory-safe languages, and the very reason someone would want to use a language like Rust in the first place is because they recognise that sometimes the cost of other memory-safe languages isn't worth it, despite the importance of memory safety. If both spatial and temporal safety were always justified at any reasonable cost (that is happily paid by most software already), then there would be no reason for Rust to exist. Once you recognise that, you have to also recognise that what Rust offers must be subject to the same cost/benefit analysis that is used to justify it in the first place. And it shouldn't be surprising that the outcome would be similar: sometimes the cost may be justified, sometimes it may not be.
Sure, but just by virtue of what these languages are used for, almost all CSRF vulnerabilities are not in code written in C, C++, Rust, or Zig. So if I’m targeting that space, why would I care that some Django app or whatever has a CSRF when analyzing what vulnerabilities are important to prevent for my potential Zig project?
You’re right that overall danger and incidence of vulnerabilities matter - but they matter for the actual use-case you want to use the language for. The Linux kernel for example has exploitable TOCTOU vulnerabilities at a much higher rate than most software - why would they care that TOCTOU vulnerabilities are rare in software overall when deciding what complexity to accept to reduce them?
The rate of vulnerabilities obviously can't be zero, but it also doesn't need to be. It needs to be low enough for the existing coping processes to work well, and those processes need to be applied anyway. So really the question is always about cost: what's the cheapest way for me to get to a desired vulnerability rate?
Which brings me to why I may prefer a low-level language that doesn't prevent UAF: because a language that does prevent UAF has a cost that is not worth it for me, either because UAF vulnerabilities are not a major risk for my application or because I have cheaper ways to prevent them (without necessarily eliminating the possibility of UAF itself), such as with one of the modern pointer-tagging techniques.
To your point about V8 and CPython: that calculus makes sense if I’m Microsoft and I could spend time/money on memory safety in CPython or on making CSRF in whatever Python library I use harder. My understanding is that the proportions of the budget for different areas of vulnerability research at any tech giant would in fact vindicate this logic.
However, if I’m on the V8 team or a CPython contributor and I’m trying to reduce vulnerabilities, I don’t have any levers to pull for CSRF or SQL injection without just instead working on a totally different project that happens to be built on the relevant language. If my day job is to reduce vulnerabilities in V8 itself, those would be totally out of scope and everybody would look at me like I’m crazy if I brought it up in a meeting.
Similarly, if I’m choosing a language to (re)write my software in and Zig is on the table, I am probably not super worried about CSRF and SQL injection - most likely I’m not writing an API accessed by a browser or interfacing with a SQL database at all! Also I have faith that almost all developers who know what Zig is in the first place would not write code with a SQL injection vulnerability in any language. That those are still on the top ten list is a condemnation of our entire species, in my book.
Maybe (and I'll return to that later), but even if the job were to specifically reduce vulnerabilities in V8, it may not be the case that focusing on UAF is the best way to go, and even if it were, it doesn't mean that eliminating UAF altogether is the best way to reduce UAF vulnerabilities. More generally, memory safety => fewer vulnerabilities doesn't mean fewer vulnerabilities => memory safety.
When some problem is a huge cause of exploitable vulnerabilities and eliminating it is cheap - as in the case of spatial memory safety - it's pretty easy to argue that eliminating it is sensible. But when it's not as big a cause, when the exploits could be prevented in other ways, and when the cost of eliminating the problem at the source is high, it's not so clear cut that that's the best way to go.
The costs involved could actually increase vulnerabilities overall. A more complex language could have negative effects on correctness (and so on security) in some pretty obvious ways: longer build times could mean less testing; less obvious code could mean more difficult reviews.
But I would say that there's even a problem with your premise about "the job". The more common vulnerabilities are in JS, the less value there is in reducing them in V8, as the relative benefit to your users will be smaller. If JS vulnerabilities are relatively common, there could, perhaps, be more value to V8 users in improving V8's performance than in reducing its vulnerabilities.
BTW, this scenario isn't so hypothetical for me, as I work on the Java platform, and I very much prefer spending my time on trying to reduce injection vulnerabilities in Java than on chasing down memory-safety-related vulnerabilities in HotSpot (because there's more security value to our users in the former than in the latter).
I think Zig is interesting from a programming-language design point of view, but I also think it's interesting from a product design point of view in that it isn't so laser-focused on one thing. It offers spatial memory safety cheaply, which is good for security, but it also offers a much simpler language than C++ (while being just as expressive) and fast build times, which could improve productivity [1], as well as excellent cross-building. So it has something for everyone (well, at least people who may care about different things).
[1]: These could also have a positive effect on correctness, which I hinted at before, but I'm trying to be careful about making positive claims on that front because if there's anything I've learnt in the field of software correctness, it's that things are very complicated, and it's hard to know how to best achieve correctness. Even the biggest names in the field have made some big, wrong predictions.
That's a good example and I agree with you there. I think the difference with V8 though is twofold:
1. Nobody runs fully untrusted code on HotSpot today and expects it to stop anybody from doing anything. For browser JavaScript engines, of course the expectation is that the engine (and the browser built on it) are highly resistant to software sandbox escapes. A HotSpot RCE that requires a code construction nobody would actually write is usually unexploitable - if you can control the code the JVM runs, you already own the process. A JavaScript sandbox escape is in most cases a valuable part of an exploit chain for the browser.
2. Even with Google's leverage on the JS and web standardization processes, they have very limited ability to ship user-visible security features and get them adopted. Trusted Types, which could take a big chunk out of very common XSS vulnerabilities and wasn't really controversial, was implemented in Safari 5 years after Chrome shipped it. Firefox still doesn't support it. Let's be super optimistic and say that after another 5 years it'll be as common as CSP is today - that's ten years to provide a broad security benefit.
These are of course special aspects of V8's security environment, but having a mountain of memory safe code you can tweak on top of your unsafe code like the JVM has is also unusual. The main reason I'd be unlikely to reach for Zig + temporal pointer auth on something I work on is that I don't write a lot of programs that can't be done in a normie GC-based memory safe programming language, but for which having to debug UAF and data race bugs (even if they crash cleanly!) is a suitable tradeoff for the Rust -> Zig drop in language complexity.
As to your last point, I certainly accept that that could be the case for some, but the opposite is also likely: if UAF is not an outsized cause of problems, then a simpler language that, hopefully, can make catching/debugging all bugs easier could be more attractive than one that could be tilting too much in favour of eliminating UAF possibly at the expense of other problems. My point being that it seems like there are fine reasons to prefer a Rust-like approach over a Zig-like approach and vice-versa in different situations, but we simply don't yet know enough to tell which one - if any - is universally or even more commonly superior to the other.
Languages like Modula-3 or Oberon would have taken over the world of systems programming.
Unfortunately there are too many non-believers for systems programming languages with automatic resource management to take off as they should.
Despite everything, kudos to Apple for pushing Swift no matter what, as it seems to be the only way for adoption.
Or those languages had other (possibly unrelated) problems that made them less attractive.
I think that in a high-economic-value, competitive activity such as software, it is tenuous to claim that something delivers a significant positive gain and at the same time that that gain is discarded for irrational reasons. I think at least one of these is likely to be false, i.e. either the gain wasn't so substantial or there were other, rational reasons to reject it.
Projects like Midori, Swift, Android, Maxine VM, GraalVM only happen when someone high enough is willing to keep them going until they take off.
When they fail, usually it is because management backing fell through, not because there wasn't a way to sort out whatever was the cause.
Even Java had enough backing from Sun, IBM, Oracle and BEA during its early uncertainty days outside being a language for applets, until it actually took off on server and mobile phones.
If Valhalla never makes it, will it be because Oracle gave up funding the team after all these years, or because it was impossible and a waste of money?
Even for teams further toward the right of the bell curve, historical contingencies have a greater impact than they do in more grounded engineering fields. There are specialties of course, but nobody worries that when they hire a mechanical engineer someone needs to make sure the engineer can make designs with a particular brand of hex bolt because the last 5 years of the company’s designs all use that brand.
In fact, when we look at the long list of languages that have become super-popular and even moderately popular - including languages that have grown only to later shrink rather quickly - say Fortran, COBOL, C, C++, JavaScript, Java, PHP, Python, Ruby, C#, Kotlin, Go, TypeScript, we see languages that are either more specific to some domains or more general, some reducing switching costs (TS, Kotlin) some not, but we do see that the adoption rate is proportional to the language's peak market share, and once the appropriate niche is there (think of a possibly new/changed environment in biological evolution) we see very fast adoption, as we'd expect to see from a significant fitness increase.
So given that many languages displace incumbents or find their own niches, and that the successful ones do it quickly, I think that the most reasonable assumption to start with when a language isn't displaying that is that its benefits just aren't large enough in the current environment(s). If the pace of your language's adoption is slow, then: 1. the first culprit to look for is the product-market fit of the language, and 2. it's a bad sign for the language's future prospects.
I guess it's possible for something with a real but low advantage to spread slowly and reach a large market share eventually, but I don't think it's ever happened in programming languages, and there's the obvious risk of something else with a bigger advantage getting your market in the meantime.
It's just pig-headedness by Apple, nothing more.
Instead Swift was designed around the use-cases the team was familiar with, which would be C++ and compilers. Let's just say that the impedance between that and rapid UI development was pretty big. From C++ they also got the tolerance for glacial compile times (10-50 times as slow as compiling the corresponding Objective-C code).
In addition to that they did big experiments, such as value semantics backed by copy-on-write, which they thought was cool, but is – again – worthless in terms of the common problem domains.
Since then, the language's just been adding features at a speed even D can't match.
However, one thing the language REALLY GETS RIGHT, and which is very under-appreciated, is that they duplicated Objective-C's stability across API versions. ObjC is best in class when it comes to the ability to do forward and backwards compatibility, and Swift has some AWESOME work to make that work despite the difficulties.
Rust has a strict model that effectively prevents certain kinds of logic errors/bugs. So that's good (if you don't mind the price). But it doesn't address all kinds of other logic errors/bugs. It's like closing one door to the barn, but there are six more still wide open.
I see rust as an incremental improvement over C, which comes at quite a hefty price. Something like zig is also an incremental improvement over C, which also comes at a price, but it looks like a significantly smaller one.
(Anyway, I'm not sure zig is even the right comp for rust. There are various languages that provide memory safety, if that's your priority, which also generally allow dropping into "unsafe" -- typically C -- where performance is needed.)
Could you point at some language features that exist in other languages that Rust doesn't have that help with logic errors? Sum types + exhaustive pattern matching is one of the features that Rust does have that helps a lot to address logic errors. Immutability by default, syntactic salt on using globals, trait bounds, and explicit cloning of `Arc`s are things that also help address or highlight logic bugs. There are some high level bugs that the language doesn't protect you from, but I know of no language that would. Things like path traversal bugs, where passing in `../../secret` lets an attacker access file contents that weren't intended by the developer.
The only feature that immediately comes to mind that Rust doesn't have that could help with correctness is constraining existing types, like specifying that a u8 value is only valid between 1 and 100. People are working on that feature under the name "pattern in types".
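For illustration, here's roughly how that gets approximated today: a newtype whose constructor enforces the range at runtime (the `Percent` name is just made up for this example), with the "pattern in types" work aiming to let the type itself carry the constraint:

#[derive(Debug, Clone, Copy)]
struct Percent(u8); // invariant: 1..=100, enforced by the constructor below

impl Percent {
    fn new(value: u8) -> Option<Percent> {
        (1..=100).contains(&value).then_some(Percent(value))
    }
    fn get(self) -> u8 {
        self.0
    }
}

fn main() {
    let p = Percent::new(50).expect("in range");
    println!("p = {}", p.get());
    assert!(Percent::new(150).is_none()); // out of range, rejected at runtime
}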
There's a complexity cost to adding features, and while each one may make sense on its own, in aggregate they may collectively burden the developer with too much complexity.
Go tries to hide the issues until data loss happens: it has had trouble dealing with non-UTF8 filenames, strings are UTF8 by convention but not guaranteed to be, and some functions expect UTF8 while others can work with any collection of bytes.
https://blog.habets.se/2025/07/Go-is-still-not-good.html
Or the Go time library which is a monster of special cases after they realized they needed monotonic clocks [1] but had to squeeze it into the existing API.
Rust is on the other end of the spectrum. Explicit over implicit, but you can implicitly assume stuff works by panicking on these unexpected errors. Making the problem easy to fix if you stumble upon it after years of added cruft and changing requirements.
There is a significant crowd of people who don't necessarily love borrow checker, but traits/proper generic types/enums win them over Go/Python. But yes, it takes significant maturity to recognize and know how to use types properly.
Much of Zig's user base seems to be people new to systems programming. Coming from a managed code background, writing native code feels like being a powerful wizard casting fireball everywhere. After you write a few unsafe programs without anything going obviously wrong, you feel invincible. You start to think the people crowing about memory safety are doing it because they're stupid, or cowards, or both. You find it easy to allocate and deallocate when needed: "just" use defer, right? Therefore, if someone screws up, that's a personal fault. You're just better, right?
You know who used to think that way?
Doctors.
Ignaz Semmelweis famously discovered that hand-washing before childbirth decreased mortality by an order of magnitude. He died poor and locked in an asylum because doctors of the day were too proud to acknowledge the need to adopt safety measures. If a mandatory pre-surgical hand-washing step prevented complications, that implied the surgeon had a deficiency in cleanliness and diligence, right?
So they demonized Semmelweis and patients continued for decades to die needlessly. I'm sure that if those doctors had been on the internet today, they would say, as the Zig people do say, "skill issue".
It takes a lot of maturity to accept that even the most skilled practitioners of an art need safety measures.
What happens in those cases is that you drop a whole lot of disorganized dynamic and stack allocations and just handle them in a batch. So in all cases where the problem is tracking temporary objects, there's no need to track ownership and such. It's a complete non-problem.
So if you're writing code in domains where the majority of effort to do manual memory management is tracking temporary allocations, then in those cases you can't really meaningfully say that because Rust is safer than a corresponding malloc/free program in C/C++ it's also safer than the C3/Jai/Odin/Zig solution using arenas.
And I think a lot of the disagreement comes from this. Rust devs often don't think that switching the use of the allocator matters, so they argue against what's essentially a strawman built from assumed malloc/free based memory patterns that are incorrect.
ON THE OTHER HAND, there are cases where this isn't true and you need to do things like safely pass data back and forth between threads. Arenas don't help with that at all. So in those cases I think everyone would agree that Rust or Java or Go is much safer.
So the difference between domains where the former or the latter dominates needs to be recognised, or there can't possibly be any mutual understanding.
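To make the arena idea concrete, here's a toy Rust sketch of the pattern (not how Zig/Odin/Jai arenas are actually implemented, just the shape of it): all the per-batch temporaries land in one container, and a single reset releases the whole batch instead of tracking each allocation's ownership individually:

// Toy arena: temporaries are pushed in, and reset() frees them all at once.
struct Arena {
    strings: Vec<String>,
}

impl Arena {
    fn new() -> Arena {
        Arena { strings: Vec::new() }
    }
    fn alloc(&mut self, s: String) -> usize {
        self.strings.push(s);
        self.strings.len() - 1
    }
    fn reset(&mut self) {
        self.strings.clear(); // one "free" per batch, not one per allocation
    }
}

fn main() {
    let mut arena = Arena::new();
    for frame in 0..3 {
        for i in 0..4 {
            arena.alloc(format!("temporary {i} of frame {frame}"));
        }
        println!("frame {frame}: {} temporaries", arena.strings.len());
        arena.reset();
    }
}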
http://www.3kranger.com/HP3000/mpeix/doc3k/B3150290023.10194...
What is old is new again.
I feel like I am most interested in Nim given how easy it was to pick up and how interoperable it is with C; it has a garbage collector and you can change it, which seems great for someone like me who doesn't want to worry about manual memory management right now, but if it becomes a bottleneck later, I can at least fix it without worrying too much.
Out of all of them, from what little I know and my very superficial knowledge, Odin seems the most appealing to me. Its primary use case from what I know is game development, and I feel like that could easily pivot into native desktop application development; I was tempted to make a couple of those in Odin in the past but never found the time.
Nim I like the concept and the idea of but the python-like syntax just irks me. haha I can't seem to get into languages where indentation replaces brackets.
But the GC part of it is pretty neat, have you checked Go yet?
I haven't really looked into odin except joining their discord and asking them some questions.
it seems that aside from some normal syntax, it is sort of different from golang under the hood as compared to V-lang which is massively inspired by golang
After reading the HN post about SQLite which recommended using SQLite as an ODT alternative or something similar, which I agreed with, I thought of creating an app in Flutter similar to LocalSend, except Flutter only supports C-esque interop, and it would've been weird to take golang, pass it through C and then through Flutter or something, so I gave up...
I thought that Odin could compile to C and I could use that, but it turns out that Odin doesn't really compile to C, unlike Nim and V-lang, which do compile to C.
I think that Nim and V-lang are the best ways to write an app like that with Flutter, though, and I am now somewhat curious what you guys think would be the best way of writing highly portable apps, ideally with a dev-ex similar to golang.
I have actually thought about using something like godot for this project too and seeing if godot supports something like golang or typescript or anything really. Idk I was just messing around and having a bit of fun lol i think.
But I like Nim in the sense that in golang I sometimes feel that I can't change its GC, although I do know that for most things it wouldn't be a dealbreaker.
But still, I sometimes feel like I should have some freedom to add manual memory management later without restarting from scratch or something, y'know?
Golang is absolutely goated. This was why I also recommended V-lang; V-lang is really similar to golang except it can have manual memory management...
They themselves say on the website that, IIRC, if you know golang, you know 70% of V-lang.
I genuinely prefer golang over everything, but I still like Nim/V-lang too as fun languages, even though I feel like their ecosystems aren't that good; I know that yes, they can interop with C, but still...
We don't need yet another language with manual memory management in the 21st century, and V doesn't look like it would ever be that relevant.
V is also similar to golang in syntax, something that I definitely admire tbh.
I am more interested in Nim and V, tbh, as compared to D-lang.
In fact I was going to omit D-lang from my comment, but I know that those folks are up to something great too and I will try to look into them more; Nim definitely piques my interest as a production-ready-ish language, IMO, as compared to V-lang or even D-lang.
I think people prefer what's familiar to them, and Swift definitely looks closer to existing C++ to me, and I believe has multiple people from the C++ WG working on it now as well, supposedly after getting fed up with the lack of language progress on C++.
The most recent versions gained a lot in the way of cross-platform availability, but the lack of a native UI framework and its association with Apple seem to put off a lot of people from even trying it.
I wish it was a lot more popular outside of the Apple ecosystem.
https://docs.swift.org/swift-book/documentation/the-swift-pr...
Edits mine.
I like to keep the spacetime topologies complete.
Constant = time atom of value.
Register = time sequence of values.
Stack = time hierarchy of values.
Heap = time graph of values.
Seasoned Rust coders don’t spend time fighting the borrow checker - their code is already written in a way that just works. Once you’ve been using Rust for a while, you don’t have to “restructure” your code to please the borrow checker, because you’ve already thought about “oh, these two variables need to be mutated concurrently, so I’ll store them separately”.
The “object soup” is a particular approach that won’t work well in Rust, but it’s not a fundamentally easier approach than the alternatives, outside of familiarity.
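As a small illustration of that kind of up-front structuring (the types here are hypothetical, purely for example): when the two things that need concurrent mutation live in separate fields, the borrows are disjoint and there's nothing to fight:

struct Player { health: i32 }

struct Game {
    players: Vec<Player>,
    log: Vec<String>,
}

fn update(game: &mut Game) {
    // Two different fields borrowed mutably at the same time: fine,
    // because the compiler can see the borrows don't overlap.
    let players = &mut game.players;
    let log = &mut game.log;
    for p in players.iter_mut() {
        p.health -= 1;
        log.push(format!("player now at {}", p.health));
    }
}

fn main() {
    let mut game = Game { players: vec![Player { health: 10 }], log: Vec::new() };
    update(&mut game);
    println!("{:?}", game.log);
}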
My experience is that what makes your statement true is that _seasoned_ Rust developers just sprinkle `Arc` all over the place, thus effectively switching to automatic garbage collection. Because 1) statically checked memory management is too restrictive for most kinds of non-trivial data structures, and 2) the hoops of lifetimes you have to jump through to please the static checker whenever you start doing anything non-trivial are just above human comprehension level.
Do you tend to use a lot of Arenas?
The first is a fairly generic input -> transform -> output. This is your generic request handler for instance. You receive a payload, run some transform on that (and maybe a DB request) and then produce a response.
In this model, Arc is very fitting for some shared (im)mutable state. Like DB connections, configuration and so on.
The second pattern is something like: state + input -> transform -> new state. Eg. you're mutating your app state based on some input. This fits stuff like games, but also retained UIs, programming language interpreters and so on.
Using Arcs here muddles the ownership. The gamedev ecosystem has found a way to manage this by employing ECS, and while it can be overkill, the base DOD principles can still be very helpful.
Treat your data as what it is; data. Use indices/keys instead of pointers to represent relations. Keep it simple.
Arenas can definitely be a part of that solution.
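A tiny sketch of the "indices instead of pointers" idea (the types are made up): relations are plain data, so there's no graph of references for the borrow checker to untangle:

// Each monster refers to its target by index into the Vec, not by pointer.
struct Monster {
    target: Option<usize>,
    hp: i32,
}

fn main() {
    let mut monsters = vec![
        Monster { target: Some(1), hp: 10 },
        Monster { target: None, hp: 7 },
    ];
    // Follow the relation by indexing when needed.
    if let Some(t) = monsters[0].target {
        monsters[t].hp -= 3;
    }
    println!("{}", monsters[1].hp);
}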
Even then, while I’d agree that Arc is used in lots of places in work-stealing runtimes, I disagree that it’s used everywhere, or that you can’t really do anything else if you want to leverage all your cores with minimum effort without having to build your application specifically to deal with that.
I don't care that they have a good work-stealing event loop, I care that it's the default and their APIs all expect the work-stealing implementation and unnecessarily constrain cases where you don't use that implementation. It's frustrating and I go out of my way to avoid Tokio because of it.
Edit: the issues are in Axum, not the core Tokio API. Other libs have this problem too due to aforementioned defaults.
At $dayjob we have built a large codebase (high-throughput message broker) using the thread-per-core model with tokio (ie one worker thread per CPU, pinned to that CPU, driving a single-threaded tokio Runtime) and have not had any problems. Much of our async code is !Send or !Sync (Rc, RefCell, etc) precisely because we want it to benefit from not needing to run under the default tokio multi-threaded runtime.
We don't use many external libs for async though, which is what seems to be the source of your problems. Mostly just tokio and futures-* crates.
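For anyone curious, a rough sketch of that setup (assuming the tokio crate with the relevant features enabled; the real codebase obviously differs): a current_thread runtime driving a LocalSet, so tasks can freely hold !Send types like Rc:

use std::rc::Rc;
use tokio::task::LocalSet;

fn main() {
    // One single-threaded runtime (in a thread-per-core design, one per pinned thread).
    let rt = tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()
        .unwrap();
    let local = LocalSet::new();
    local.block_on(&rt, async {
        let shared = Rc::new(42); // !Send: couldn't be moved onto a work-stealing pool
        let shared2 = shared.clone();
        tokio::task::spawn_local(async move {
            println!("value: {}", shared2);
        })
        .await
        .unwrap();
    });
}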
But in this case, the data hiding behind the Arc is almost never mutable. It's typically some shared, read-only information that needs to live until all the concurrent workers are done using it. So this is very easy to reason about: Stick a single chunk of read-only data behind the reference count, and let it get reclaimed when the final worker disappears.
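For example (a minimal sketch with a made-up Config type): every worker holds a clone of the Arc, only reads through it, and the data is reclaimed when the last handle is dropped:

use std::sync::Arc;
use std::thread;

struct Config { retries: u32 }

fn main() {
    // Read-only data shared by all workers; freed when the final clone is dropped.
    let cfg = Arc::new(Config { retries: 3 });
    let handles: Vec<_> = (0..4)
        .map(|i| {
            let cfg = Arc::clone(&cfg);
            thread::spawn(move || println!("worker {i} sees retries = {}", cfg.retries))
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
}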
There are some cases where someone new to Rust will try to use Arc as a solution to every problem, but I haven't seen much code like this outside of reviewing very junior Rust developers' code.
In some application architectures Arc is a common feature and it's fine. Saying that seasoned Rust developers rarely use Arc isn't true, because some types of code require shared references with Arc. There is nothing wrong with Arc when used properly.
I think this is less confusing to people who came from modern C++ and understand how modern C++ features like shared_ptr work and when to use them. For people coming from garbage collected languages it's more tempting to reach for the Arc types to try to write code as if it was garbage collected.
That doesn’t mean there aren’t other legitimate use cases, but “all the time” is not representative of the code I read or write, personally.
No, this couldn't be further from the truth.
If you use Rust for web server backend code then yes, you see `Arc`s everywhere. Otherwise their use is pretty rare, even in large projects. Rust is somewhat unique in that regard, because most Rust code that is written is not really web backend code.
To some extent this is unavoidable. Non-'static lifetimes correspond (roughly) to a location on the program stack. Since a Future that suspends can't reasonably stay on the stack it can't have a lifetime other than 'static. Once it has to be 'static, it can't borrow anything (that's not itself 'static), so you either have to Copy your data or Rc/Arc it. This, btw, is why even tokio's spawn_local has a 'static bound on the Future.
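A small example of that constraint in action (assuming the tokio crate with its default features): the spawned future can't borrow `name` from the enclosing stack frame, so you move an Arc clone into it instead:

use std::sync::Arc;

#[tokio::main]
async fn main() {
    let name = Arc::new(String::from("job-1"));
    // tokio::spawn requires the future to be 'static (and Send), so it cannot
    // borrow from this stack frame; an owned Arc clone is moved in instead.
    let n = Arc::clone(&name);
    let handle = tokio::spawn(async move {
        println!("running {}", n);
    });
    handle.await.unwrap();
    println!("still usable here: {}", name);
}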
It would be nice if it were ergonomic for library authors to push the decision about whether to use Rc<RefCell<T>> or Arc<Mutex<T>> (which are non-threadsafe and threadsafe variants of the same underlying concept) to the library consumer.
smol's spawn also requires the Future to be 'static (https://docs.rs/smol/latest/smol/fn.spawn.html), while tokio's local block_on also does not require 'static or Send + Sync (https://docs.rs/tokio/latest/tokio/task/struct.LocalSet.html...).
- 151 instances of "Arc<" in Servo: https://github.com/search?q=repo%3Aservo%2Fservo+Arc%3C&type...
- 5 instances of "Arc<" in AWS SDK for Rust https://github.com/search?q=repo%3Arusoto%2Frusoto%20Arc%3C&...
- 0 instances for "Arc<" in LOC https://github.com/search?q=repo%3Acgag%2Floc%20Arc%3C&type=...
- 6 instances of "Rc<" in AWS SDK for Rust: https://github.com/search?q=repo%3Arusoto%2Frusoto+Rc%3C&typ...
- 0 instances for "Rc<" in LOC: https://github.com/search?q=repo%3Acgag%2Floc+Rc%3C&type=cod...
(Disclaimer: I don't know what these repos are except Servo).
Plus the html processing needs to be Arc as well, so that tracks.
Arc isn't really garbage collection. It's a reference-counted smart pointer, like C++'s shared_ptr.
If you drop an Arc and it's the last reference to the underlying object, it gets dropped deterministically.
Garbage collection generally refers to more complex systems that periodically identify and free unused objects in a less deterministic manner.
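A quick toy example of the deterministic part: the Drop impl runs at the exact moment the last Arc goes away, not at some later collection pass:

use std::sync::Arc;

struct Noisy;

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("dropped right now, not at some future GC pause");
    }
}

fn main() {
    let a = Arc::new(Noisy);
    let b = Arc::clone(&a);
    println!("refs: {}", Arc::strong_count(&a)); // 2
    drop(a);
    println!("still alive, b holds the last reference");
    drop(b); // count hits zero here: Drop runs immediately
    println!("after last drop");
}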
In C++ land this is very often called garbage collection too.
Large scale teams always get pointer ownership wrong.
Project Zero has enough examples.
No, this is a subset of garbage collection called tracing garbage collection. "Garbage collection" absolutely includes refcounting.
Chapter 5, https://gchandbook.org/contents.html
Other CS quality references can be provided with similar table of contents.
If you need a reference-counted garbage collector for more than a tiny minority of your code, then Rust was probably the wrong choice of language - use something that has a better (mark-and-sweep) garbage collector. Rust is good for places where you can almost always find a single owner, and you can use reference counting for the rare exception.
However, the difference between Arc and a Garbage Collector is that the Arc does the cleanup at a deterministic point (when the last Arc is dropped) whereas a Garbage Collector is a separate thing that comes along and collects garbage later.
> If you need a reference-counted garbage collector for more than a tiny minority of your code
The purpose of Arc isn't to have a garbage collector. It's to provide shared ownership.
There is no reason to avoid Rust if you have an architecture that requires shared ownership of something. These reductionist generalizations are not accurate.
I think a lot of new Rust developers are taught that Arc shouldn't be abused, but they internalize it as "Arc is bad and must be avoided", which isn't true.
That is the most common implementation, but that is still just an implementation detail. Garbage collectors can run deterministically which is what reference counting does.
> There is no reason to avoid Rust if you have an architecture that requires shared ownership of something.
Rust can be used for anything. However, its goals are still centered on being good for systems programming. Systems programming implies some compromises which make Rust not as good a choice for other types of programming. Nothing wrong with using it anyway (and often you have a mix, and the overhead of multiple languages makes it worth using one even when another would be better for a small part of the problem).
> I think a lot of new Rust developers are taught that Arc shouldn't be abused, but they internalize it as "Arc is bad and must be avoided", which isn't true.
Arc has a place. However, in most places where you use it, a little design work could eliminate the need. If you don't understand what I'm talking about, then "Arc is bad and must be avoided" is better than putting Arc everywhere, even though that would work and is less effort in the short run (and for non-systems programming it might even be a good design).
As a rough approximation, if you're very heavy-handed with Arc then you probably shouldn't be using Rust for that project.
[0] The term "leak" can be a bit hard to pin down, but here I mean something like space which is allocated and which an ordinary developer would prefer to not have allocated.
However, I disagree with generalizations that you can judge the quality of code based on whether or not it uses a lot of Arc. You need to understand the architecture and what's being accomplished.
That wasn't really my point, but I disagree with your disagreement anyway ;) Yes, you don't want to over-generalize, but Arc has a lot of downsides, doesn't have a lot of upsides, and can usually be relatively easily avoided in favour of something with a better set of tradeoffs. Heavy use isn't bad in its own right, but it's a strong signal suggestive of code needing some love and attention.
My point though was: if you are going to heavily use Arc, Rust isn't the most ergonomic language for the task; the value proposition of Rust is much more apparent with other memory management techniques, and the gap compared to more ergonomic choices narrows considerably if you use Arc a lot. Maybe you have to (or want to) use Rust anyway for some reason, but it's usually a bad choice conditioned on that coding style.
Reference counting is a valid form of garbage collection. It is arguably the simplest form. https://en.wikipedia.org/wiki/Garbage_collection_(computer_s...
The other forms of GC are tracing followed by either sweeping or copying.
> If you drop an Arc and it's the last reference to the underlying object, it gets dropped deterministically.
Unless you have cycles, in which case the objects are not dropped. And then scanning for cyclic objects almost certainly takes place at a non-deterministic time, or never at all (and the memory is just leaked).
> Garbage collection generally refers to more complex systems that periodically identify and free unused objects in a less deterministic manner.
No. That's like saying "a car is a car; a vehicle is anything other than a car". No, GC encompasses reference counting, and GC can be deterministic or non-deterministic (asynchronous).
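Going back to the cycle point: a classic sketch of the problem (hypothetical Node type). If parent and child pointed at each other through Rc, neither count would ever reach zero; the usual fix is making the back-pointer a Weak:

use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Node {
    parent: RefCell<Weak<Node>>,      // Weak, so the cycle is broken
    children: RefCell<Vec<Rc<Node>>>,
}

fn main() {
    let parent = Rc::new(Node {
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(Vec::new()),
    });
    let child = Rc::new(Node {
        parent: RefCell::new(Rc::downgrade(&parent)),
        children: RefCell::new(Vec::new()),
    });
    parent.children.borrow_mut().push(Rc::clone(&child));
    // With an Rc back-pointer instead of Weak, parent and child would keep each
    // other alive forever: reference counting alone never reclaims the cycle.
    println!("parent strong count = {}", Rc::strong_count(&parent)); // 1, the child only holds a Weak
    println!("child can still reach parent: {}", child.parent.borrow().upgrade().is_some());
    println!("parent has {} children", parent.children.borrow().len());
}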
I do find myself running into lifetime and borrow-checker issues much less these days when writing larger programs in rust. And while your comment is a bit cheeky, I think it gets at something real.
One of the implicit design mentalities that develops once you write rust for a while is a good understanding of where to apply the `UnsafeCell`-related types, which includes `Arc` but also `Rc` and `RefCell` and `Cell`. These all relate to inner mutability, and there are many situations where plopping in the right one of these effectively resolves some design requirement.
The other idiomatic thing that happens is that you implicitly begin structuring your abstract data layouts in terms of thunks of raw structured data and connections between them. This usually involves an indirection - i.e. you index into an array of things instead of holding a pointer to the thing.
Lastly, where lifetimes do get involved, you tend to have a prior idea of what thing they annotate. The example in the article is a good case study of that. The author is parsing a `.notes` file and building some index of it. The text of the `.notes` file is the obvious lifetime anchor here.
You would write your indexing logic with one lifetime 'src: `fn build_index<'src>(src: &'src str)`
Internally to the indexing code, references to 'src-annotated things can generally pass around freely as their lifetime converges after it.
Externally to the indexing code, you'd build a string of the notes text and pass a reference to it to the `build_index` function.
For simple CLI programs, you tend not to really need anything more than this.
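Sketched out, it looks something like this (the Index struct and the heading rule are invented for the example; the article's actual indexing logic differs): everything in the index borrows from the source text via 'src, and the caller keeps that text alive:

// Every entry borrows directly from the source text.
struct Index<'src> {
    headings: Vec<&'src str>,
}

fn build_index<'src>(src: &'src str) -> Index<'src> {
    let headings = src
        .lines()
        .filter(|line| line.starts_with('#'))
        .collect();
    Index { headings }
}

fn main() {
    // The caller owns the text; the index just borrows from it.
    let notes = std::fs::read_to_string("notes.md")
        .unwrap_or_else(|_| String::from("# hello\nbody"));
    let index = build_index(&notes);
    println!("{} headings", index.headings.len());
}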
It gets more hairy if you're looking at constructing complex object graphs with complex intermediate state, partial construction of sub-states, etc. Keeping track of state that's valid at some level, while temporarily broken at another level, is where it gets really annoying with multiple nested lifetimes and careful annotation required.
But it was definitely a bit of a hair-pulling journey to get to my state of quasi-peace with Rust's borrow checker.
How else would you safely share data in multi-threaded code? Which is the only reason to use Atomic reference counts.
No true scotsman would ever be confused by the borrow checker.
i've seen plenty of rust projects open source and otherwise that utilise Arc heavily or use clone and/or copy all over the place.
> No true scotsman would ever be confused by the borrow checker.
I'd take that No true scotsman over the "Real C programmers write code without CVE" for $5000.
Also you are strawmanning the argument. GP said, "As a seasoned veteran of Rust you learn to think like the borrow checker." vs "Real Rust programmers were born with knowledge of the borrow checker".
They are clearly just saying as you become more proficient with X, Y is less of a problem. Not that if the borrow checker is blocking you that you aren't a real Rust programmer.
Let's say you're trying to get into running. You express that you can't breathe well during the exercise and it's a miserable experience. One of your friends tells you that as an experienced runner they don't encounter that in the same way anymore, and running is thus more enjoyable. Do you start screeching No True Scotsman!! at them? I think not.
Would you also say the same for a C++ project that uses shared_ptrs everywhere?
The clone quip doesn't work super well when comparing to C++ since that language "clones" data implicitly all the time
My beef is sometimes with the ways traits are implemented, or how AWS implemented Errors for their library, which is just pure madness.
Here is one piece of the problem:
while let Some(page) = object_stream.next().await {
match page {
// ListObjectsV2Output
Ok(p) => {
if let Some(contents) = p.contents {
all_objects.extend(contents);
}
}
// SdkError<ListObjectsV2Error, Response>
Err(err) => {
let raw_response = &err.raw_response();
let service_error = &err.as_service_error();
error!("ListObjectsV2Error: {:?} {:?}", &service_error, &raw_response);
return Err(S3Error::Error(format!("ListObjectsV2Error: {:?}", err)));
}
}
}
I would have written it this way:

while let Some(page) = object_stream.next().await {
let p: ListObjectsV2Output = page.map_err(|err| {
// SdkError<ListObjectsV2Error, Response>
let raw_response = err.raw_response();
let service_error = err.as_service_error();
error!("ListObjectsV2Error: {service_error:?} {raw_response:?}");
S3Error::Error(format!("ListObjectsV2Error: {err:?}"))
})?;
if let Some(contents) = p.contents {
all_objects.extend(contents);
}
}
although if your crate defines `S3Error`, then I would prefer to write:

while let Some(page) = object_stream.next().await {
if let Some(contents) = page?.contents {
all_objects.extend(contents);
}
}
by implementing `From`:

impl From<SdkError<ListObjectsV2Error, Response>> for S3Error {
fn from(err: SdkError<ListObjectsV2Error, Response>) -> S3Error {
let raw_response = err.raw_response();
let service_error = err.as_service_error();
error!("ListObjectsV2Error: {service_error:?} {raw_response:?}");
S3Error::Error(format!("ListObjectsV2Error: {err:?}"))
}
}

My problem is that I should have something like
(http_status, reason) where http_status is a String or u16, and reason is an enum with a SomeError(String) structure. So essentially having a flat meaningful structure instead of what we currently have. I do not have any mental model of the error structure of the AWS libs, and don't even know where to start to create that mental model. As a result I just try to turn everything into a string and return it altogether, hoping that the real issue is there somewhere in that structure.
I think the AWS library error handling is way too complex for what it does, and one way we could improve that is if Rust had a great example of a binary (bin) project that has, let's say, 2 layers of functions, showing how to organize your errors effectively.
Now do this for a lib project. Without this you end up with this hot mess. At least this is how I see it. If you have a suggestion for how I should return errors from a util.rs that has s3_list_objects() to my http handler, then I would love to hear what you have to say.
Thanks for your suggestions anyway! I am going to re-implement my error handling and see if it gives us more clarity with impl.
https://momori.dev/posts/rust-error-handling-thiserror-anyho...
burntsushi has a good writeup about their difference in usecase here:
https://www.reddit.com/r/rust/comments/1cnhy7d/whats_the_wis...
I really hope it’s an Rc/Arc that you’re cloning. Just deep cloning the value to get ownership is dangerous when you’re doing it blindly.
I have some issues with Zig's design, especially around the lack of explicit interface/trait, but I agree with the post that it is a more practical language, just because of how much simpler its adoption is.
Yes, they know when to give up.
I like the fact that "fighting the borrow checker" is an idea from the period when the borrowck only understood purely lexical lifetimes. So you had to fight to explain why the thing you wrote, which is obviously correct, is in fact correct.
That was already ancient history by the time I learned Rust in 2021. But this idea that Rust means "fighting the borrow checker" took off anyway, even though the actual thing it's about was solved.
Now for many people it really is a significant adjustment to learn Rust if your background is exclusively say, Python, or C, or Javascript. For me it came very naturally and most people will not have that experience. But even if you're a C programmer who has never had most of this [gestures expansively] before you likely are not often "fighting the borrow checker". That diagnostic saying you can't make a pointer via a spurious mutable reference? Not the borrow checker. The warning about failing to use the result of a function? Not the borrow checker.
Now, "In Rust I had to read all the diagnostics to make my software compile" does sound less heroic than "battling with the borrow checker" but if that's really the situation maybe we need to come up with a braver way to express this.
When I was learning rust (coming from python/java) it certainly felt like a battle because I "knew" the code was logically sound (at least in other languages) but it felt like I had to do all sorts of magic tricks to get it to compile. Since then I've adapted and understand better _why_ the compiler has those rules, but in the beginning it definitely felt like a fight and that the code _should_ work.
fn main() {
    let mut x = 5;
    let y = &x;     // shared borrow of x
    let z = &mut x; // mutable borrow of x while y is still in scope (but never used again)
}
The original borrowck goes: oh no, y is a reference to x and then z is a mutable reference to x, those can't both exist at the same time, but the scope of y hasn't ended, so I give up, here's an error message. You needed to adjust your software so that it could see why what you wrote is fine. That's "fighting the borrow checker". But today's borrowck just goes: duh, the reference y goes away right before the variable z is created, and everything is cool.
These are called "Non-lexical lifetimes" because the lifetime is no longer strictly tied to a lexical scope - the curly braces in the program - but can have any necessary extent to make things correct.
Further improving the ability of the borrowck to see that what you're doing is fine is an ongoing piece of work for Rust and always will be†, but NLL was the lowest hanging fruit, most of the software I write would need tweaks to account for a strict lexical lifetime and it'd be very annoying when I know I am correct.
† Rice's theorem tells us we can either have a compiler where sometimes illegal borrows are allowed or a compiler where sometimes borrows that should be legal are forbidden (or both, which seems useless), but we cannot have one which is always right, so, Rust chooses the safe option and that means we're always going to be working to make it just a bit better.
Even though Rust can end up with some ugly/crazy code, I love it overall because I can feel pretty safe that I'm not going to create hard-to-find memory errors.
Sure, I can (and do) write code that causes my (rust) app to crash, but so far they've all been super trivial errors to debug and fix.
I haven't tried Zig yet though. Does it give me all the same compile time memory usage guarantees?
"This chair is guaranteed not to collapse out from under you. It might be a little less comfortable and a little heavier, but most athletic people get used to that and don't even notice!"
Let's quote the article:
> I’d say as it currently stands Rust has poor developer ergonomics but produces memory safe software, whereas Zig has good developer ergonomics and allows me to produce memory safe software with a bit of discipline.
The Rust community should be upfront about this tradeoff - it's a universal tradeoff, that is: Safety is less ergonomic. It's true when you ride a skateboard with a helmet on, it's true when you program, it's true for sex.
Instead you see a lot of arguments with anecdotal or indeterminate language. "Most people [that I talk to] don't seem to have much trouble unless they're less experienced."
It's an amazing piece of rhetoric. In one sentence the ergonomic argument has been dismissed by denying subjectivity exists or matters and then implying that those who disagree are stupid.
"a bit of discipline" is doing a lot of work here.
"Just don't write (memory) bugs!" hasn't produced (memory) safe C, and they've been trying for 50yrs. The best practices have been to bolt on analyzers and strict "best practice" standards to enforce what should be part of the language.
You're either writing in Rust, or you're writing in something else + using extra tools to try and achieve the same result as Rust.
[1]: https://github.com/oven-sh/bun/issues?q=is%3Aissue%20state%3...
The borrow checker is something new Rust devs struggle with for a couple months, as they learn, then the rules are internalized and the code gets written just like any other language. I think new devs only struggle with the borrow checker because everyone has internalized the C memory model for the last 50 years. In another 50, everyone will be unlearning Rust for whatever replaces it.
Why? Interested to know.
Just for background, I have not tried out either Zig or Rust yet, although I have been interestedly reading about both of them for a while now, on HN and other places, and also in videos, and have read some of the overview and docs of both. But I have a long background in C dev earlier. And I have been checking out C-like languages for a while such as Odin, Hare, C3, etc.
Surely there is no borrow checker, but a lot of memory-safety issues with C and C++ come from a lack of good containers with sane interfaces (std::* in C++ is just bad from a memory safety point of view).
If C++ had gained proper sum types, error handling, and Zig-style templates 15 years ago, instead of the insanity that is modern C++, Rust might not exist or would be much more niche at this point.
AFAIK "P2688 R5 Pattern Matching: match Expression" exists and is due C++29 (what actually matters is when it's accepted and implemented by compilers anyway)
Also, cheap bounds checks (in Rust) are contingent on Rust's aliasing model.
Yes it did, of course. Maybe it takes years of practice, the assistance of tools (there are many, most very good), but it's always been possible to write memory safe large C programs.
Sure, it's easier to write a robust program in almost every other language. But to state that nobody ever produced a memory safe C program is just wrong. Maybe it was just rhetoric for you, but I'm afraid some may read that and think it's a well established fact.
Can you provide examples for it? Because it honestly doesn't seem like it has ever been done.
sqlite
billions of installations and relatively few incidents
Few incidents != not badly exploitable
Few incidents != no more undiscovered safety bugs/issues
I don't think your examples quite cut it.
Well all of them "potentially" do, which is enough from a security standpoint
There have been enough zero days using memory leaks that we know the percentage is also non trivial.
So yes, if programmers can write bugs they will, google SREs were the first to famously measure bugs per release as a metric instead of the old fashioned (and naive) "we aren't gonna write any more bugs"
You can argue that using C or C++ can get you to 80% of the way but most people don't actively think "okay, how do I REALLY mess up this program?" and fix all the various invariant that they forgot to handle. Even worse, this issue is endemic in higher level dynamic languages like Python too. Most people most of the time only think about the happy path.
Ideally you can also design a safe API around it using the appropriate language primitives to model the abstraction, but there’s no requirement.
In practice the vast majority will be accepted, and what remains is stuff the Rust compiler cannot prove to be correct. If Rust doesn't like your code, there are two solutions. The first is to go through the rituals to rewrite it as provably-safe code - which can indeed feel a bit tedious if your code was designed using principles which don't map well to Rust. The second is to use `unsafe` blocks - but that means proving its safety is up to the programmer. But as we've historically seen with C and unsafe-heavy Rust code bases, programmers are horrible at proving safety, so your mileage may vary.
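A tiny illustration of doing the second path carefully: the unsafe block is confined to one spot whose precondition the surrounding code has just established, and callers only ever see the safe API:

// Safe wrapper: the function guarantees the invariant the unsafe block relies on.
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        None
    } else {
        // SAFETY: we just checked that index 0 is in bounds.
        Some(unsafe { *bytes.get_unchecked(0) })
    }
}

fn main() {
    assert_eq!(first_byte(b"hi"), Some(b'h'));
    assert_eq!(first_byte(b""), None);
}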
I don't want to be the "you're holding it wrong" person, but "Rust rejected my valid and safe program" more often than not means "there's a subtle bug in my program I am not aware of yet". The Rust borrow checker has matured a lot since its initial release, and it doesn't have any trouble handling the low-hanging fruit anymore. What's left is mainly complex and hard-to-reason-about stuff, and that's exactly the kind of code humans struggle with as well.
Rust is not the helmet. It is not a safety net that only gives you a benefit in rare catastrophic events.
Rust is your lane assist. It relieves you from the burden of constant vigilance.
A C or C++ programmer that doesn't feel relief when writing Rust has never acquired the mindset that is required to produce safe, secure and reliable code.
Maybe yours is a more apt analogy, but as a very competent driver I can't tell you how often lane assist has driven me crazy.
If I could simply rely on it in all situations, then it would be fine. It's the death of a thousand cuts each and every time it behaves less than ideally that gets to me and I've had to turn it off in every single car a I've driven that has it.
It is a helmet, just accept it. Helmets are useful.
These modern approaches are not languages that result in constant memory-safety issues like you imply.
Or better yet, modern as in 1961, when Burroughs released ESPOL/NEWP and C was still a decade away from being invented.
I didn't narrowly claim the borrow checker (as opposed to the type system or other static analysis) was the sole focus of the tradeoff.
That's true.
> for the sake of safety,
That's false though. All deep dives in the topic find that the core issue is the sheer amount of unoptimized IR that is thrown at LLVM, especially due to the pervasive monomorphization of everything.
Are you really going to try and convince people that this is completely incidental and not a result of pursuing its robust static contracts? How pedantic should we be about it?
I am, because that's what all the people that explored the question converged on.
Now if you have other evidence to bring to the debate, feel free to – otherwise, please stop spreading FUD and/or incompetence.
So... do I as I say, not as I do?
Interesting analogy. I love lane assist. When I love it. And hate it when it gets in the way. It can actively jerk the car in weird and surprising ways when presented with things it doesn’t cope well with. So I manage when it’s active very proactively. Rust of course has unsafe… but… to keep the analogy, that would be like driving in a peer group where everyone was always asking me if I had my lane assist on, where when I arrived at a destination, I was badgered with “did you do the whole drive with lane assist?” And if I didn’t, I’d have explained to me the routes and techniques I could have used to arrive at my destination using lane assist the whole way.
Disclaimer, I have only dabbled a little with rust. It is the religion behind and around it that I struggle with, not the borrow checker.
The optimal way to write Python is to have your code properly structured, but you can just puke a bunch of syntax into a .py file and it'll still run. You can experiment with a file that consists entirely of "print('Hello World')" and go from there. Import a json file with `json.load(open(filename))` and boom.
Rust, meanwhile, will not let you do this. It requires you to write a lot of best-practice stuff from the start. Loading a JSON file in a function? That function owns that new data structure, you can't just keep it around. You want to keep it around? Okay, you need to do all this work. What's that? Now you need to specify a lifetime for the variable? What does that mean? How do I do that? What do I decide?
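For comparison, here's a minimal sketch of the Rust counterpart to that Python one-liner, assuming the serde_json crate (the `load` helper and the file name are made up). Ownership of the parsed value is the first new concept you hit: the function owns it, and returning it is how you "keep it around" without touching lifetimes.

use serde_json::Value;
use std::fs::File;

fn load(path: &str) -> serde_json::Result<Value> {
    let file = File::open(path).expect("could not open file");
    // The parsed Value is owned here; returning it moves ownership to the caller.
    serde_json::from_reader(file)
}

fn main() {
    let data = load("notes.json").expect("invalid JSON");
    println!("{data}");
}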
This makes Rust feel much less approachable and I think gives people a worse impression of it at the start when they start being told that they're doing it wrong - even though, from an objective memory-safety perspective, they are, it's still frustrating when you feel as though you have to learn everything to do anything. Especially in the context of the small programs you write when you're learning a language. I don't care about the 'lifetime' of this data structure if the program I'm writing is only going to run for 350ms.
As I've toiled a bit more with Rust on small projects (mine or others') I feel the negative impacts of the language's restrictions far more than I feel the positive impacts, but it is nice to know that my small "download a URL from the internet" tool isn't going to suffer from a memory safety bug and rootkit my laptop because of a maliciously crafted URL. I'm sure it has lots of other bugs waiting to be found, but at least it's not those ones.
The only problem is the code would be littered with Rc<RefCell<Foo>>. If Rust had a compact notation for that, a lot of the pain of fighting the borrow checker just to avoid the above would be eliminated.
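In the meantime, the usual partial workaround is a type alias - a rough sketch with made-up names; it shortens the spelling but not the `.borrow()`/`.borrow_mut()` ceremony:

use std::cell::RefCell;
use std::rc::Rc;

// Hypothetical alias for the pattern described above.
type Shared<T> = Rc<RefCell<T>>;

struct Foo { count: u32 }

fn main() {
    let foo: Shared<Foo> = Rc::new(RefCell::new(Foo { count: 0 }));
    foo.borrow_mut().count += 1;
    println!("{}", foo.borrow().count);
}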
That sounds like Rhai or one of the other Rust-alike scripting languages.
That said, I'm all the time noodling new small programs in Rust. Cargo and crates.io make this far simpler than with C/C++, where I have to write some code in another language entirely, like [C]Make, to get the thing to build. And I find that the borrow checker and rustc's helpful errors create a sort of ladder where all I have to do to get a working program is fix the errors the compiler identifies. And it often tells me how. Once the errors are fixed one by one, which is easy enough, and the software builds, my experience is that I get the expected program behavior about 95% of the time. I cannot say the same for other languages.
Is it..?
Rust is more like your parents when you are a kid: don't do that, don't do that either! See? You wanted to go out to play with your friends and now you have a bruised knee. What did I tell you? Now go to your room and stay there!
I honestly don't even know what to respond to that, but it's kind of weird to me to honestly think that you'd need essentially a "PhD" in order to use a tool...
“Skill issue” definitely does not mean “mentally deficient”. It comes from the videogame world, where it is used to disparage the lack of training/natural ability of other players; frequently accompanied by “get good”, i.e. continue training & grinding to up your skill.
I struggled with Rust at first but now it feels quite natural, and is the language I use at work and for my open-source work.
I was not "mentally deficient" when I struggled with Rust (at least that I know of :v), while you could say I had a skill issue with the language
And it's not even that I dislike the language, but it is evangelism to just dismiss the point of my argument with "skill issue". A tool isn't supposed to be difficult; it should help you with whatever you're trying to achieve, not make it more difficult.
Turns out not wearing that helmet, and continuously falling down for 40 years at the skate park, has its price.
I'm not sure that that tradeoff is quite so universal. GC'd languages (or even GC'd implementations like Fil-C) are equally or even more memory-safe than Rust but aren't necessarily any less ergonomic. If anything, it's not an uncommon position that GC'd languages are more ergonomic since they don't forbid some useful patterns that are difficult or impossible to express in safe Rust.
That's a myth that just won't die. How is it that people simultaneously believe
1) GC makes a language slow, and
2) Go is fast?
Go's also isn't the only safe GC. There are plenty of good options out there. You are unlikely to encounter a performance issue using one of these languages that you could resolve only with manual memory management.
Easy one: either not the same people, or people holding self contradicting thoughts.
GCs are slow not only because of scanning the memory but also because of the boxing. In my experience, 2 to 3 times slower.
Still a better tradeoff in the vast majority of cases over manual memory management. A GC is well worth the peace of mind.
Not every GC boxes primitives. Most don't.
I think people that talk about GC'd languages being slow are usually not building Rails or Django apps in their day to day.
Go can be made to run much faster than C.
Especially when the legacy C code is complex and thus single threaded, Go's fabulous multicore support means you can be exploiting parallelism and finishing jobs faster, with far less effort than it would take to do it in C.
If you measure performance per developer day invested in writing the Go, Go usually wins by a wide margin.
Not literally the case.
> If you measure performance per developer day invested in writing the Go, Go usually wins by a wide margin.
I can accept that performance/hour-spent is better in Go than C, but that's different from Go's performance ceiling being higher than C's. People often confuse ceilings with effort curves.
It's plenty fast compared to commonly used languages such as JS, PHP or Python, but can easily be left in the dust by Java and C#, which arguably play in the same court.
And AOT-compiled, no GC languages like C++, Rust or Zig just run circles around it.
You are comparing quality of implementation, not languages.
But comparing languages in a vacuum has 0 value. Maybe some alien entity will use physics transcending time and space to make TCL the fastest language ever, but right now I won't be writing heavy data-processing code in it.
For example, comparing languages with LLVM-based implementations usually reveals that, when the machine code isn't the same, it's because they aren't pushing the same LLVM IR down the pipe - which has little to do with what the grammar actually looks like.
Because that's implicit at this point – I'm not going to prefix with “because Earth geometry is approximately Euclidean at our scale” every time I'm telling a tourist to go straight ahead for 300m to their bus station.
Just like when people say “C++ is fast”, of course they refer to clang/g++/msvc, not some educational university compiler.
Of course the authors of many of such comments aren't to blame, they only know what they know, hence why https://xkcd.com/386/
There are always going to be problem sets where the GC causes significant slowdown.
It's totally possible languages as ergonomic as Rust can be more safe, just because Rust isn't perfect or even has some notable, partially subjective, design flaws.
I'm not sure that changes anything about my comment? GC'd languages can give you safety *and* ergonomics, no need to trade off one for the other. Obviously doing so requires tradeoffs of their own, but such additional criteria were not mentioned in the comment I originally replied to.
> I often hear complaints that Rust's semantics actually haven't maximized ergonomics, even factoring in the added difficulty it faces in pursuit of safety.
Well yes, that's factually true. I don't think anyone can disagree that there are places where Rust can further improve ergonomics (e.g., partial borrows). And that's not even taking into account places where Rust intentionally made things less ergonomic (around raw pointers IIRC, though I think there's some discussion about changing that).
> It's totally possible languages as ergonomic as Rust can be more safe
It's definitely possible (see above about GC'd languages). There are just other tradeoffs that need to be made that aren't on the ergonomics <-> safety axis.
One can fail, or artificially make a language less ergonomic and that doesn't mean that fixing that somehow has an effect on the safety tradeoff.
So obviously it is when safety and ergonomics are each already maximized that pushing one or the other results in a tradeoff. It's like saying removing weight from a car isn't a tradeoff because the weight was bricks in the trunk.
Anyways I was holding performance constant in all of this because the underlying assumption of Rust and Zig and Odin and C is that performance will make no sacrifices.
That's not the way I read your original comment. When you said "it's a universal tradeoff, that is: Safety is less ergonomic", to me the implication is that gaining safety must result in losing ergonomics and vice versa. The existence of languages that are as safe/safer than Rust and more ergonomic than Rust would seem to be a natural counterexample since they have gained safety over Zig/C/C++ and haven't (necessarily, depending on the exact language) sacrificed ergonomics to do so.
> One can fail, or artificially make a language less ergonomic and that doesn't mean that fixing that somehow has an effect on the safety tradeoff.
To be honest that case didn't even cross my mind when I wrote my original comment. I was assuming we were working at the appropriate Pareto frontier.
> So obviously it is when safety and ergonomics are each already maximized that pushing one or the other results in a tradeoff.
Assuming no other relevant axes are available, sure.
> Anyways I was holding performance constant in all of this because the underlying assumption of Rust and Zig and Odin and C is that performance will make no sacrifices.
Sure. Might have been nice to include that assumption in your original comment, but even then I'm not sure it's too wise to ignore the performance axis completely due to the existence of safe implementations of otherwise "unsafe" languages (e.g., Zig's ReleaseSafe, GCs for C like Fil-C, etc.) that trade off performance for safety instead of ergonomics.
Guaranteed thread safety is huge. I hope more high-level, GC'd languages use Rust's approach of defining an interface for types that can be safely sent and/or shared across threads.
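For anyone who hasn't seen that interface, a small illustrative sketch: `std::thread::spawn` requires its closure to be `Send`, so a non-thread-safe handle like `Rc` is rejected at compile time, while `Arc` goes through.

use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    let shared = Arc::new(vec![1, 2, 3]); // Arc<T> is Send + Sync when T is
    let handle = thread::spawn({
        let shared = Arc::clone(&shared);
        move || println!("{:?}", shared)
    });
    handle.join().unwrap();

    let local = Rc::new(vec![1, 2, 3]); // Rc<T> is !Send
    // thread::spawn(move || println!("{:?}", local)); // would not compile
    println!("{:?}", local);
}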
To be honest that particular aspect of Rust's memory safety story slipped my mind when I wrote that comment. I was thinking of Java's upcoming (?) disabling-by-default of sun.misc.Unsafe which allows you to ensure that you have no unsafe code at all in your program and your dependency tree outside of the JVM itself. To be fair, that's admittedly not quite the same level of memory safety improvement over Rust as Rust/Java/C#/etc. over C/C++/etc., but I felt it's a nice guarantee to have available.
> Guaranteed thread safety is huge.
I totally agree! It's not the only way to get memory safety, but I definitely lean towards that approach over defining data races to be safe.
I interpreted the parent to be saying that ergonomics IS (at least partly) subjective. The subjective aspect is "what you are used to". And once you get used to Rust its ergonomics are fine, something I agree with having used Rust for a few years now.
> The Rust community should be upfront about this tradeoff
I think they are. But more to the point, I think that safety is not really something you can reasonably "trade-off", at least not for non-toy software. And I think that because I don't really see C/C++/Zig people saying "we're trading off safety for developer productivity/performance/etc". I see them saying "we can write safe code in an unsafe language by being really careful and having a good process". Maybe they're right, but I'm skeptical based on the never-ending proliferation of memory safety issues in C/C++ code.
The issue is the underlying and unfair assumption that is so common in these debates: that the memory-unsafe language we're comparing against Rust is always C/C++, rather than a modern approach like Zig or Odin (which will share many arguments against C/C++).
You can prove to yourself this happens by looking around this thread! The topic is Zig vs. Rust and just look at how many pro-Rust arguments mention C (including yours).
It's a strong argument if we pose C as the opponent, because C can be so un-ergonomic that even Rust with its added constraints competes on that aspect. But compare it to something like Zig or Odin (which has ergonomic and safety features like passing allocators to any and all functions, bounds checking by default, sane slice semantics which preclude the need for pointer arithmetic) and the ergonomics/safety argument isn't so simple.
At the same time, I will argue that Zig’s improvements over C are much less substantial compared to something like Rust. It’s great, but not a paradigm shift.
For example, in Rust when we write `for n in 0..10 {` that 0..10 is a Range, we can make one of those, we can store one in a variable, Range is a type. In Odin `for i in 0..<10 {` is uh, magic, we can't have a 0..<10, it's just syntax for the loop.
in Rust we can `for puppy in litter {` and litter - whatever type that is - just has to implement IntoIterator, the trait for things which know how to be iterated, and they iterate over whatever that iterator does. In Odin only specific built-in types are suitable, and they do... whatever seemed reasonable to Bill.
You can't provide this for your own type, it's a second class citizen and isn't given the same privileges as Odin's built-in types.
If you're Ginger Bill, Odin is great, it does exactly what you expected and it covers everything you care about but nothing more.
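To make the first point concrete, a minimal sketch (with a hypothetical `Countdown` type) of what "Range is a type" and `IntoIterator` buy you in Rust:

struct Countdown(u32);

impl Iterator for Countdown {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        if self.0 == 0 {
            None
        } else {
            self.0 -= 1;
            Some(self.0 + 1)
        }
    }
}

fn main() {
    let r = 0..10; // an ordinary std::ops::Range value, storable and passable
    for n in r {
        print!("{n} ");
    }
    for n in Countdown(3) { // user-defined types get the same `for` privileges
        print!("{n} ");
    }
}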
Not every single semantic element in the language needs to be a type.
`for i in 0..<10`
This isn't "magic," it's a loop that initializes a value `i` and checks against it. It's a lot less "magic" than Rust. The iterable types in Odin are slices and arrays - that is hardly arbitrary like you imply.
The type system in Rust is mostly useful for its static guarantees. Using it for type-gymnastics and unnecessary abstractions is ugly and performative.
Tasks in Odin can be accomplished with simplicity.
The C++ misdirection and unbounded type abstractions are simply not appreciated by many.
If you want a language with no special cases that is 100% fully abstract then program in a Turing machine. I'll take the language designed to make computers perform actions over a language having an identity crisis with mathematics research, all else equal. Unless I'm doing math research of course - Haskell can be quite fun!
Ginger Bill is a PhD physicist as well -- not that education confers wisdom -- but I don't bet his design choices are coming from a resentment of math or abstraction.
Absolute generality isn't the boon you think it is.
Far from just the two special cases you listed I count five, Bill has found need for iterating over:
Both built-in array types, strings (but not cstrings), slices and maps.
`for a, b in c { ... }` makes a the key and b the value if c is a map, but if c were instead an array then a is the value and b is an index. Presumably both these ideas were useful to Bill and the lack of coherence didn't bother him.
Maps are a particularly interesting choice of built-in. We could argue about exactly how a built-in string type should best work, or the growth policy for a mediocre growable array type - and Odin offers two approaches for strings (though not with the same support) - but for maps you clearly need implementation options. And instead, in the name of "simplicity", Bill locks you into his choice.
You're stuck with Bill's choice of hash, Bill's layout, and Bill's semantics. If you want your own "hash table" type that's not map yours will have even worse ergonomics and you can't fix it. Yours can't be iterated with a for loop, it can't be special case initialized even if your users enabled that for the built-in type, and of course all the familiar functions and function groups don't work for your type.
I don't have a use for a "language designed to make computers perform actions" when it lacks the fine control needed to get the best out of the machine but insists I must put in all the hard work anyway.
I'd agree with that if the comparison is JavaScript or Python. If the comparison is Zig (or C or C++) then I don't agree that it's universal. I personally find Rust more ergonomic than those languages (in addition to be being safer).
That is obviously true, otherwise we'd code in assembly and type in UTF-8 byte codes.
Well put! And this should not be a contentious issue; it simply is annoying to deal with Rust's very strict compiler. It's not a matter of opinion, it simply is more annoying than if you were to use any other language that does not put that much burden on you, the developer.
Not all memory safety bugs are critical issues either. We like to pretend like they are but specifically in `coreutils` there were 2 memory safety bugs found recently.
However, is it really a big concern? If someone has gotten access to your system where they can run `coreutils` commands, you probably have bigger problems than them running a couple of commands that leak.
Speaking more abstractly since I haven't looked at the CVEs in question, but an attacker directly accessing coreutils on your system isn't the only possible attack vector. Another potentially interesting one would be them expecting you to run coreutils on something under their control. For example, a hypothetical exploit in grep might be exploitable by getting you to grep through a repo with a malicious file.
A more concrete example would be some of the various zero-click iMessage exploits, where the vulnerable software isn't exploited via the attackers directly accessing the victim's device but is exploited by sending a malicious file.
It's not safety that makes it less ergonomic, it's correctness.
The distinction between correctness and safety is that safety is willing to suffer false positives, in pursuit of correctness. Correctness is just correctness.
That can be true for small programs. Not always, because Rust's type system makes for programs that can be every bit as compact as Python if the algorithm doesn't interact badly with the borrow checker. Or even if it does. For example, this tutorial converts a fairly gnarly C program to Rust: https://cliffle.com/p/dangerust/ The C was 234 lines, the finished memory-safe Rust 198 lines.
But when it comes to large programs, the ergonomics strangely tips into reverse. By "strangely tips into reverse", I mean yes it takes more tokens and thinking to produce a working Rust program, but overall it saves time. Here a "large program" means a programmer can't fit it all in his head at one time. I think Andrew Huang summed the effect up best, when he said if you start pulling on a thread in a Rust program, you always get to the end. In other languages, you often just make the knot tighter.
Anyway, it’s all pretty easy, what’s the use arguing which of multiple easy things is easiest?
In fact, I find it more ergonomic than any other language I ever work with. I'm consistently more productive with it than even scripting languages.
Getting tired of this quip being asserted as fact. Ergonomics are subjective; memory safety is not.
I feel unburdened while using Rust, which is not something I can say about a lot of other dev environments.
As for Zig... I tried to get into it, and I can't remember the specifics, but they felt like "poor" taste in language design (I have a similar opinion of Go). I say taste because I think some things weren't necessarily bad, but I just couldn't convince myself to like them. I realise this is a very minority opinion, and I know great engineers who love Zig.
Zig's just not my thing I guess. Same way Rust isn't someone else's thing.
That hasn't been my experience at all. At best, the first version of code pops out quickly and cleanly because the author knows the appropriate idiom to choose. Refactoring rust code to handle changes in that allocation idiom is extremely expensive, even for the most seasoned developers.
Case in point:
> Once you’ve been using Rust for a while, you don’t have to “restructure” your code to please the borrow checker, because you’ve already thought about “oh, these two variables need to be mutated concurrently, so I’ll store them separately”.
Which fails to handle "these two variables didn't need to be mutated concurrently, but now they do".
In the C/C++/Zig code, you would add the second concurrent access, and then start fixing things up and restructuring things - if you, the programmer, knew about the first access, and knew that the concurrent access is a problem.
In countless cases, that work would not be done, and I cannot blame any of the involved people, because managing that kind of detailed complexity over the lifespan of a project is not humanly possible. The result is another concurrency bug, meaning UB in production.
Having the compiler tell you about such problems up front, exactly when they happen, is a complete game changer.
Well, sure, which in practice means throwing a lock around it.
I mean, I get it. There's a category of bugs that happen in real code. Rust chooses some categories of bugs (certainly not all of them) to construct walls around and forces code into idioms that can be provably correct for at least a subset[1] of the bug space. For the case of memory safety, that's a really pretty convincing case. Other areas are less clear; in particular I'm not a fan of Rust's idea of threadsafety and don't think it fits what actually performance-parallel code needs.
[1] You can 100% write racy code with Sync/Send! You can write racy code with boring filesystem access using 100% safe rust (or 1980's csh code, whatever), too. Race conditions are inherent to concurrency. Sync/Send just protect memory access, they do nothing to address semantic/state bugs.
You can construct something like `PhantomData<&mut ()>` to express invariants like "while this type exists, these other operations are unavailable". You can implement Send and/or Sync to say things like "under these specific conditions, this thing is thread safe". These are really powerful features of the type system, and no other mainstream language can really express such invariants at compile time.
[1] Though with overhead. It tends not to be possible in safe Rust to get it to generate code that looks like a pthread_mutex critical section. This again is one of my peeves: you do dangerous shared-memory concurrency for performance!
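To illustrate the `PhantomData` trick mentioned above, here is a rough sketch with made-up names (not any real API): a guard type that keeps a value mutably borrowed for as long as it exists, so conflicting operations are unavailable until it is dropped.

use std::marker::PhantomData;

struct Device;

// While a DmaInFlight exists, the Device stays mutably borrowed,
// so no other &mut Device methods can be called.
struct DmaInFlight<'a> {
    _device: PhantomData<&'a mut Device>,
}

impl Device {
    fn start_dma(&mut self) -> DmaInFlight<'_> {
        DmaInFlight { _device: PhantomData }
    }

    fn reconfigure(&mut self) {}
}

fn main() {
    let mut dev = Device;
    let dma = dev.start_dma();
    // dev.reconfigure(); // rejected: `dev` is still borrowed by `dma`
    drop(dma);
    dev.reconfigure(); // fine once the guard is gone
}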
I really wish people would quit bleating on about the borrow checker. As someone who does systems programming, that's not the problem with Rust.
Which Trait do I need to do X? Where is Trait Y and who has my destructor? How am I supposed to refactor this closure into a function? Sigh, I have to wrap yet another object as a newtype because of the Orphan Rule. Ah yes, an eight-deep chain of initialization calls because Rust won't do named/optional function arguments. Oh, great, the bug is inside a macro--well, there goes at least one full day. Ah, an entity component system that treats indices like pointers but without the help of the compiler, so I can index into the void and scribble over everything--but, hey, it's memory safe and won't segfault (erm, there is a reason why C programmers groan when they get a stack/heap smasher bug).
He seems to know what he's doing, from the author's Twitter:
Post something slightly mentioning Rust in r/cpp, Rust evangelists show up; post something slightly mentioning Rust in r/zig, Rust evangelists show up. How is this not a cult? Plenty of such people out there.
This guy appears to just personally dislike Rust for reasons undisclosed and tries to rationalize it via posts like this one.
It's like with this former coworker of my former coworker who was really argumentative, seemingly for the sake of it. I did some digging and found that his ex left him and is now happily married.
Turns out that when he was criticizing the use of if-else in Angular templates what he was really thinking about was "if someone else".
> This means that basically the borrow checker can only catch issues at comptime but it will not fix the underlying issue that is developers misunderstanding memory lifetimes or overcomplicated ownership. The compiler can only enforce the rules you’re trying to follow; it can’t teach you good patterns, and it won’t save you from bad design choices.
In the short time that I wrote Rust, it never occurred to me that my lifetime annotations were incorrect. They felt like a bit of a chore, but I thought they said what I meant. I'm sure there's a lot of getting used to using it--like static types--and it becomes second nature at some point. Regardless, code that doesn't use unsafe can't have two threads concurrently writing the same memory.
The full title is "Why Zig Feels More Practical Than Rust for Real-World CLI Tools". I don't see why CLI tools are special in any respect. The article does make some good points, but it doesn't invalidate the strength of Rust in preventing CVEs IMO. Rust or Zig may feel certain ways to use for certain people, time and data will tell.
Personally, there isn't much I do that needs the full speed of C/C++, Zig, Rust so there's plenty of GC languages. And when I do contribute to other projects, I don't get to choose the language and would be happy to use Rust, Zig, or C/C++.
Because they don't grow large or need a multi-person team. CLI tools tend to be one & done. In other words, it's saying "Zig, like C, doesn't scale well. Use something else for larger, longer lived codebases."
This really comes across in the article's push that Zig treats you like an adult while Rust is a babysitter. This is not unlike the sentiment for Java back in the day. But the reality is that most codebases don't need to be clever and they do need a babysitter.
Most of those are more memory safe than C. None of them have the borrow checker. This leaves me wondering why - other than proselytizing Zig - this article would make such a direct and narrow comparison between only Zig and Rust.
So is the error handling boilerplate.
Unix system programming in OCaml, from 1991
I’m aware a few companies use primarily OCaml just as a few use primarily some form of Lisp. It’s just that some of these really nice languages don’t see as much use as they could.
It's a bit messier than that. Basically the only concurrency-related bug I ever actually want help with from the compiler is memory-ordering issues. Rust chose to make those particular racy memory writes safe instead of unsafe.
> Developers are not Idiots
I'm often distracted and AIs are idiots, so a stricter language can keep both me and AIs from doing extra dumb stuff.
I really appreciate this in my role, where I have an office right next to the entrance to the building. I get walk-ins all of the time. When my door is closed, I get knocks on the door all of the time. Both AI and strict languages are great tools in my environment, where focus for me is as abundant as water in a desert.
> Rust’s borrow checker is a pretty powerful tool that helps ensure memory safety during compile time. It enforces a set of rules that govern how references to data can be used, preventing common programming memory safety errors such as null pointer dereferencing, dangling pointers and so on. However you may have noticed the words compile time in the previous sentence. Now if you got any experience at systems programming you will know that compile time and runtime are two very different things. Basically compile time is when your code is being translated into machine code that the computer can understand, while runtime is when the program is actually running and executing its instructions. The borrow checker operates during compile time, which means that it can only catch memory safety issues that can be determined statically, before the program is actually run.
>
> This means that basically the borrow checker can only catch issues at comptime but it will not fix the underlying issue that is developers misunderstanding memory lifetimes or overcomplicated ownership. The compiler can only enforce the rules you’re trying to follow; it can’t teach you good patterns, and it won’t save you from bad design choices.
This appears to be claiming that Rust's borrow checker is only useful for preventing a subset of memory safety errors, those which can be statically analysed. Implying the existence of a non-trivial quantity of memory safety errors that slip through the net.
> The borrow checker blocks you the moment you try to add a new note while also holding references to the existing ones. Mutability and borrowing collide, lifetimes show up, and suddenly you’re restructuring your code around the compiler instead of the actual problem.
Whereas this is only A Thing because Rust enforces rules so that memory safety errors can be statically analysed and therefore the first problem isn't really a problem. (Of course you can still have memory safety problems if you try hard enough, especially if you start using `unsafe`, but it does go out of its way to "save you from bad design choices" within that context.)
If you don't want that feature, then it's not a benefit. But if you do, it is. The downside is that there will be a proportion of all possible solutions that are almost certainly safe, but will be rejected by the compiler because it can't be 100% sure that it is safe.
The thing I wish we would remember, as developers, is that not all programs need to be so "safe". They really, truly don't. We all grew up loving lots of unsafe software. Star Fox 64, MS Paint, FruityLoops... the sad truth is that developers are so job-pilled and have pager-trauma, so they don't even remember why they got in the game.
I remember reading somewhere that Andrew Kelley wrote Zig because he didn't have a good language to write a DAW in, and I think it's so well suited to stuff like that! Make cool creative software you like in Zig, and people that get hella mad about memory bugs can stay mad.
Meanwhile, everyone knows that memory bugs made super mario world better, not worse.
I am fine with ignoring the problems that rust solves, but not because I'm smart and disciplined. It just fits my use-case of making fast _non-critical_ software. I don't think we should rewrite security and networking stacks in it.
I don't think you need the ritual and complexity that rust brings for small and simple scripts and CLI utilities...
Choose the tool that fits your usecase. You would never bring wasm unity to render a static html file. But if you make a browsergame, you might want to.
> The thing I wish we would remember, as developers, is that not all programs need to be so "safe".
"Safety" is just a shorthand for "my program means what I say". Unsafety is semantic gibberish.There's lots of reasons to write artistically gibberish code, just as there is with natural language (e.g. Lewis Carroll). Most programs aren't going for code as art though. They're trying to accomplish something definite through a computer and gibberish is directly counterproductive. If you don't mean what you write or care what you get, software seems like the wrong way to accomplish your goals. I'd still question whether you want software even in a probabilistic argument along these lines.
Even for those cases where gibberish is meaningful at a higher level (like IOCCC and poetry), it should be intentional and very carefully crafted. You can use escape hatches to accomplish this in Rust, though I make no comment on the artistic merits of doing so.
The argument you're making is that uncontrolled, unintentional gibberish is a positive attribute. I find that a difficult argument to accept. If we could wave a magic wand and make all code safe with no downsides, who among us wouldn't?
It doesn't change anything about Super Mario World speedruns because you can accomplish the same thing as arbitrary code execution inputs with binary patching. We just have this semi-irrational belief that one is cheating and one is not.
Anybody would, but Rust is not that wand, and there is no wand.
Code needs to _exist_ in order to matter. Time is finite, my free-time is even more limited. Most of my code is garbage collected, and runs great, but if I needed it to be really fast, I would use Zig.
I don't need to be told what software is for or how to accomplish my goals, and I'm sure you wouldn't understand my goals. I've been making code creatively for almost 30 years. You might as well tell an origami artist that folding paper is a bad way to accomplish their goals.
The attitude among _some_ Rust devs (or armchair coders) that there is no place for non-rust manual-memory languages is insanely disconnected with reality. Games exist, Rust ones hardly do. Synths exist, Rust ones hardly do. Not everything is a high-availability microservice or an OS kernel! Look around!
Edited to add:
> "Safety" is just a shorthand for "my program means what I say". Unsafety is semantic gibberish.
You know this isn't true, right? "Safety" in Rust specifically means memory safety and thread safety: no use-after-free, no data races, no null/dangling pointer dereferences, no buffer overflows. That's it. It doesn't guarantee your program is correct, and it doesn't even prevent memory leaks.
Things being manually managed doesn't make them gibberish, and something being implicit, rather than explicit, doesn't mean it's gibberish.
The attitude of Rust being bug-free is _insaaaane_. A "bug" is just code that breaks expectations, I promise we can and will write those in every language forever.
When I say "my program means what I say", that means the code that is written has some precise meaning that can be faithfully translated into execution (sans hardware/runtime/toolchain bugs).
This is different than "I said what I mean". If you write different code, that may violate expectations and create a bug, but it will still be faithfully translated as written.
Safe Rust attempts to guarantee this with the absence of UB. The definition rust uses still isn't a universal definition, which we agree on. That's why my comment didn't actually talk about Rust. The definition I used should be valid no matter what particular guarantees you choose.
> Things being manually managed doesn't make them gibberish, and something being implicit, rather than explicit, doesn't mean it's gibberish.
I completely agree. I like C, for what it's worth.
Where we disagree is that I'm saying this doesn't scale. A large enough program (for a surprisingly small definition of large) will always have gaps and mistakes that violate your chosen definition of safety, and those bits are the gibberish I'm talking about.
Funny, because that's not anywhere close to what the comment you're replying to states.
They said `"Safety" is just a shorthand for "my program means what I say"`. That's a reasonable explanation: the code you wrote is not working exactly as you intended, due to some sort of unknown behavior.
The "bug" you're talking about would be the program doing exactly what you implemented, but what you implemented is wrong. The difference is so obvious that it's hard to think that you're engaging in a good faith argument.
You could write rust code with logic errors. I could write C with a memory leak that doesn't matter because of the context it runs in. Neither program is gibberish but one of them causes real problems.
I'm open to suggestions on how to clarify things as you're the second comment to misunderstand it and I'm not sure how to better explain it.
Either way, telling someone they have to pick between using a safe language like Rust and writing "semantically gibberish" is a false dichotomy. Please don't call programs written in memory-unsafe languages semantic gibberish until you prove that there are absolutely zero bugs in your program.
self.last.as_ref().unwrap().borrow().next.as_ref().unwrap().clone()
I know it can be improved but that's what I think of
Yes, safety isn't correctness but if you can't even get safety then how are you supposed to get correctness?
For small apps Zig probably is more practical than Rust. Just like hiring an architect and structural engineers for a fence in your back yard is less practical than winging it.
https://play.rust-lang.org/?version=stable&mode=debug&editio...
I once joined a company with a large C/C++ codebase. There I worked with some genuinely expert developers - people who were undeniably smart and deeply experienced. I'm not exaggerating and mean it.
But when I enabled the compiler warnings they had disabled (which annoyed them) and ran a static analyzer over the codebase for the first time, hundreds of classic C bugs popped up: memory leaks, potential heap corruptions, out-of-bounds array accesses, you name it.
And yet, these same people pushed back when I introduced things like libfmt to replace printf, or suggested unique_ptr and vector instead of new and malloc.
I kept hearing:
"People just need to be disciplined allocations. std::unique_ptr has bad performance" "My implementation is more optimized than some std algorithm." "This printf is more readable than that libfmt stuff." etc.
The fact is, developers, especially the smart ones probably, need to be prevented from making avoidable mistakes. You're building software that processes medical data. Or steers a car. Your promise to "pay attention" and "be careful" cannot be the safeguard against catastrophe.
printf("Error: File `%s` in batch %d failed.", file.c_str(), batch) vs fmt::print("Error: File `{}` in batch {} failed.", file, batch)
One of which is objectively safer and more portable than the other. They didn't care. "I like what I've been doing for the last 20 years better because it looks better." "No, it's not because I'm just used to it." "If you are careful it is just as safe. But you gotta know what you are doing."
And best of all - classic elitism:
"If you are not smart enough to do it right with printf, maybe you shouldn't be a C++ programmer. Go write C# or something instead."
The same person was not smart enough to do it right in many places as I've proven with a static analyzer.
You can guide the compiler to check printf-style format strings using __attribute__((format)), btw; it also checks that you are not using a variable as a format string.
It's true, but devs are not infallible and that's the point of Rust. Not idiots, not infallible either.
IMO admitting that one can make mistakes even if they don't think they have is a sign of an experienced and trustworthy developer.
It's not that Rust compiler engineers think that devs are idiots, in fact you CAN have footguns in Rust, but one should never use a footgun easily, because that's how you get security vulnerabilities.
Maybe we'll even get a tabs vs. spaces article next.
Apparently it isn't programming with a straightjacket any longer, like on Usenet discussions.
> Compile-time only: The borrow checker cannot fix logic bugs, prevent silent corruption, or make your CLI behave predictably. It only ensures memory rules are followed.
Also not really true from my experience. There have been plenty of times where the borrow checker is a MASSIVE help in multithreaded context.
The catgirls have no problems producing lots of great software in Rust. It seems more such software comes out every day, nya :3
I'd love to see the actual code here! When I imagine the Rust code for this, I don't really foresee complicated borrow-checker or reference issues. I imagine something like
use std::collections::HashMap;

struct Note {
    filename: String,
    // maybe: contents: String
}

// newtype for indices into `notes`
struct NoteIdx(usize);

struct Notes {
    notes: Vec<Note>,
    tag_refs: HashMap<String, Vec<NoteIdx>>,
}
You store indices instead of pointers. This is very unlikely to be slower: both a usize index and a pointer are most likely 64 bits on your hardware; there's arguably one extra memory deref, but because `notes` will probably be in cache I'd argue it's very unlikely you'll see a real-life performance difference.
It's not magic: you can still mess up the indices as you add and remove notes.
But it's safer: if you mess up the indices, you'll get an out-of-bounds error instead of writing to an unintended location in your process's memory.
Anyway, even if you don't care about safety, it's clear and easy to think about and reason about, and arguably easier to do printf debugging with: "this tag is mentioned in notes 3, 10 and 190, oh, let's print out what those ones are". That's better than reading raw pointers.
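To round out the sketch, a couple of hypothetical helper methods on `Notes` (names made up) showing how the indices get used: a stale index fails loudly instead of corrupting memory.

impl Notes {
    // Adding a note hands back its index, which is what goes into tag_refs.
    fn add(&mut self, note: Note) -> NoteIdx {
        self.notes.push(note);
        NoteIdx(self.notes.len() - 1)
    }

    // Resolving an index can return None, but it can never write through a dangling pointer.
    fn get(&self, idx: NoteIdx) -> Option<&Note> {
        self.notes.get(idx.0)
    }
}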
Maybe I'm missing something? This sort of task comes up all day, every day, while writing Rust code. It's just a pretty normal pattern in the language. You don't store raw references for ordinary logic like this. You do need it when writing allocators, async runtimes, etc. Famously, async needs self-referential structs to store stack-local state between calls to `.await`, and that's why the whole business with `Pin` exists.
(I think Miri will shout at you if you use an invalidated pointer here)
I stole someone else's benchmark to use, and at one point I ran into seriously buggy behavior on strings (but not integers) that wasn't caught at the point where it happened early even with -Odebug.
Turns out the benchmark was freeing the strings before it finished performing all of the operations on the data structure. That's the sort of thing that Rust makes nearly impossible, but Zig didn't catch at all.
That being said, you've missed the point if you can't understand that safety comes at a real cost, not an abstract or 'by any means necessary' cost, but a cost as real as the safety issues.
So this. We recently spent about a month carefully instrumenting and coming to understand a subtle bug in our distributed radio network. This all runs on bare-metal C (SAMD21 chips). Because timing, and hundreds of little processors, and radios were all involved, it was a pita to surface what the issue was. It was algorithmic. Not a memory problem. Writing this in Rust or Zig (instead of straight C) would not have fixed this problem.
I’d like to consider doing next generations of this product in zig or rust. I’m not opposed. I like the extra tools to make the product better. But they’re a small part of the picture in writing good software. The borrow checker may improve your code, it doesn’t guarantee successful software.
I agree the borrow checker can be a pain though, I wish there were something like Rust with a great GC. Go has loads of other bad design decisions (err != nil, etc.) and Cargo is fantastic.
(If you go no GC "because it's fun" then there's no need for the post in the first place --- just use what's fun!)
Not Go because of its anaemic type system.
Most mobile games are implemented in a system with GC (Unity with il2cpp), and it's not even a /good/ GC, it's Boehm.
Distribution can also be a lot easier if you don't need to care about the user having a specific version of Python or specific packages available.
- Interlisp => https://interlisp.org/
- Cedar => https://www.youtube.com/watch?v=z_dt7NG38V4
Imagine what we could have today with hardware that is mostly busy running Electron crap and CLI tools from the 1970s.
"Cognitive overhead: You’re constantly thinking about lifetimes, ownership, and borrow scopes, even for simple tasks. A small CLI like my notes tool suddenly feels like juggling hot potatoes."
None of this goes away if you are using C or Zig, you just get less help from the compiler.
"Developers are not idiots"
Even intelligent people will make mistakes because they are tired or distracted. Not being an idiot is recognising your own fallibility and trying to guard against it.
What I will say, that the post fails to touch on, is: The Rust compiler's ability to reason about the subset of programs that are safe is currently not good enough; it too often rejects perfectly good programs. A good example of this is the inability to express that the following is actually fine:
struct Foo {
    bar: String,
    baz: String,
}

impl Foo {
    fn barify(&mut self) -> &mut String {
        self.bar.push_str("!");
        &mut self.bar
    }

    fn bazify(&self) -> &str {
        &self.baz
    }
}

fn main() {
    let mut foo = Foo {
        bar: "hello".to_owned(),
        baz: "wordl".to_owned(),
    };
    let s = foo.barify();
    let a = foo.bazify();
    s.push_str("!!");
}
which leads to awkward constructs like

fn barify(bar: &mut String) -> &mut String {
    bar.push_str("!");
    bar
}

// in main
let s = barify(&mut foo.bar);

There has been discussion to solve this particular problem[0].
Rust being subpar here for so long just shows how little people want to fund these problems and how hard they are to solve at compile time.
I ofc like Zig quite a bit but I find Rust to suit my tastes better. Zig feels too much like C with extra steps. And the lack of good tooling and stability around Zig hurts large scale adoption.
But I think in 10 years Zig will be the de facto better-ish C.
And Rust will be the low level language for any large project where safety is amongst the top 3 priorities.
Zig feels much younger than Rust, so we'll see how it develops, but it's certainly interesting. In particular, comptime and explicit allocators are two ideas I hope Rust borrows from Zig.
> And Rust will be the low level language for any large project where safety is amongst the top 3 priorities.
Personally I don't really see what'd be left for Zig, because in most software of consequence safety is already a top 3 priority.
I believe that explains why many game developers, who have a very complex job to do by default, usually see the Rust tradeoff as not worth it. Less optionality in system design compounds the difficulty of an already difficult task.
If The Rust Compiler never produced false positives it should in theory be (ignoring syntactic/semantic flaws) damn-near as ergonomic as anything. Much, much easier said than done.
In particular, if your comparison point is C and Zig and you don't care about safety you could use unsafe, knowing you are likely triggering UB, and be in mostly the same position as you would in C or Zig.
And I was making a point even more general than prototyping, though I also wouldn't discount the importance of that either.
Weird that they don’t consider other options, in particular languages with reference counting or garbage collection. Those will not solve all ownership issues, but for immutable objects, they typically do. For short-running CLI tools, garbage collecting languages may even be faster than ones with manual memory management because they may be able to postpone all memory freeing until the program exits.
If I don't need absolute best performance, I can use GC-ed systems like Node, Python, Go, OCaml, or even Java (which starts fast now thanks to Graal AOT) and enjoy both the safety and expressive power of using a high-level language. When I use a GCed language, I don't have to worry about allocation, lifetimes, and so on, and the user gets a plenty good experience.
If I need the performance only manual memory management can provide (and this situation arises a lot less often than people think it does), I can justify spending the extra time expressing my thoughts in Rust, which will get me both performance and safety.
Why would I go to the trouble of using Zig instead of Rust? Zig, like Rust, incurs a complexity and ecosystem cost. It doesn't give me safety in exchange. I put in about as much effort as I would into a Rust program but don't get anything extra back. (Same goes if you substitute "C++" for "Rust".)
> All it took was some basic understanding of memory management and a bit of discipline.
Is the idea behind Zig just that it's perfectly safe if you know what you're doing --- therefore using Zig is some kind of costly signal of competence? That's like someone saying free-solo-ing a cliff face is perfectly safe if you know what you're doing. Someone falls to his death? Skill issue, right?
We have decades of experience showing that nobody, no matter how much "understanding" and "discipline" he has, can consistently write memory-safe code with manual memory management in a language that doesn't enforce memory safety rules.
So what's the value proposition for Zig?
Are you supposed to use it instead of something like Go or Python or AOT-Kotlin and spend more of your time dealing with memory than you would in one of these languages? Why?
Or are you supposed to use it instead of Rust and get, what, slightly faster compile times, maybe? And no memory safety?
If your program runs for a short time and then exits, arena allocation is an option. That seems to be what the author means by "CLI tools". It's the lifetime, not the input format.
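In Rust terms, the crudest version of that is just leaking: a hedged sketch assuming a run-once tool (file name made up), where the OS reclaims everything at exit anyway.

fn main() {
    // Box::leak trades deallocation away for a 'static borrow, which
    // sidesteps most lifetime juggling in a short-lived process.
    let config: &'static str = Box::leak(
        std::fs::read_to_string("config.txt")
            .unwrap_or_default()
            .into_boxed_str(),
    );
    println!("{config}");
}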
"Rust is amazing, if you’re building something massive, multithreaded, or long-lived, where compile-time guarantees actually save your life. The borrow checker, lifetimes, and ownership rules are a boon in large systems."
Yes. That's really what Rust is for. I've written a large metaverse client in Rust, and one of the regression tests I run is to put an avatar in a tour vehicle and let it ride around for 24 hours. About 20 threads. No memory leaks. No crashes. That would take a whole QA team and lots of external tools such as Valgrind in C++, and it would be way too slow in any of the interpreted languages.
Off-topic - that sounds amazing, is this commercial or hobby software? Any way I could learn more about it?
As a professional Rust developer, I don’t find I do this. I occasionally think of those things. But I do remember a short adjustment period when I was learning Rust that I would get frustrated by the borrow checker. Of course, that’s it doing its job!
I would actually be interested in a honest comparison of a Rust CLI program using clap, duct and some other standard CLI handling libraries with their Zig equivalents.
Show the code side by side, how much boilerplate each requires. Then compare how quickly they compile in debug and release mode, how fast they start, how big the binaries are after stripping, stuff like that.
There is a lot of research showing that a high percentage of security bugs are from memory safety issues across many different studies. I believe this is why people are pushing moving to Rust.
However, if you don’t get memory safety in Zig, why bother moving from your existing language? Why not just skip learning a new language and code where you are?
Because Zig is a better language across the board.
And that includes safety. You can have a language that doesn't do heavy static-analysis like Rust which still makes safety a lot easier than C/C++.
Memory safety is not even a flaw of C/C++, it's a tradeoff. That being said, even if memory-safety was a 'feature' (rather than a tradeoff, and yes, Rust did a better job minimizing the trade than GC or FP languages), it's not the only feature.
Null pointers disallowed by default. Slices. Superb cross-compilation. Easy C interop. Comptime instead of C preprocessor.
There are lots more.
The problem Rust is up against is that the number of people who want Rust simply because of "strict typing" far, far, far outnumbers those who care about safety or speed. And that leads to the issue that most people really should be using a GC language like OCaml rather than Rust.
Unfortunately, the OCaml ecosystem ... :(
You could argue with the reasoning that C feels more practical than Zig for real-world CLI tools.
The argument provided by the author feels a bit beside the point.
Also, I started programming on a TRS-80 Color Computer with cassette tape storage, and it still feels crazy to just do stuff and it allocates somewhere and you have little control over what happens, and some other process runs in the background that tries to clean up after you. I bounce off of languages (for hobby projects) with such a large list of features and with executable sizes that are so huge. I can't help it.
Experienced Rust coders aren't going to find themselves in various borrow checker (and lifetime) pitfalls that newbies do, sure.
That said, the borrow checker and lifetime do cause problems for even experienced Rust programmers. Not because they don't understand memory management, lifetimes, etc. But because they don't - yet - fully understand the problem being solved.
All programs are an evolutionary process of developing a solution to a problem (or many problems). You think one thing, code it up, realize you missed something or didn't fully grok the issue, pivot, etc.
Rust does a great job in the compiler of letting the user know if they've borked something. But often times a refactor/fix in C/D/Zig due to learned (or new) requirements is just a tweak, while in Rust it becomes a major overhaul because now something needs to be `mut` or have a lifetime added to it. I - personally - consider that "fighting" the borrow checker, regardless of how helpful (or correct) it also is.
C also allows to produce memory safe software with a bit of discipline. This "bit of discipline" is the issue here which developers are lacking.
- out of bounds access (70% of CVEs)
- nullptr dereferences
- type safety issues
That’s massively better than C. Preventing use after free errors requires much less discipline than never missing a boundary or bungling a signed/unsigned conversion.
Zig also has a knock-your-socks-off incredible cross-platform build system, empowers some really nice optimizations/ergonomics with comptime, and has orders of magnitude faster build times than C++/Rust.
Zig is still < v1.0 so standard library could use some work and there are other warts, but I think it will be a great choice for performance oriented programs in the future.
Here is a bunch of more memory safe languages that are more popular than Rust: Python, Java, JS ... should I continue?
But wait, there's more. Can Rust verify that arbitrary input data is valid, like having contracts statically checked at compile time? It can't. So then, by the same logic as some people calling non-Rust languages "gibberish" for not having a borrow checker, should we call Rust "gibberish" because it can't check those invariants at compile time like C3 can?
No?
Then get off that high horse.
And by the way, my great fear is that we'll get more adoption by Rust while at the same time Rust is unable to improve the compile times. C++, Swift and Rust, the three horsemen of infamously long compile times.
The borrow checker isn't the main problem, just like C++'s problem isn't complex template error messages – it's that both take so long to compile. Time you could spend reading and testing the code. Long compile times inhibit refactoring and cause bugs to stay unfixed.
Rust's concept of "safety" really means "absence of Undefined Behavior". Lots of people don't seem to understand that C, C++, and Zig programs containing UB are gibberish - they are not C, C++, or Zig programs, but something else entirely. The key insight here is that in any language, UB invalidates the entire program. It no longer does what you think it does, your tests are all moot, and so on. But a lot of people seem to think that there is an acceptable amount of UB in any code base. There isn't.
UB is a concept that exists for every language, even languages like Python, Java, C#, JavaScript, but those languages make it very hard to encounter accidentally.
Before Rust, there was no way to guarantee the absence of UB without a significant runtime cost, so that's a very meaningful invention.