I think it's also important not to centre Rust alone. In the larger picture, Rust has a combo of A) good timing and B) the best evangelism. It stands on decades of memory-safe language and runtime development, as well as the efforts of those languages' many advocates.
We're really talking about resistance to memory safety in the last redoubts of unsafety: browsers and operating systems.
Was a fascinating detective story to illustrate it.
And control systems: C++ (along with PLCs, of course) dominates in my experience developing maritime software, and there doesn't appear to be much inclination toward change.
And the VMs for the two languages that you mentioned above (edit: though to be fair to your comment, I suppose those were initially written 20+ years ago).
And probably lots of robotics, defense, and other industries
Granted, those aren’t consumer problems, but I would push back on the “last redoubts”.
We should absolutely move toward memory safe languages, but I also think there are still things to be tried and learned
.. and other performance-critical areas like financial applications (HFT), high-performance computing (incl. AI/ML), embedded, IoT, gaming/engines, databases, compilers, etc. Browsers and OSes are highly visible, but there is a gigantic ton of new C++ code written every day in spite of the availability of memory-safe languages.
There are plenty of people, though, who argue that everything must be memory safe (and therefore rewritten in Rust :) I personally don't agree with that sentiment and it seems like you don't agree either.
If you look at what unsafe languages are used for, it mostly falls into two camps (ignoring embedded). You have legacy code e.g. browsers, UNIX utilities, etc which are too expensive to rewrite except on an opportunistic basis even though they could be in principle. You have new high-performance data infrastructure e.g. database kernels, performance-engineered algorithms, etc where there are still significant performance and architectural advantages to using languages like C++ that are not negotiable, again for economic reasons.
Most of the "resistance" is economic reality impinging on wishful thinking. We still don't have a practical off-ramp for a lot of memory-unsafe code. To the extent a lot of evangelism targets these cases it isn't helpful. It is like telling people living in the American suburbs that they should sell their cars and take the bus instead.
There are critical systems today that are essentially Prince Rupert’s drops. Mightily impressive, but with catastrophic weaknesses in the details.
I'm wondering what the cost would be of rewriting Chrome, at 20 to 30 million lines of code, in Rust?
I suspect that despite the memory unsafety, the cost of maintaining it in its current form is vastly lower than this.
Plus, any rewrite will certainly introduce new bugs, some of them temporarily serious. Did you see the post years back about a Rust program that exhibited the Heartbleed bug?
These new bugs need to be taken into account when estimating the cost of rewrite.
https://compat-table.github.io/compat-table/es6/
Chrome is currently unable to support a feature that was added to JavaScript in 2015 for ECMAScript 6.
The reason given was something about proper tail calls being beyond the technical capabilities of the teams involved.
If the code in or surrounding Chrome and its underlying V8 engine are currently so unmaintainable that the teams cannot incorporate a JavaScript feature from 10 years ago, then the cost of merely maintaining the C++ codebase is too high.
The all-or-nothing, now-or-never framing makes the change feel more intimidating than it would be in practice. Mozilla's strategy is to incrementally use Rust more and more in their C++ codebase. I don't know what Chrome's plan is, but the fact that Mozilla is able to make progress is an indication that it isn't impossibly expensive to do better. Mozilla is a non-profit, while Google's Q1 2025 revenue was $77.3 billion.
> Did you see the post years back about a Rust program that exhibited the Heartbleed bug?
Do you remember the actual Heartbleed bug?
> 20 to 30 million lines
In my own experience, seasoned engineers often remind me that every line of code is a liability. Tens of millions of lines of C++ that work closely with the internet sounds like quite the surface area.
Vividly. I spent a full week on remediation, even though the risk we had was traced to a single linux box exposed to the internet that had tens of kb of traffic over the last year.
Being proactive, we reissued all certificates for all of our internally deployed ssl points.
> In my own experience, seasoned engineers often remind me that every line of code is a liability. Tens of millions of lines of C++ that work closely with the internet sounds like quite the surface area.
No question. I don't question the wisdom of rewriting all of it in Rust. Having spent 60 years in the software business, I have a feeling for the size of the effort. And for what it is worth, I don't have any doubt about the competency of the teams involved.
Unlike Python or Java, it's both compiled and fast.
Ok, just glanced at my corp workstation and some Java build analysis server is using 25GB RES, 50GB VIRT when I have no builds going. The hell is it doing.
Allocating a heap of the size it was configured to use, probably.
Java is also fairly greedy with memory by default. It likes to grow the heap and then hold onto that memory unless 70% of the heap is free after a collection. The ratios used to grow and shrink the heap can be tuned with MinHeapFreeRatio and MaxHeapFreeRatio.
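As an illustration, those knobs are plain JVM options (the values and `app.jar` here are arbitrary placeholders; tune for your own workload):

```shell
# Ask the GC to return memory more aggressively: shrink the heap when more
# than 40% of it is free after a collection, grow it when less than 20% is free.
java -Xmx4g -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -jar app.jar
```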
1. Pool and reuse objects that would otherwise be garbage collected. Use `new` sparingly.
2. Avoid Java idioms that create garbage, e.g. `for (String s : strings) {...}`; substitute `for (int i = 0, strings_len = strings.length; i < strings_len; i++) { String s = strings[i]; ... }`
I wrote performance-engineered Java for years. Even getting it to within 2x worse than performance-engineered C++ took heroic efforts and ugly Java code.
They are careful, in particular, never to state that bindgen emits the wrong code. Maybe they could have said that bindgen in fact does handle this case correctly. But Omniglot seems to be doing a lot more than bindgen, and bindgen's own help lists:
    --constified-enum <REGEX>         Mark any enum whose name matches REGEX as a series of constants
    --constified-enum-module <REGEX>  Mark any enum whose name matches REGEX as a module of constants
IMO, saying bindgen avoids the issue presented in the article is not accurate.
edit: formatting
You can force it to generate Rust enums, but it doesn't by default.
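To make the distinction concrete, here's a sketch of the *shape* of bindgen's default "constified" output for a C `enum status { STATUS_OK = 0, STATUS_ERR = 1 };` (names are illustrative; exact output depends on bindgen version and flags):

```rust
#![allow(non_camel_case_types, non_upper_case_globals)]

// Default bindgen shape: a type alias plus integer constants, so *any*
// u32 value is a representable `status` -- no UB from an out-of-range tag,
// unlike a #[repr(u32)] Rust enum.
pub type status = u32;
pub const status_STATUS_OK: status = 0;
pub const status_STATUS_ERR: status = 1;

// Because every u32 is valid, callers are forced to handle unknown values.
fn describe(s: status) -> &'static str {
    match s {
        status_STATUS_OK => "ok",
        status_STATUS_ERR => "err",
        _ => "unknown",
    }
}

fn main() {
    // A value C might hand back that no enum variant covers:
    println!("{}", describe(42));
}
```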
The referenced footnote, [9], leads to: https://www.usenix.org/conference/osdi25/presentation/schuer...
Please don't do this, thanks!
It is true that some decisions people make aren't rational, and it may even be true that most decisions most people make aren't entirely rational, but the claim that the whole software market, which is under selective pressures, manages to make irrationally wrong decisions in a consistently biased way is quite extraordinary and highly unlikely. What is more likely is that the decisions are largely rational, just don't correspond to your preferences. It's like the VHS vs. Betamax story. Fans of the latter thought that the preference for the former was irrational because of the inferior picture quality, but VHS was superior in another respect - recording time - that mattered more to more people.
I was programming military applications in Ada in the nineties (also not memory-safe, BTW) and I can tell you we had very good reasons to switch to C++ at the time, even from a software correctness perspective (I'm not saying C++ still retains those particular advantages today).
If you think so many people who compete with each other make a decision you think is obviously irrational, it's likely that you're missing some information.
Cyber Security itself is an example of this. It may seem rational to want guarantees of security for the entire supply chain. But that simply isn't possible in reality.
A professional effort is the judicious application of resources to the highest priorities. That includes care in design and testing. Applications built with C and C++ are running everywhere around the world, every minute of every day.
0. https://stackoverflow.com/questions/28426191/how-to-specify-...
1. https://hackage.haskell.org/package/range-0.3.0.2/docs/Data-...
Ada does not have curly braces.
Stephen Bourne, author of the Bourne shell, used macros to make C (a memory-unsafe language) look like Algol 68 (a memory-safe language).
The root of the problem is measurement. Speed is one of the few dimensions of software that is trivially quantifiable, so it becomes the yardstick for everything. This is textbook McNamara Fallacy[1]: what is easy to measure becomes what is measured, and what is not easily measured is erased from the calculus. See developer velocity, cognitive overhead, maintainability, and joy. It's the same fallacy that McNamara made in Vietnam and Rumsfeld made in the War on Terror, so at least they're in good company.
This singular focus distorts decisions around language choice, especially among the inexperienced, who haven't yet learned to recognize trade-offs or to value the intangibles of the software process. Like you said, humans are irrational, but this is one particularly spectacular dimension of that irrationality.
I find it hard to reconcile this with the actual observed trend of all software getting slower and more memory intensive over time
Application performance is a very important factor. To ignore it is foolish.
I'm just wondering about the explanation of listing 2, where you say:
> a discriminant value indicating the enum’s active variant (4 bytes)
As far as I can find, there's no guarantee of that. The only thing I can find is that the discriminant may be interpreted as an `isize` value, but the compiler is permitted to use a smaller type: https://doc.rust-lang.org/reference/items/enumerations.html#...
Is there any reason to say it should be 4 bytes?
It doesn't change any of the conclusions, I'm just curious
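A quick way to poke at this is `size_of` (a sketch; the exact numbers are compiler- and target-dependent, which is the point):

```rust
use std::mem::size_of;

// Default repr: the compiler is free to pick a small discriminant
// (and may use niche optimizations).
enum DefaultRepr { A(u8), B(u8) }

// repr(C): laid out like a C tagged union -- a C-int-sized tag plus
// a union payload, which is where the "4 bytes" figure comes from.
#[repr(C)]
enum ReprC { A(u8), B(u8) }

fn main() {
    // On common 64-bit targets this prints something like 2 and 8,
    // but neither size is guaranteed by the language for the default repr.
    println!("default: {}", size_of::<DefaultRepr>());
    println!("repr(C): {}", size_of::<ReprC>());
}
```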
But then again, modeling a C enum as a Rust enum is bad design. You want to use consts in Rust and match against those.
But it is a bad example in general, because the author passes a pointer to a string slice across FFI without first converting it to a CString, so it isn't null-terminated.
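For C APIs that do expect a NUL-terminated string (rather than a pointer-plus-length pair), the usual conversion goes through `CString`. A minimal sketch:

```rust
use std::ffi::CString;

fn main() {
    let s = "hello";                  // &str: length-delimited, no trailing NUL
    let c = CString::new(s).unwrap(); // copies the bytes and appends the NUL
    // This is the byte sequence a C `const char *` consumer actually needs:
    assert_eq!(c.as_bytes_with_nul(), b"hello\0");
    // `c.as_ptr()` is the pointer to pass across FFI, valid while `c` lives.
    println!("bytes with NUL: {}", c.as_bytes_with_nul().len());
}
```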
That makes sense, they just don't use repr(C) for the PrintResult so I didn't consider that.
> But then again, modeling a C enum as a Rust enum is bad design. You want to use consts in Rust and match against those.
That makes sense, but if there were a way to safely generate code that converts to an enum, as proposed in the article, that would be good, as the enum is more idiomatic.
> But it is a bad example in general, because the author passes a pointer to a string slice across FFI without first converting it to a CString, so it isn't null-terminated.
The signature for async_print in C is `async_res_t async_print(const uint8_t *, size_t)` and they are passing a pointer derived from a &[u8] created from a byte-string literal, so I think it's correct.
Just the syntax is miserable punctuation soup to start with.
That said, it might be useful. The demo case is contrived, though. Passing Rust async semantics into C code is inherently iffy. I'd like to see something like OpenJPEG (a JPEG 2000 encoder written in C) safely encapsulated in this way.
Also I think there's other great things about Rust other than _just_ memory safety
timewizard•5mo ago
In a language with the `unsafe` construct and effectively no automated tooling to audit its uses, you have no guarantee of any significance. You've just slightly changed where the security boundary _might_ lie.
> There is a great amount of software already written in other languages.
Yea. And development of those languages is ongoing. C++ has improved the memory-safety picture quite a bit over the past decade and shows no signs of slowing down. There is no "one size fits all" solution here.
Finally, if memory safety were truly "table stakes" then we would have been using the dozens of memory safe languages that already existed. It should be blindingly obvious that /performance/ is table stakes.
noisem4ker•5mo ago
I think a big part of it is just inertia.
AlotOfReading•5mo ago
Cargo allows you to apply rustc lints to the entire project, albeit not dependencies (currently). If you want dependencies you need something like cargo-geiger instead. If you find unsafe that way, you can report it to the rust safety dance people, who work with the community to eliminate unsafe in crates.
All of this is worlds ahead of the situation in C++.
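For a single crate, the in-language mechanism is a crate-level lint. A minimal sketch:

```rust
// Crate root: make any `unsafe` block in *this* crate a hard compile error.
// (Dependencies are unaffected, which is why tools like cargo-geiger exist.)
#![forbid(unsafe_code)]

fn main() {
    // Uncommenting the next line fails to compile under forbid(unsafe_code):
    // let x = unsafe { *(&1 as *const i32) };
    println!("no unsafe in this crate");
}
```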
vlovich123•5mo ago
However, let me apply the same nitpicking attitude here that you're applying to their argument, regarding the ease with which unsafe can be kept out of a complex codebase. unsafe is pretty baked into the language, because there are constructs the Rust compiler can't ever prove safe (e.g. a doubly-linked list), can't prove safe today (e.g. various accessors like split), or that are required for basic operations (e.g. allocating memory). Pretending you can forbid unsafe code wholesale in your dependency chain is not practical, and that's ignoring soundness bugs in the compiler itself. That doesn't detract from the inherent advantage of safe-by-default.
AlotOfReading•5mo ago
It's not easy in Rust, but it's possible.
imglorp•5mo ago
Industry is seeing quantifiable improvements, eg: https://thehackernews.com/2024/09/googles-shift-to-rust-prog...
zaphar•5mo ago
C++ has artificially limited how much it can improve the memory safety picture because of their quite valid dedication to backwards compatibility. This is a totally valid choice on their part but it does mean that C++ is largely out of the running for the kinds of table stakes memory safety stuff the article talks about.
There are dozens of memory safe languages that already exist: Java, Go, Python, C#, Rust, ... And a whole host of other ones I'm not going to bother listing here.
tialaramex•5mo ago
Nah, there's a famous WG21 (the C++ committee) paper named "ABI: Now or Never" which lays out just some of the ever-growing performance cost of the choices the committee has made to preserve ABI. It explains that if this cost is to be considered a price paid for something, the committee needs to pick "Never"; if they instead want to stop paying the price, they need to pick "Now"; and if, as the author suspects, they don't actually care, they should pick neither, and C++ should be considered obsolete.
The committee, of course, picked neither, and lots of people who were there have since defended this claiming that this was a false dilemma - they were actually cleverly picking "Later" which that author didn't offer. Each time they've repeated this more time has passed yet they're still no closer to this "Later" ...