"For trivial relocatability, we found a showstopper bug that the group decided could not be fixed in time for C++26, so the strong consensus was to remove this feature from C++26."
[https://herbsutter.com/2025/11/10/trip-report-november-2025-...]
I was really looking forward to this feature, as it would've helped improve Rust <-> C++ interoperability.
I'm not even going to try to understand this.
Using C++ these days feels like continuing to use the earth-centric solar system model to explain the motion of planets vs a much simpler sun-centric model: unnecessarily over-complicated.
C++ is still high on the TIOBE index mainly because it is indeed old and used in a lot of legacy systems. For new projects, though, there's less reason to choose C++.
[1] https://www.warp.dev/blog/why-is-building-a-ui-in-rust-so-ha...
I’d say this if Rust were at the #1 spot too.
It’s important to remember that C++ has a 30-year head start on Rust, especially during a crucial growth period in computing. That's why it tops the TIOBE index. But I fully expect it to go the way of COBOL sooner rather than later, with most new development no longer using C++.
Sometimes you just need to move forward.
Python 3 should be studied for why the transition didn't work, as opposed to being taken as a lesson never to try again.
I'm curious about this in particular. It seems like the Python 2 to 3 transition is a case study in why backwards compatibility is important. Why would you say the lesson isn't necessarily that we should never break backwards compatibility? It seems like it almost could've jeopardized Python's long-term future. From my perspective it held on just long enough to catch a tail wind from the ML boom starting in the mid 2010s.
Often you hear the advice that when using C++ your team should restrict itself to a handful of features. ...and then it turns out that the next library you need requires one of the features you had blacklisted, and now it's part of your codebase.
However, if you don't grow and evolve your language you will be overtaken by languages that do. Your community becomes moribund, and more and more code gets written that pays for the earlier mistakes. So instead of splitting your community, it just starts to atrophy.
Python 2 to 3 was a disaster, so it needs to be studied. I think the lesson was that they waited too long for breaking changes. Perhaps you should never go too many releases without breaking something so the sedentary expectation never forms and people are regularly upgrading. It's what web browsers do. Originally people were saying "it takes us 6 months to validate a web browser for our internal webapps, we can't do that!" ...but they managed and now you don't even know when you upgrade.
The specifics really matter in this kind of analysis.
Backwards compatibility is C++'s greatest asset. I have already taken part in a few rewrites away from C++, exactly because the performance of compiled managed languages was good enough and the whole thing was getting rebooted anyway.
If you're just writing application software for consumers or professionals, or a network service, and it's destined to run on one of the big three families of operating systems using one of the few big established hardware architectures at that scale, there are definitely alternatives that can make your business logic and sometimes even your key algorithms simpler or clearer, or your code more resistant to certain classes of error.
If you look at Rust and see "this does everything I could imagine doing, and more simply than C++", there's nothing wrong with that, because you're probably right for yourself. But there are other projects out there that other people work on, or can expect to find themselves working on someday, that still suit C++, and it's nice for the language to keep maturing and modernizing for their sake, while maintaining its respect for all the underlying weirdness they have to navigate.
Tangentially, is there a good alternative to Qt or SDL in Rust yet?
It doesn't though, or at least none of those echoes are why C++ is complex. Here are some examples of unnecessary complexity.
The rules of 3/5 exist solely because of copy/move/assign semantics. They would not need to exist if the semantics were simpler.
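A minimal sketch of what those rules cost: own a single raw resource (Buffer is a made-up name for illustration) and you're on the hook for all five special members, or the compiler-generated defaults will double-free or leak:

    #include <algorithm>
    #include <cstddef>
    #include <utility>

    struct Buffer {
        char* data = nullptr;
        std::size_t size = 0;

        explicit Buffer(std::size_t n) : data(new char[n]), size(n) {}

        ~Buffer() { delete[] data; }                 // 1. destructor

        Buffer(const Buffer& o) : Buffer(o.size) {   // 2. copy constructor
            std::copy(o.data, o.data + o.size, data);
        }

        Buffer& operator=(const Buffer& o) {         // 3. copy assignment
            Buffer tmp(o);                           // copy-and-swap
            std::swap(data, tmp.data);
            std::swap(size, tmp.size);
            return *this;
        }

        Buffer(Buffer&& o) noexcept                  // 4. move constructor
            : data(std::exchange(o.data, nullptr)),
              size(std::exchange(o.size, 0)) {}

        Buffer& operator=(Buffer&& o) noexcept {     // 5. move assignment
            std::swap(data, o.data);
            std::swap(size, o.size);
            return *this;
        }
    };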
Programmers need to be aware of expression value categories (lvalue, rvalue, etc.). Most languages keep these concepts as internal details of their IRs; C++ leaks them into error messages because of the complex semantics of expression evaluation.
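You can see the leak in something as ordinary as overload resolution; this little sketch compiles as annotated:

    #include <utility>

    void f(int&)  {}  // binds only to lvalues
    void f(int&&) {}  // binds only to rvalues

    int main() {
        int x = 0;
        f(x);             // x is an lvalue   -> f(int&)
        f(42);            // 42 is a prvalue  -> f(int&&)
        f(std::move(x));  // xvalue           -> f(int&&)
    }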
SFINAE is a subtle rule of template substitution that is exploited for compile-time introspection, because real introspection is simply missing from the language despite the clear desire for it.
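The classic detection idiom shows the contortion: a substitution failure in a partial specialization pressed into service as introspection (a C++17 sketch; has_reserve is a made-up name):

    #include <cstddef>
    #include <type_traits>
    #include <vector>

    // Primary template: assume T has no reserve().
    template <class T, class = void>
    struct has_reserve : std::false_type {};

    // If t.reserve(n) is well-formed, substitution succeeds and this
    // specialization wins; if not, the failure is silently discarded
    // (SFINAE) and we fall back to the primary template.
    template <class T>
    struct has_reserve<T,
        std::void_t<decltype(std::declval<T&>().reserve(std::size_t{}))>>
        : std::true_type {};

    static_assert(has_reserve<std::vector<int>>::value);
    static_assert(!has_reserve<int>::value);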
The C++ memory model for atomics is a particular source of confusion and incorrectness in concurrent programs, because it decomposes a fairly simple problem domain into an (arguably too small) set of ordering primitives that are easy to misuse, and misuse creates surprisingly complex emergent behavior that is hard to debug.
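A sketch of the kind of misuse I mean (producer/consumer are illustrative names); this looks plausible and is wrong:

    #include <atomic>

    std::atomic<bool> ready{false};
    int payload = 0;

    void producer() {
        payload = 42;
        // Bug: relaxed gives atomicity but no ordering, so nothing
        // guarantees the payload write is visible before the flag.
        ready.store(true, std::memory_order_relaxed);
    }

    int consumer() {
        while (!ready.load(std::memory_order_relaxed)) {}  // bug: needs acquire
        return payload;  // data race: may read 0 (undefined behavior)
    }

    // The fix is release on the store and acquire on the load; the
    // hard part is that the broken version often passes tests on x86.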
These are problems with the language's design and have nothing to do with the hardware and systems it targets.
The thing that bugs me about this topic is that C++ developers have a kind of Stockholm syndrome for their terrible tools and devex. I see people routinely struggle with things that other languages simply don't have (including C and Rust!) because C++ seems committed to playing on hard mode. It's so bad that every C++ code base I've worked on professionally is essentially its own dialect and ecosystem with zero cross pollination (except one of abseil/boost/folly).
There is so much complexity in there that creates no value. Good features and libraries die in the womb because of it.
Since C++17 there are better options.
Despite all its warts, most C++ wannabe replacements depend on compiler tools written in C++, and this isn't going to change in the foreseeable future, judging by the roughly two decades it took to replace C with C++ in compiler development circles, even though there is some compatibility.
Opinions are cool.
Most of the complexity comes from the fact that C++ trivially supports consuming most C code, but with its own lifetime model on top, and that it also offers great flexibility.
Of course things become simpler when you ditch C source compat and can just declare "this variable will not be aliased by anyone else".
AFAIK C++'s constexpr and TMP are less limited than Rust's.
Barely.
The C++ aliasing rules map quite poorly onto hardware. C++ barely helps at all with writing correct multithreaded code, and almost all non-tiny machines have multiple CPUs. C++ cannot cleanly express the kinds of floating point semantics that are associative, and SIMD optimizations care about this. C++ exceptions have huge overhead when actually thrown.
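On floating point specifically, a minimal sketch of why SIMD cares:

    #include <cstddef>

    // IEEE float addition is not associative, so under standard C++
    // semantics the compiler must accumulate strictly left-to-right
    // and cannot split this loop into SIMD partial sums without
    // -ffast-math-style flags that change the program's meaning.
    float sum(const float* a, std::size_t n) {
        float s = 0.0f;
        for (std::size_t i = 0; i < n; ++i)
            s += a[i];
        return s;
    }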
But how much does aliasing matter on modern hardware? I know you're aware of Linus' position on this, I personally find it very compelling :)
As a silly little test a few months ago, I built whole Linux systems with -fno-strict-aliasing in CFLAGS, everything I've tried on it is within 1% of the original performance.
I've never seen an attempt to answer that question. Maybe it's unanswerable in practice. But the examples of aliasing optimizations always seem to be eliminating a load, which in my experience is not an especially impactful thing in the average userspace widget written in C++.
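For reference, the textbook load-elimination case is something like this (a sketch, not a measurement):

    // Under strict aliasing an int store cannot modify a float object,
    // so the compiler may return 2.0f directly without reloading *f.
    // With -fno-strict-aliasing it must assume f and i might overlap
    // and reload *f after the store.
    float scale(float* f, int* i) {
        *f = 2.0f;
        *i = 1;
        return *f;
    }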
The closest example of a more sophisticated aliasing optimization I've seen is example 18 in this paper: https://dl.acm.org/doi/pdf/10.1145/3735592
...but that specific example with a pointer passed to a function seems analogous to what is possible with 'restrict' in C. Maybe I misunderstood it.
This is an interesting viewpoint, but is unfortunately light on details: https://lobste.rs/s/yubalv/pointers_are_complicated_ii_we_ne...
Don't get me wrong, I'm not saying aliasing is a big conspiracy :) But it seems to have one of the higher hype-to-reality disconnects for compiler optimizations, in my limited experience.
Of course, that was also 10 years ago, so things may be different now. There has been interest from the Rust project in improving the optimisations `noalias` enables, as well as work in Clang on improving optimisations under C and C++'s aliasing model.
Is there a better way to test the contribution of aliasing optimizations? Obviously the compiler could be patched, but that sort of invalidates the test because you'd have to assume I didn't screw up patching it somehow.
What I'm specifically interested in is how much more or less of a difference the class of optimizations makes on different calibers of hardware.
For Rust, you'd have to patch the compiler, as they don't generally provide options to tweak this sort of thing. For both Rust and C this should be pretty easy to patch: you'd just disable the production of the noalias attribute when going to LLVM. gcc, as opposed to clang, may be harder; I don't know how things work over there.
Which is why exceptions should never really be used for control flow. In our code, an exception basically means "the program is closing imminently, you should probably clean up and leave things in a sensible state if needed."
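Concretely, that policy reduces to a sketch like this (run_app stands in for the real entry point):

    #include <cstdio>
    #include <exception>
    #include <stdexcept>

    void run_app() {  // stand-in for the real program
        throw std::runtime_error("unrecoverable: config missing");
    }

    int main() {
        try {
            run_app();
        } catch (const std::exception& e) {
            // An exception reaching here means "shut down": log it,
            // flush and release anything critical, and exit. Never
            // catch-and-resume deep in the call stack.
            std::fprintf(stderr, "fatal: %s\n", e.what());
            return 1;
        }
        return 0;
    }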
Agree with everything else mostly. C/C++ being a "thin layer on top of hardware" was sort of true 20? 30? years ago.
It wasn't initially, and then NVIDIA went through a multi-year effort to redesign the hardware.
If you're curious, there are two CppCon talks on the matter.
Suppose T is a file handle or an owning pointer (like unique_ptr), and you want to say:
T my_thing = [whatever]
and you want a guarantee that T has no null value and therefore my_thing is valid so long as it’s in scope.

In C++, if you are allowed to say:
consume(std::move(my_thing));
then my_thing is in scope but invalid. But at least C++ could plausibly introduce a new style of object that is neither copyable nor non-destructively-movable but is destructively movable.

Interestingly, Go is kind of all in on the opposite approach. Every instance of:
my_thing, err := [whatever]
creates an in-scope variable that might be invalid, and I can’t really imagine Go moving in a direction where either this pattern is considered deprecated or where it might be statically invalid to use my_thing.

I actually can imagine Python moving in a direction where, if you don’t at least try to prove to a type checker that you checked err first, you are not allowed to access my_thing. After all, you can already do:
my_thing: T | None = [whatever]
and it’s not too much of a stretch to imagine similar technology that can infer that, if err is None, then my_thing is not None. Combining this in a rigorous way with Python’s idea that everything is mutable if you try hard enough might be challenging, but I think that rigor is treated as optional in Python’s type checking.
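To make the C++ half of that concrete, a minimal sketch using unique_ptr as the T above:

    #include <memory>
    #include <utility>

    void consume(std::unique_ptr<int>) {}  // takes ownership

    int main() {
        std::unique_ptr<int> my_thing = std::make_unique<int>(42);
        consume(std::move(my_thing));
        // my_thing is still in scope but now holds nullptr. The
        // language has no way to end its lifetime here (no
        // "destructive move"), so the line below would compile
        // and be undefined behavior at run time:
        // *my_thing = 7;
    }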