I've used C++ for so long, and I'm a good way into thinking that the language is just over. It missed its mark because of backwards compatibility with fundamental language flaws. I think we can continue to add features - which are usually decent ideas given the context, kudos to the authors for the effort - but the language will decline faster than it can be fixed. Furthermore, the language keeps getting harder to implement, support, and write because of the constant feature additions and the growing semantic interconnectivity. To me, adding features at this point is mostly a theoretical exercise: in practice we end up with oddly specced features that mostly work but are fundamentally crippled because they need to dodge an encyclopedia of edge cases. The committee are really letting the vision of a good C++ down by refusing to break backwards compatibility to fix core problems. I'm talking fundamental types, implicit conversions, initialisation, the preprocessor, undefined / ill-formed-NDR behaviour. The C++ I'm passionate about is dead without some big changes I don't think the community can or will handle.
Sure, we are looking at options - but Rust and C++ don't interoperate well (the C API is too limiting). D was looking interesting for a while, but I'm not sure how it fits (D supports the C++ ABI).
Memory safety semantics aside (needed and will be disruptive, even if done gradually) ---
You could get 80% of the way to ergonomic parity via a 1:1 re-syntaxing, just like Reason (new syntax for OCaml) and Elixir (new syntax for Erlang). C++ has good bones but bad defaults. Why shouldn't things be const by default? Why can't we do destructuring in more places? Why is it so annoying to define local functions? Why do we have approximately three zillion ways of initializing a variable?
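To make the last two concrete (this is ordinary present-day C++, nothing hypothetical):

int main() {
    // A few of the roughly three zillion ways to initialize the same int:
    int a = 1;          // copy initialization
    int b(1);           // direct initialization
    int c{1};           // direct list initialization
    int d = {1};        // copy list initialization
    auto e = 1;         // type deduced from the initializer
    auto f = int{1};    // deduced from a prvalue
    // ...and defining a "local function" today means writing a lambda:
    auto square = [](int x) { return x * x; };
    return square(a + b + c + d + e + f);   // keeps the variables "used"
}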
You can address a lot of the pain points by making an alternative, clean-looking "modern" syntax that, because it's actually the same language as C++, would have perfect interoperability.
You'd have an existing language with a new syntax; it can perfectly interact with existing C++ code, but you could make those suggested changes, and could also express things in the new syntax that couldn't be done in the old one.
EDIT: to take an example from elsewhere in this thread: taking the address of an uninitialised variable and passing it to a function. Today the compiler can't (without inter-procedural analysis) tell whether this is a use of uninitialised data, or whether the callee is only going to write to / initialise the variable.
A new syntax could allow you to express that distinction.
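For instance (a minimal sketch; init_or_read is a made-up function living in another translation unit):

void init_or_read(int* p);   // defined elsewhere; may read *p, or may only write to it

int use() {
    int x;               // uninitialised
    init_or_read(&x);    // fine if the callee only writes; UB if it reads *p first
    return x;            // without seeing the callee, the compiler can't tell which
}

A re-syntaxed declaration could mark the parameter as out-only, so the compiler (and the reader) would know the call never reads x.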
Sounds like Herb Sutter's cpp2/cppfront: https://github.com/hsutter/cppfront
Perhaps a better language could even get some traction without major corporate sponsorship. I think (?) Rust and Zig are examples of that.
Rust might not count, depending on whether you consider Mozilla's sponsorship a major corporate sponsorship.
Also, things often just don't compose well. For example, if you want to use a nested class in an unordered_set inside its parent class, you just can't do it, because you can't put the std::hash specialization anywhere legal. It's two parts of the language that are totally valid on their own but don't work together. Stuff like this is such a common problem in C++ that it drives me nuts.
This is not true. From within your parent class you use an explicit hashing callable, and then from outside of the parent class you can go back to using the default std::hash.
The result looks like this:
#include <cstddef>
#include <functional>
#include <unordered_set>

struct Foo {
    struct Bar {};
    struct BarHasher {
        std::size_t operator()(const Bar&) const noexcept {
            return 0;   // Bar has no members yet; hash its fields here if it gains any
        }
    };
    std::unordered_set<Bar, BarHasher> bar_set;
};

// Legal: Foo::Bar is a complete type here, outside the parent class.
namespace std {
    template<>
    struct hash<Foo::Bar> {
        std::size_t operator()(const Foo::Bar& b) const noexcept {
            return Foo::BarHasher()(b);
        }
    };
}
The std::hash specialization at the end is legal and allows other users of Foo::Bar to use std::unordered_set<Foo::Bar> without needing the explicit BarHasher.

This is an interesting perspective to me, because my view, as someone who's been using Rust since close to 1.0 and hasn't done much more than dabble in C++ over the years, is basically the opposite. My (admittedly limited) understanding is that this has never really been a goal of the committee, because if someone is willing to sacrifice backwards compatibility, they could presumably just switch to one of those other languages at that point. Arguably the main selling point of C++ today is the fact that there's a massive set of existing codebases out there (both libraries that someone might want to use and applications that are still being worked on), and for the majority of them, being rewritten would be at best a huge effort and more realistically not something that's ever going to be seriously considered.
If the safety and ergonomics of C++ are a concern, I guess I'm not sure why someone would pick it over another language for a newly started codebase. In terms of safety, Rust is an option that exists today without needing C++ to change. Ergonomics are a bit less clear-cut, but I'd argue that most of the significant divergences in ergonomics between languages are pretty subjective, and it's not obvious to me that there's a big enough gap between Rust's and C++'s respective choices to warrant a new language that's not compatible with C++ but is far enough from Rust for someone to refuse to use it on the basis of ergonomics alone. It seems to me like "close enough to C++ to attract the people who don't want to use Rust, but far enough from Rust to justify breaking C++'s backwards compatibility" is just too narrow a niche for it to be worth it for C++ to go after.
However I think C++ still has some things going for it which may make it a useful option, assuming the core issues were fixed. C++ gives ultimate control over memory and low level things (think pointers, manual stack vs heap, inline assembly). It has good compatibility with C ABIs. It's very general purpose and permissive. And there are many programmers with C++ (or C) knowledge out there already.
Further, I think C++ started on its current feature path before Rust really got a big foothold. And consider that C++ has been around a really long time, plenty long enough to fix core features.
Finally, I reckon the whole backwards-compatibility thing is a bit weird, because if the code is so ancient and unchangeable, why does it need the latest features? Like, you desperately need implicit long-to-int conversion but also coroutines?? And for regular non-ancient code, we already try to avoid the problematic parts of C++, so fixing/removing/changing them wouldn't be so bad. IMO it's a far overdone obsession with backwards compatibility.
Of course without a significant overhaul to the language you'd probably say "screw it" and start from scratch with something nicer like Rust.
I think I'm most confused about the last part of what you're saying. A significant overhaul to the language in a breaking way feels pretty much the same as saying "screw it" and starting from scratch, just with specific ergonomic choices being closer to C++ than to Rust. Several of the parts that you cite as the strengths of the language, like inline assembly and pointers, are still available in Rust, just not outside of explicitly unsafe contexts, and I'd imagine that an overhaul of C++ to enhance memory safety would end up needing to make a fairly similar compromise for them. It just seems like the language you're wishing for would end up with a fairly narrow design space, even if it is objectively superior to the C++ we have today, because it would have to give up the largest advantage that C++ does have without enough unoccupied room to grow into. The focus on backwards compatibility doesn't seem to be a claim that it's necessarily the best choice in a vacuum, but a reflection of the state of the ecosystem as it is today, and a perception that sacrificing it would mean giving up C++'s position as the dominant language in a well-defined niche to try to compete in a new one. This is obviously a subjective viewpoint, but it doesn't seem implausible to me, and given that we can't really know how it would work out unless they do try, sticking with compatibility feels like the safer option.
Headers would be a problem given their text inclusion in multiple translation units, but it's not insurmountable; you're currently limited to the oldest standard a header is included into, and under a new standard that breaks compatibility you'd be limited to a valid subset of the old & new standard.
EDIT: ironically modules (as a concept) would (could?) solve the header problem, but they've not exactly been a success story so far.
Because they are little different from precompiled headers. import std; may be nice, but in a large project you are likely to have your own defines.hpp anyway (which is going to be precompiled, for a double-digit reduction in compile times).
Ironically too, migrating every header in an executable project to modules might slow down build times, as dependency chains reduce the parallelism factor of the build.
It's widely used, and you can do so effectively if you need it and know what you're doing.
Like retained mode GUIs, games, intrusive containers or anything that can't be trivially represented by a tree of unique_/shared_ptr?
They love to say C++ is for everyone, but it clearly is not. Only wizards and nerds burdened by the sunk cost fallacy willingly write this modern C++ code. I personally just use C++ as a "nicer" C.
IIUC this is what Profiles are. It's an opt-in, per-source-file method to ban certain misfeatures and require certain other safe features.
But then the problem becomes: where exactly do you opt in to this feature? If you do it in a header file, a function can end up compiled with the safety profile turned on in one translation unit while that exact same function is compiled without it in another translation unit... which ironically results in one of the most dangerous possible outcomes in C++, the so-called ODR violation.
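A minimal sketch of that hazard, using a made-up SAFETY_PROFILE macro to stand in for whatever the per-file opt-in mechanism turns out to be:

// checked.hpp -- included from two different translation units
#pragma once
#include <limits>
#include <stdexcept>

inline int add(int a, int b) {
#ifdef SAFETY_PROFILE   // hypothetical stand-in for a per-file safety opt-in
    if (b > 0 && a > std::numeric_limits<int>::max() - b)
        throw std::overflow_error("add");
#endif
    return a + b;
}

// If tu1.cpp is built with -DSAFETY_PROFILE and tu2.cpp without it, both TUs
// define inline int add(int, int) with different bodies: an ODR violation,
// and the linker silently keeps only one of them.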
If you don't allow safety-profiles to be turned on in header files, then you've now excluded a significant amount of code in the form of templates, constexpr and inline functions.
Most of the rest of my complaints could be addressed by jettisoning backward compatibility and switching to more sensible defaults. I realize this will never happen.
C++ still has some unique strengths, particularly around metaprogramming compared to other popular systems languages. It also is pretty good at allowing you to build safe and efficient abstractions around some ugly edge cases that are unavoidable in systems programming. Languages like Rust are a bit too restrictive to handle some of these cases gracefully.
What do you mean?
Like any language that lasts (including Python and Rust), you subset it over time: you end up with linters and sanitizers and static analyzers and LSP servers; you have a build. But setting up a build is a one-time cost, and maintaining a build is a fact of life; even JavaScript is often/usually the output of a build.
And with the build done right? Maybe you don't want C++ if you're both moving fast and doing safety- or security-critical stuff (as in, browser, sshd, avionics critical), but you shouldn't be moving fast on avionics software to begin with.
And for stuff outside of that "even one buffer overflow is too many" Venn?
C++ doesn't segfault more than either of those after it's cleared clang-tidy and ASAN. Python linking shoddy native stuff crashes way more.
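For what it's worth, this is the kind of bug that sails past the compiler but not the sanitizer (assuming a clang toolchain; adjust the commands for your setup):

// oob.cpp -- compiles cleanly, but ASAN reports a heap-buffer-overflow at runtime.
// Example build & check:
//   clang++ -g -fsanitize=address,undefined oob.cpp && ./a.out
//   clang-tidy oob.cpp -- -std=c++17
int main() {
    int* p = new int[3];
    int x = p[3];       // one past the end: no diagnostic at compile time
    delete[] p;
    return x;
}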
I would personally like a safe (and fast) subset that doesn't require me to be vigilant but catches everything I could do wrong, to the same level as Rust. Then, like Rust, you could remove that flag for a few low-level parts that for some reason need to be "unsafe" (maybe because they call into the OS).
There was a good talk from the WebKit team about stuff they did to get more safety.
https://www.youtube.com/watch?v=RLw13wLM5Ko
Some of it was AST-level checks. IIRC, they have a pre-commit check that there is no pointer math being used. They went over how to change code with pointer math into safe code with zero change in performance.
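Not their actual code, but a generic illustration of that kind of rewrite, sketched with std::span (C++20); in practice the optimizer tends to generate the same loop for both:

#include <cstddef>
#include <span>

// Before: raw pointer arithmetic
int sum_ptr(const int* data, std::size_t n) {
    int total = 0;
    for (const int* p = data; p != data + n; ++p)
        total += *p;
    return total;
}

// After: the extent travels with the data, and the loop has no pointer math
int sum_span(std::span<const int> data) {
    int total = 0;
    for (int v : data)
        total += v;
    return total;
}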
A similar one was Ref usage checking, where they could effectively see that a ref-counted object was being passed as a raw pointer to a function that might free the ref and then still used in the calling function. They could detect that with an AST-based checker.
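I don't know their exact checker, but the hazard pattern looks roughly like this (sketched with std::shared_ptr standing in for WebKit's ref-counting types):

#include <memory>

struct Widget { int value = 0; };

// The callee might drop the last reference to the widget it was handed.
void maybe_free(std::shared_ptr<Widget>& owner, Widget* raw) {
    if (raw->value == 0)
        owner.reset();      // Widget destroyed here if this was the last reference
}

int caller() {
    auto w = std::make_shared<Widget>();
    Widget* raw = w.get();  // the raw pointer does not keep the object alive
    maybe_free(w, raw);
    return raw->value;      // use-after-free if maybe_free dropped the last ref;
                            // exactly the pattern an AST checker can flag
}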
That said, I have no idea how they (the C++ committee) are going to fix all the issues. --no-undefined-behavior would be a start. Can they get rid of the perf bombs with std::move? Why do I have to remember that shit?
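Assuming the std::move perf bombs meant here are the cases where a move silently doesn't happen (or actively hurts), two classics:

#include <string>
#include <utility>
#include <vector>

std::vector<std::string> make() {
    std::vector<std::string> v(1000);
    return std::move(v);    // pessimizing move: blocks NRVO/copy elision;
                            // plain return v; allows NRVO
}

void take(std::string) {}

void g() {
    const std::string big(1 << 20, 'x');
    take(std::move(big));   // moving a const object silently copies: std::move yields
                            // const&&, which binds to the copy ctor, not the move ctor
}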
"no one goes there anymore it's too crowded"
Unless they were incumbent and inertia keeps them in. Or they're the only choice you have for a niche target. Or you have some other reason to keep them, such as (thinking?) the performance they bring is more important.
Surely all good things come to an end, but where? I reckon there will be a C++29. What about C++38? C++43 sounds terrifying. Mid-century C++? There is no way in hell I will still be staying up to date with C++43. Personally, I've already cut the cord at C++11.
As long as there are people willing to put in the work to convince the standards committee that the proposals they champion are worth adding to C++ (and as long as there is a committee to convince, I suppose), then new versions of C++ will continue to be released.
> there is no way in hell i will still be staying up to date with C++43. Personally I've already cut the cord at C++11.
Sure, different developers will find different features compelling and so will be comfortable living with different standards. That one group is fine with existing features shouldn't automatically prevent another from continuing to improve the language if they can gain consensus, though.
Is this to cover cases that would be hard/costly to detect? For example you pass the address of an uninitialized variable to a function in another source file that might read or just write to it, but the compiler can't know.