C++ is valuable because the existing tooling enables you to optimize the runtime performance of a program (usually you end up figuring out the best memory layout and utilization).
C++ is valuable because its industry support guarantees code bases live for decades _without the need to modify them_ to the latest standards.
C++ is valuable because the industry tooling allows you to verify large areas of the program behaviour at runtime (ASAN etc).
I simply don't understand what type of industrial use this type of theoretical abstraction building serves.
Using the metaprogramming features makes code bases extremely hard to modify, and they don't actually protect you from a whole category of runtime errors. I'm speaking from experience.
I would much rather have a codebase with a bit more boilerplate, a few more unit tests, and a strong integration testing suite.
The longer I use C++ the more I'm convinced something like Orthodox C++ is the best way to approach the language: https://bkaradzic.github.io/posts/orthodoxc++/
This keeps the code maintainable and performant (with less effort than metaprogramming-heavy C++).
Note: the above is just an opinion, with a very strong YMMV flavour, coming from two decades in CAD, real time graphics and embedded development.
> Don’t use stream (<iostream>, <stringstream>, etc.), use printf style functions instead.
and has a code example of what they argue 'Orthodox C++' should be, which uses printf.
I'm all for a more sensible or understandable C++, but not at the expense of losing safety. In fact I would prefer the other way: I still feel incredibly saddened that Sean Baxter's Circle proposal for Safe C++ is not where the language is heading. That, plus some deep rethinking and trimming of some of the worst edge behaviours, and a neater standard library, would be incredible.
Compare to <iostream>, which is stateful and slow.
There's also std::format, which might be safe and flexible and have some of the advantages of printf. But I can't use it at any of the places I'm working, since it's C++20. It probably also uses a lot of template and constexpr madness, so I assume it leads to longer compilation times and hard-to-debug problems.
I would not use iostreams, but neither would I use printf.
At the very least if you can't use std::format, wrap your printf in a macro that parses the format string using a constexpr function, and verifies it matches the arguments.
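Roughly like this (a sketch only: the names CHECKED_PRINTF and count_specifiers are invented, it checks only the argument count rather than the types, and it ignores things like %* width arguments):

    #include <cstddef>
    #include <cstdio>
    #include <tuple>

    // Count printf conversion specifiers at compile time ("%%" is a literal '%').
    constexpr std::size_t count_specifiers(const char* fmt) {
        std::size_t n = 0;
        for (; *fmt; ++fmt) {
            if (*fmt == '%') {
                ++fmt;
                if (*fmt == '\0') break;   // trailing '%', malformed
                if (*fmt == '%') continue; // escaped percent sign
                ++n;
            }
        }
        return n;
    }

    // The format string must be a literal so the check can run at compile time.
    #define CHECKED_PRINTF(fmt, ...)                                                      \
        do {                                                                              \
            static_assert(count_specifiers(fmt) ==                                        \
                              std::tuple_size_v<decltype(std::make_tuple(__VA_ARGS__))>,  \
                          "printf argument count does not match the format string");      \
            std::printf(fmt __VA_OPT__(,) __VA_ARGS__);                                   \
        } while (0)

    int main() {
        CHECKED_PRINTF("%d items in %s\n", 3, "cart"); // ok
        // CHECKED_PRINTF("%d items in %s\n", 3);      // would fail to compile
    }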
So such a strong "at the very least" is misapplied. All this template crap, I've done it before. All but the thinnest template abstraction layers typically end up in the garbage can after trying to use them for anything serious.
I also find it unreadable; beyond the trivial I always need to refer to the manual for the correct format string. In practice I tend to always put a placeholder and let clangd correct me with a fix-it.
Except that often clangd gives up (when inside a template for example), and in a few cases I have even seen GCC fail to correctly check the format string and fail at runtime (don't remember the exact scenario).
Speed is not an issue, any form of formatting and I/O is going to be too slow for the fast path and will be relegated to a background thread anyway.
Debugging and complexity have not been an issue with std::format so far (our migration from printf-based logging has been very smooth). I will concede that I do also worry about the compile-time cost.
I think you're arguing from a position of willful ignorance. The article is clear that it lauds C++'s std::println, not printf.
http://en.cppreference.com/w/cpp/io/println.html
Here's what the article argues:
> With std::format, C++ has gained a modern, powerful, and safe formatting system that ends the classic, error‑prone printf mechanisms. std::format is not merely convenient but fully type‑safe: the compiler checks that placeholders and data types match.
Solid remark, and it matches the consensus that std::println and std::format are an important improvement over std::cout or C's printf.
Metaprogramming style in C++20 only has a loose relationship to previous versions. It is now concise and highly maintainable. You can do metaprogramming in the old painful and verbose way and it will work but you can largely dispense with that.
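A trivial before/after of what I mean (my own toy example, not from any particular codebase): the same constraint written the old enable_if way and the C++20 way:

    #include <concepts>
    #include <type_traits>

    // Pre-C++20: the constraint is buried in a defaulted template parameter.
    template <typename T,
              typename = std::enable_if_t<std::is_arithmetic_v<T>>>
    T twice_old(T x) { return x + x; }

    // C++20: the constraint reads as part of the declaration.
    template <typename T>
        requires std::is_arithmetic_v<T>
    T twice(T x) { return x + x; }

    // Or, shorter still, with a named concept and abbreviated syntax.
    auto thrice(std::integral auto x) { return x * 3; }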
It took me a bit to develop the intuitions for idiomatic C++20 because it is significantly different as a language, but once I did there is no way I could go back. The degree of expressiveness and safety it provides is a large leap forward.
Most C++ programmers should probably approach it like a new language with familiar syntax rather than as an incremental update to the standard. You really do need to hold it differently.
The fact that C++ is a very large and complex language, and that this makes it unapproachable, is undeniable though, but I don't think the new releases make it significantly worse. If anything, I think some of the new stuff does ease the on-ramp a bit.
This is happening to many older languages because modern software has more intrinsic complexity and requires more rigor than when those languages were first designed. The languages need to evolve to effectively address those needs or they risk being replaced by languages that do.
I’ve been writing roughly the same type of software for decades. What would have been considered state-of-the-art in the 1990s would be a trivial toy implementation today. The languages have to keep pace with the increasing expectations for software to make it easier to deliver reliably.
The key thing to understand is that you are still using C with sugar on top. So you need to understand how the language concepts map to the hardware concepts. It's much more relevant to understand pointer arithmetic, the difference between stack and heap allocations, and so on, rather than what the most recent language standard changes.
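A trivial illustration of that distinction (just a sketch):

    #include <vector>

    void example() {
        int on_stack[64];               // lives in this function's stack frame, no allocation call
        std::vector<int> on_heap(64);   // elements live on the heap, freed when on_heap goes out of scope
        on_stack[0] = on_heap[0] = 42;  // both are just contiguous memory underneath
    }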
You can write the same type of C++ for decades. It’s not going to stop compiling. As long as it compiles on your language standard (C++17 is fine I think unless you miss something specific) you are off to the races. And you can write C++17 for the next two decades if you want.
This was my takeaway as well when I revisited it a few years ago. It's a very different, and IMO vastly improved, language compared to when I first used it decades ago.
I have used many languages other than C++20 in production for the kind of software I write. I don’t have any legacy code to worry about and rarely use the standard library. The main thing that still makes it an excellent default choice, despite the fact that I dislike many things about the language, is that nothing else can match the combination of performance and expressiveness yet. Languages that can match the performance still require much more code, sometimes inelegant, to achieve an identical outcome. The metaprogramming ergonomics of C++20 are really good and allow you to avoid writing a lot of code, which is a major benefit.
What makes C++ valuable is being a TypeScript for C, born in the same UNIX and Bell Labs farm (so to speak), allowing me to tap into the same ecosystem while enjoying the high-level abstractions of programming languages like Smalltalk, Lisp, or even Haskell.
Thus I can program on MS-DOS limited to 640 KB, an ESP32, an Arduino, a CUDA card, or a distributed system cluster with TBs of memory, selecting which parts are more convenient for the specific application.
Naturally, in 2025, I would like to be able to exercise such workflows with a compiled managed language instead of C++, but I keep being in the minority, so language XYZ + C++ it is.
Using Go as an example (and regarding the being-in-the-minority remark), you will remember the whole discussion about whether Go is a systems language or not, and how it was backpedaled to mean distributed systems, not low-level OS systems programming.
Now, I remember when programming compilers, linkers, OS daemons/services, IoT devices, firmware was considered actual systems programming.
But since Go isn't bootstrapped, and TinyGo and TamaGo don't exist, that naturally isn't possible. /s
This is true for MANY other languages too; I don't see how this makes C++ different. With gdb it's quite the opposite: handling C++ types with gdb can be a nightmare, and you either develop your own gdb glue code or write C-like C++.
> C++ is valuable because its industry support guarantees code bases live for decades _without the need to modify them_ to the latest standards.
In times of constant security updates (see the EU's CRA or equivalent standards in the US) you always gotta update your environment which often also means updating tooling etc. if you don't wanna start maintaining a super custom ecosystem.
I don't see this as a positive in general; there is bit rot, and software that is stuck in the past is generally not a good sign imo.
> C++ is valuable because the industry tooling allows you to verify large areas of the program behaviour at runtime (ASAN etc).
Sanitizers are not exclusive to C++ either, and with Rust or C# you almost never need them, for example. Yes, C++ has extensive debugging tools, but a big part of that is because the language has very few safeguards, which naturally leads to a lot of crashes etc.
I think the idea of using only a small subset of C++ is interesting, but it ignores the problem that many people have: you don't have the time to implement your own STL, so you just use the STL. Ofc it gives me more control etc., but I'd argue most of the time writing orthodox C++ won't save time even in the long run. It will save you headaches and cursing about C++ being super complicated, but in the end, in modern environments, you will just reinvent the wheel a lot and run into problems already solved by the STL.
That's why it's better to use lldb and its scripts.
> I think the idea of using only a small subset of C++ is interesting, but it ignores the problem that many people have: you don't have the time to implement your own STL, so you just use the STL.
Yeah, agree. It's just much easier to take a "framework" (or frameworks) where all the main problems are solved: convenient parallelism mechanisms, scheduler, reactor, memory handling, etc. So it turns out you're kinda writing in your own ecosystem that's not really different from another language, just with C++ syntax.
I can imagine it might be insanely faster to compile
Somebody really needs to rethink the entire commitment to meta-programming. I had some hope that concepts would improve reporting, but they seem to actually make it worse, and -- if they improve compile times at all, I'm not seeing it.
And it has nothing to do with historicity. Every time I visit another modern language (or use it seriously) I am constantly reminded that C++ compile times are simply horrible, and a huge impediment to productivity.
However too many folks are stuck in the UNIX command line compiler mindset.
I keep bumping into people who have no idea about the IDE-based compilation workflows from C++ Builder and Visual C++: their multithreaded compilation, incremental compilation and linking, pre-compiled headers that actually work, hot code reloading, and many other improvements.
Or the CERN C++ interpreters for that matter.
Many don't seem to ever have ventured beyond calling gcc or clang with Makefiles, and nothing else.
The whole point of a programming language is to be an industrial productivity tool that is faster to use than hand writing assembly.
Performance is a core requirement for industrial tools. It's totally fine to have slow compilers in R&D and academia.
In industry a slow compiler is an inexcusable pathology. Now, it can be that pathology can't be fixed, but not recognizing it as a pathology - and worse, inventing excuses for it - implies the writer is not really industrially minded. Which makes me very worried why they are commenting on an industrial language.
> C++ is often described as complex, hard to learn, and unsafe. That reputation is undeserved. The language itself is not unsafe. On the contrary: it is precise, honest, and consistent. What is unsafe is how it is used if it is misunderstood or if one remains in old patterns.
I think this take needs to stop. It’s a longer way to say “skill issue”. Meanwhile, decades of industry experience have shown that the governing principles of even modern C++ make it incredibly hard (expensive) to deliver high quality software. Not impossible - there’s lots of examples - but unreasonably hard.
C++ is fundamentally unsafe, because that’s how the language works, and if you think otherwise, you don’t know C++. There are patterns and paradigms that people use to limit the risk (and the size of the impact crater), and that’s helpful, but usually very difficult to get right if you also want any of the benefits of using C++ in the first place.
Certain people will disagree, but I surmise that they haven’t actually tried any alternative. Instead they are high on the feeling of having finally grokked C++, which is no small feat, and I know because I’ve been there. But we have to stop making excuses. The range of problems where C++ is unequivocally the superior solution is getting smaller.
The range of issues where the superior solutions offer language features superior to the features of modern C++ is getting smaller too.
To be clear, I like and continue to use modern C++ daily, but I also use Rust daily, and you cannot really make a straight-faced argument that C++ is catching up. I do think both languages offer a lot that higher-level languages like Go and Python don't, which is why I never venture into those languages, regardless of performance needs.
> but I also use Rust daily, and you cannot really make a straight-faced argument that C++ is catching up.
I mostly use std::ranges, lambdas, and concepts, and I see them catching up, as an evolutionary process rather than a fixed implementation in the current standard. Nowadays I can do familiar folds and parallel traversals that I couldn't do in the past without assuming third-party libraries. My optionals are empty vectors: it suits my algorithms and interfaces a lot, and I never liked `Maybe a` anyways (`Either errorOrDefault a` is so much better). I also use Haskell a lot, and I'm used to the idea that outside of my functional garden the industry's support for unions is hardly different to the support of 2-tuples of (<label>, <T>), so I don't mind the current state of std::variant either.
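A rough sketch of the kind of fold I mean (C++20; std::ranges::fold_left proper only arrives in C++23, so this falls back to std::accumulate over a view):

    #include <numeric>
    #include <ranges>
    #include <vector>

    int sum_of_squares(const std::vector<int>& xs) {
        auto squares = xs | std::views::transform([](int x) { return x * x; });
        // In C++23: std::ranges::fold_left(squares, 0, std::plus{});
        return std::accumulate(squares.begin(), squares.end(), 0);
    }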
> In my streams I showed how RAII can thereby also be applied to database operations: a connection exists as long as it is in scope.
Only if that connection object doesn’t support move — we’re 12 years of C++ standards past the arrival of move, and it still leaves its source in an indeterminate state.
> With std::variant, C++ gained a tool that allows dynamic states without giving up type safety.
variant is not quite type-safe:
https://en.cppreference.com/w/cpp/utility/variant/valueless_...
> With C++20, std::ranges decisively extends this principle: it integrates the notions of iterator, container, and algorithm into a unified model that combines type safety and readability.
Ranges may be type-safe, but they’re not safe. Like string_view, a range is a reference, and the language does not help ensure that the referent remains valid.
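A minimal example of the kind of dangling this allows (nothing in the language prevents it; compilers may or may not warn depending on flags):

    #include <string>
    #include <string_view>

    std::string_view first_word() {
        std::string s = "hello world";
        return std::string_view{s}.substr(0, 5); // view into s
    }   // s is destroyed here, so the returned view dangles; reading it is undefined behaviour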
I haven’t watched the streams he referred to, but… I am fairly certain the language itself says no such thing. You may be thinking of the standard library, which states that certain classes of moved-from objects have unspecified state. If you’re writing your own DB connection class, you can define moves to leave the object in whatever state you prefer, or disallow moves.
Admittedly it’s still a weird example IMO, because external factors can sever the connection while the object is in scope.
Also, I would say nothing is “unexpected” behavior if you document it and implement accordingly. And at least for this DB case, handling it is not onerous or stretching the idea of class invariants beyond usability.
I’m probably like what GP said, “high on the feeling of having finally grokked C++, which is no small feat.” But I either want to understand better why move is broken, or we can agree that things like move require too much skill to get right and there are better alternative languages.
Great, so now the stateful settings on my database connection change depending on whether I move from it.
Database connections are kind of a bad example — having a connection drop is not really unexpected behavior, and a program that uses a database should be prepared for a connection to drop, so there’s kind of an invalid state on a connection anyway. But things like file handles or mutex guards aren’t like this — it’s reasonable to expect that, on a functioning system, a file handle won’t go away. And if I’m using a type-safe language that supports RAII, I would like the compiler to ensure that I can’t use an object that isn’t in a valid state.
Rust can do this, as can lots of languages that support “affine” types (that name is absurd). GC languages can kind of do this too, as long as cloning the reference is considered valid. Languages with “linear” types can even statically guarantee that I don’t forget to close my object.
C++ can ensure that an object is valid if it’s in scope, but only if that object is not movable, so “consume” operations are not possible in a type-safe way.
How can "decades of experience" show the deficiencies of Modern C++, which was invented in 2011?
If you've worked on a project built entirely on the technologies and principles of modern C++, and found that it caused your software to be low-quality, then by all means share those experiences. C++ has no shortage of problems, there is no doubt. But hand-waving about "decades" of nondescript problems other people have had with older C++ versions, is somewhat of a lazy dismissal of the article's central thesis.
Please read the sentence you're quoting again.
The easiest path in C++ is almost always the dangerous path. (The classic example is operator[] versus at().) With every tiny feature of the language, things we take fully for granted in every other language, there is a host of caveats, gotchas, footguns, and trap doors.
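Spelling out that classic example (a small sketch):

    #include <cstdio>
    #include <stdexcept>
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        // v[10];                 // the easy spelling: out of bounds, undefined behaviour
        try {
            (void)v.at(10);       // the checked spelling: throws instead
        } catch (const std::out_of_range& e) {
            std::printf("caught: %s\n", e.what());
        }
    }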
This is a serious risk as a project and its team grow, because the accumulated sum of accidental complexity grows exponentially.
It’s possible to manage this risk, but it is also expensive, and every modern-ish competitor to C++ allows fewer people to deliver better software quicker.
It’s not a good choice for almost anything that doesn’t specifically require C++ for its own sake.
If you use warnings-as-errors, they catch even some subsets of dangling nowadays. Other ways of dangling have been made illegal (temporary conversions and range-for lifetime extension).
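For the range-for case, a sketch of what changed (the fix is C++23's P2718):

    #include <vector>

    std::vector<std::vector<int>> make_matrix() {
        return {{1, 2, 3}, {4, 5, 6}};
    }

    int sum_first_row() {
        int sum = 0;
        // Before C++23 this dangled: the temporary returned by make_matrix()
        // died before the loop body ran, leaving the reference to its first
        // row dangling. C++23 extends every temporary in the range-initializer
        // to the end of the loop.
        for (int x : make_matrix()[0]) {
            sum += x;
        }
        return sum;
    }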
I agree with you that the defaults are still not the best, but it is steadily getting better.
But I think in the next years things are going to be tightened further in standard terms for better defaults. In fact, it is already happening.
The difficult part, I think, is lifetimes.
Lifetimes are a crucial aspect of writing code in C++, yet they do not appear anywhere in the syntax. The same goes for synchronization. These problems are fundamentally unfixable without major, incompatible language changes.
Unreal Engine is C++ based and plenty of games have used it.
Fundamentally, when it comes to safety, it's either everything or nothing. Rust is by definition unsafe, because it has the "unsafe" keyword. If the programmer has enough discipline not to use unsafe everywhere, he/she has enough discipline to write normal C++ code.
But as far as C++ goes, the main problem is that the syntax still allows C-style pointers and dereferencing for compatibility with C code. Generally, if you stick to using standard library constructs and smart pointers for everything, the code becomes very clean. unique_ptr is basically the thing that inspired Rust's ownership semantics, after all.
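A tiny sketch of what I mean by that style (the Connection type here is just a placeholder):

    #include <memory>
    #include <string>

    struct Connection {
        std::string host;
    };

    std::unique_ptr<Connection> connect(std::string host) {
        auto c = std::make_unique<Connection>();
        c->host = std::move(host);
        return c;   // ownership moves to the caller; no raw new/delete anywhere
    }

    int main() {
        auto c = connect("db.example.com");
    }   // c goes out of scope here and the Connection is destroyed exactly once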
Honestly tho, I keep the tool in my belt because I believe it is still the best for what I use it for: low latency financial applications and game engines.
If I find some time to migrate from C++ to a different language I may, for certain games, but that's a future bridge to cross.
So I agree it has its quirks, but as the defaults keep changing and improving, it keeps evolving into a safer-by-default language compared to before.
No, I do not mean that you must be super-skillful anymore: I mean that with all of that in, things are much safer by default.
Things keep improving a bit slower than we would like (this is design by committee) but steadily.
> But all of us—the C++ developers—must go back to school. We must learn C++ anew, not because we have forgotten it, but because through evolution it has become a different language. Only those who understand the modern language constructs can use the new tools properly and unfold the potential of this generation of libraries.
Once you get to that point, you might as well create and learn a different language.
Nope, it's still incredibly valuable to be able to compile two different translation units as C++14 and C++26 and then later link them together (all without leaving the familiar toolchains and ecosystems). That's how big legacy projects can evolve towards better safety incrementally.
This flies in the face of modern principles like building all your C++, from source, at the same time, with the same settings.
Languages like Rust include these settings in symbol names as a hash to prevent these kinds of issues by design. Unless your whole team is a moderate-level language lawyer, you must enforce this by some other means or risk some really gnarly issues.
Historically, C++ compilers' name mangling scheme for symbols did precisely the same thing. The 2000-2008 period for gcc was particularly painful since the compiler developers really used it very frequently, to "prevent these kinds of issues by design". The only reason most C++ developers don't think about this much any more is that most C++ compilers haven't needed to change their demangling algorithm for a decade or more.
Optimization level should never cause link time or run time issues; if it does I'd consider that a compiler/linker bug, not an issue with the language.
This never changed.
In the past, hacking was exploiting human errors in writing faulty code. These days, it's pretty much the same thing, except the faulty code isn't things like buffer overflows due to missing bounds checking, but higher-level faulty software with things like password reuse, no two-factor authentication, and so on.
- https://stallman.org/articles/on-hacking.html
The notion that just using the new fancy types automatically makes everything memory safe has to stop. std::expected contains either a value or an error. If you call .value() and you're wrong you get an exception. If you call .error() and you're wrong you get undefined behaviour. This was added in C++23. Since there's no destructuring you have to call these methods btw, just don't make any mistakes with your preconditions! Regardless 90% of memory safety errors I see are temporal. Unless we completely ban references and iterators they will not be going anywhere. Using unique_ptr instead of new does not do anything when you insert into a map while holding a reference to an element.
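To make the std::expected point concrete (a sketch, C++23):

    #include <expected>
    #include <string>

    std::expected<int, std::string> parse(bool ok) {
        if (ok) return 42;
        return std::unexpected(std::string{"bad input"});
    }

    int main() {
        auto good = parse(true);
        int v = good.value();   // fine: it holds a value
        // good.error();        // precondition violated (it holds a value): undefined behaviour, not an exception

        auto bad = parse(false);
        // bad.value();         // this one at least throws std::bad_expected_access
        (void)v;
    }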
Developers also have to be able to make their own things. We can't pretend that absolutely everything we will ever need is bundled up in some perfect library. To write a typesafe application you need to be able to create your own domain specific abstractions, which to me precludes them looking like this:
    template <class ty>
    concept db_result_tuple =
        requires { typename remove_cvref_t<ty>; }
        && []<class... Es>(std::tuple<Es...>*) {
               return all_result_args_ok_v<Es...>;
           }(static_cast<typename std::add_pointer_t<remove_cvref_t<ty>>>(nullptr));

And it was fixed in C++26.
It was slightly improved in C++26 under hardening:
https://en.cppreference.com/w/cpp/utility/expected/error.htm...
I also think that named parameters would go a long way toward improving the language.
Lastly, explore some way to make a breaking change with "old C++" possible.
Sure there is FreePascal and Lazarus, sadly it doesn't get enough love.