Seems like the daily anti-C++ post.
> optional is unsafe in idiomatic use cases? I’d like to challenge that.
std::optional<int> x(std::nullopt);
int val = *x;
Optional is unsafe by default; the above code is UB.

Anyway, safety-checked modes are sufficient for many programs. This article claims otherwise, but then contradicts itself by showing that they caught most issues using... safety-checked modes.
I recently had a less wild but similarly baffling experience on an embedded-but-not-small device. Address 0 was actually a valid address. We were getting a HardFault because a device driver was dereferencing a pointer to an invalid but not-null address. Working backwards, I found that it was getting that invalid address not from 0x0 but rather from 0xC… because the pointer was stored in the third field of a struct and our pointer to that struct was null.
foo->bar->baz->zap
foo = 0, &foo->bar = 0xC, baz = an invalid address; dereferencing baz to get zap is what blew up.

No, it won't. https://gcc.godbolt.org/z/Mz8sqKvad
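To make the offsets in the HardFault story above concrete, here is a hypothetical reconstruction of the layout (made-up names and field types, assuming 32-bit pointers on that MCU):

#include <cstdint>

struct Baz { int zap; };
struct Bar { Baz* baz; };
struct Foo {
    std::uint64_t id;     // offsets 0x0..0x7
    std::uint32_t flags;  // offsets 0x8..0xB
    Bar* bar;             // offset 0xC: the pointer field in question
};

// With foo == nullptr, evaluating foo->bar is already UB, but on this MCU it
// compiles to "load a pointer from address 0x0 + 0xC". Address 0xC happens to
// be mapped, so the load returns garbage, and chasing that garbage pointer
// onward to baz/zap is what finally HardFaults.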
The problem is not nullopt, but that the client code can simply dereference the optional instead of being forced to pattern-match. And the next problem, like the other guy mentioned above, is that you cannot make any claims about what will happen when you do so because the standard just says "UB". Other languages like Haskell also have things like fromJust, but at least the behaviour is well-defined when the value is Nothing.
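For what it's worth, the checked spellings do exist in the standard library; they're just not what the unchecked operator* gives you. A minimal sketch of the difference:

#include <iostream>
#include <optional>

int main() {
    std::optional<int> x;  // empty, same effect as constructing from std::nullopt

    // int a = *x;  // compiles fine, nothing forces a check, and the standard just says UB

    // .value() at least has well-defined behaviour on the empty case,
    // roughly comparable to Haskell's fromJust erroring out:
    try {
        std::cout << x.value() << '\n';
    } catch (const std::bad_optional_access&) {
        std::cout << "no value\n";
    }

    // value_or() makes the caller pick a fallback up front:
    std::cout << x.value_or(-1) << '\n';
}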
It's not a pointer.
I also agree with them: I am pro-C++ too, but the current standard is a fucking mess. Go and look at modules if you haven't, for example (don't).
You can't magically make all the member functions on std::vector safe after a move, for example, unless the moved-from vector allocates itself a new (empty) buffer, which kills the performance benefits.
It's all by design.
Reuse of a moved-from object only requires assignment and destruction to be well-behaved.
The std library containers give you extra guarantees (a moved-from object is effectively the same as a default-constructed one), but the _language_ imposes no such requirements on your types.
It's perfectly allowed by the language for the .size() member of your own vector type to return a random value after it's been moved from because you wanted to save 1 CPU instruction somewhere.
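To make that concrete, here is a toy sketch (an entirely hypothetical type, not from any real codebase) of a move constructor the language happily accepts:

#include <cstddef>
#include <utility>

struct TinyVec {
    int* data_ = nullptr;
    std::size_t size_ = 0;

    TinyVec() = default;
    explicit TinyVec(std::size_t n) : data_(new int[n]{}), size_(n) {}

    // Steals the buffer but deliberately leaves size_ stale in the source.
    TinyVec(TinyVec&& other) noexcept
        : data_(std::exchange(other.data_, nullptr)), size_(other.size_) {}

    ~TinyVec() { delete[] data_; }  // still fine: delete[] on nullptr is a no-op

    std::size_t size() const { return size_; }  // lies after being moved from
};

// TinyVec a(3);
// TinyVec b(std::move(a));
// a.size() is still 3 even though a owns nothing -- legal, just a footgun.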
An empty std::vector does not require that any buffer is allocated. It just has a null data pointer.
It's not true that the only sensible choice for a moved-from object is to be equivalent to the default-constructed one.
If your move constructor doesn't exist then the copy constructor gets called under the language rules, so the sensible default is actually a copy.
Everything else is an optimisation that has a trade-off.
A conforming implementation of std::list, for example, can have a default constructor and a move constructor that both allocate a sentinel node on the heap, which is why none of the constructors are noexcept.
If you don't allocate a sentinel on the heap, then moving a std::list can invalidate iterators (which is what GNU libstdc++'s implementation chooses).
It's a trade off.
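One way to see that trade-off from the outside is the noexcept-ness of the move constructors. A small sketch; the std::list result is implementation-dependent, which is the whole point:

#include <iostream>
#include <list>
#include <type_traits>
#include <vector>

int main() {
    std::cout << std::boolalpha
              // required to be noexcept since C++17
              << std::is_nothrow_move_constructible_v<std::vector<int>> << '\n'
              // not required to be noexcept, so a sentinel-allocating
              // implementation remains conforming
              << std::is_nothrow_move_constructible_v<std::list<int>> << '\n';
}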
But yes, the implication of C++ move semantics is that every movable object must also define an “empty” (moved-from) state, so you cannot have something like a never-null unique_ptr.
Specifically, it is not allowed for the moved-from object to be inconsistent or to say “using it in any way is UB”, because its destructor will run.
I beg to differ. Humans are fallible. Static analysis of C++ cannot catch all cases and humans will often accept a change that passes the analyses.
You're ignoring how static analysis can be made to err on the side of safety rather than promiscuity.
Specifically, for optional dereferencing, static analysis can be made to disallow it unless it can prove the optional has a value.
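Something like the following is the kind of rule I mean (hypothetical analyzer behaviour, not any specific tool):

#include <optional>

int use(std::optional<int> o) {
    // return *o;       // rejected: the analyzer can't prove o holds a value here

    if (o.has_value()) {
        return *o;      // accepted: dominated by the has_value() check
    }
    return 0;
}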
Ho ho ho good one.
> The following code, for example, simply returns an uninitialized value:

#include <optional>
int f() {
    std::optional<int> x(std::nullopt);
    return *x;
}

tl;dr: use-after-move, or dereferencing null.
The only thing that's less great is that this got so many fewer upvotes than all the Safe-C++ languages that never really had a chance to get into production in old code.
Unfortunately the default changed when C++98 came out, and not everyone bothered to provide at least a hardened mode in debug builds, as VC++ did, followed by GCC, or as compilers in high-integrity computing like Green Hills do.
Sadly the security and quality mentality seems to be a hard sell in areas where folks are supposed to be Engineers and not craftsmen.
#if __cplusplus == 202302L

Are there C++-to-Rust converters? There are definitely C-to-Rust converters, but I haven't heard of anyone attempting to tackle C++.
> After converting C++ to Rust, then convert Rust to C++ and you now have clean code which can continue to use all the familiar tooling.
This only works if a hypothetical C++ to Rust converter converts arbitrary C++ to safe Rust. C++ to unsafe Rust already seems like a huge amount of work, if it's even possible in the first place; further converting to safe Rust while preserving the semantics of the original C++ program seems even more of a pie in the sky.
C++ compilers carry lots of baggage for backwards compatibility.
Your complaint doesn’t look valid to me: the feature in the article is implemented with compiler macros that work with old and new code without changes.
I’m all for C++ making these changes. For a lot of people, adding a bit of safety to the language they’re going to use anyway is a big win. But in general, guarding against threading bugs, use-after-free, or a lot of more obscure memory issues requires either expensive GC-like runtime checks (Fil-C has 0.5x-4x performance overhead and a large memory overhead) or compile-time checks. And C++ will never get Rust’s extensive compile-time checks.
Golang is a different thing altogether (garbage collected), but they still somehow managed to have safety issues.
Google just gave a talk at LLVM US 2025 on the state of the Clang lifetime analyser; the TL;DW is that we're still quite far from the profiles dream.
It's not.
Just reviewing the actual hardening of the standard library: it looks like in C++26 an implementation may be considered hardened, in which case, if certain preconditions don't hold, that counts as a contract violation, which in turn triggers the contract-violation handler, which may or may not result in a predictable outcome depending on which of 4 possible "evaluation semantics" is in effect.
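For the curious, roughly what a hardened precondition looks like under the C++26 contracts design; this is a sketch in the P2900 style, not the actual wording of any standard library implementation:

#include <cstddef>

struct Span {
    int* data;
    std::size_t size;

    // Precondition expressed as a contract assertion.
    int& operator[](std::size_t i) pre(i < size) { return data[i]; }
};

// At build time each translation unit picks an evaluation semantic
// (ignore / observe / enforce / quick_enforce), which decides whether a
// violation of i < size is skipped, reported to the violation handler,
// or terminates the program.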
Oh, and get this... if two different translation units have different evaluation semantics, a situation known as "mixed mode", then you're shit out of luck with respect to any safety guarantees: this document [1] says that mixed-mode applications shall choose arbitrarily among the set of evaluation semantics, and as it turns out the standard library treats one of the evaluation semantics (observe) as undefined behavior. So unless you can get all third-party dependencies to use the same evaluation semantic, you have no way to ensure that your application is actually hardened.
So is C++26 adding changes? Yes, it's adding changes. Are these changes actual improvements? It's way too early to tell, but I do know one thing... it's not at all uncommon for C++ to introduce new features that substitute one set of problems for a new set of problems. There's literally a 300-page book that goes over 20 distinct forms of initializing an object [2], and many of these forms exist to plug problems introduced by previous forms of initialization! For all we know the same thing might be happening here, where the classical "naive" undefined behavior is being alleviated but in the process C++ is introducing an entire new class of incredibly difficult-to-diagnose issues. And lest you think I'm just spreading FUD, consider this quote from a paper titled "C++26 Contracts are not a good fit for standard library hardening" [3], submitted to the C++ committee regarding this upcoming change, arguing that it risks giving nothing more than the illusion of safety:
>This can result in violations of hardened preconditions being undefined behaviour, rather than guaranteed to be diagnosed, which defeats the purpose of using a hardened implementation.
[1] https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2025/p29...
[2] https://www.amazon.ca/dp/B0BW38DDBK?language=en_US&linkCode=...
- P3878 [0] was adopted, so the standard now forbids "observe" semantics for hardened precondition violations. To be fair, the paper doesn't explicitly say how this change interacts with mixed mode contract semantics, and I'm not familiar enough with what's going on to fill in the gaps myself.
- It appears there is interest in adopting one of the changes proposed in D3911 [1], which introduces a way to mark contracts non-ignorable (example syntax is `pre!()` for non-ignorable vs. the current `pre()` for ignorable). A more concrete proposal will be discussed in the winter meeting, so this particular bit isn't set in stone yet.
https://libcxx.llvm.org/Hardening.html
https://gcc.gnu.org/wiki/LibstdcxxDebugMode (was already available for longer, the official hardening might take this over or do something else)
> The possibility to have a well-formed program in which the same function was compiled with different evaluation semantics in different translation units (colloquially called “mixed mode”) raises the question of which evaluation semantic will apply when that function is inline but is not actually inlined by the compiler and is then invoked. The answer is simply that we will get one of the evaluation semantics with which we compiled.
> For use cases where users require strong guarantees about the evaluation semantics that will apply to inline functions, compiler vendors can add the appropriate information about the evaluation semantic as an ABI extension so that link-time scripts can select a preferred inline definition of the function based on the configuration of those definitions.
The entirety of the STL lives in headers, so it's compiled into every single translation unit, including the translation units of third-party dependencies.
Also, it's not just me saying this; it's literally the authors of the MSVC standard library and the GCC standard library pointing out these issues [1]:
If this is the extent of your understanding, it's a fairly good indication you do not have sufficient background on this topic and may be expressing a very strong opinion out of ignorance. It's not at all uncommon that those with the most superficial understanding of a subject express the strongest views about it [1].
Doing a cursory review of some of your recent posts, it looks like this is a common habit of yours.
[1] https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect
This was acknowledged as a bug [0] and fixed in the draft C++26 standard pretty recently.
The proposal simply included a provision to turn off hardening, nothing else. Traditionally these checks were under #ifndef NDEBUG.
(Guessing "the proposal" refers to the hardening proposal?)
I don't think that is correct, since the authors of the hardening proposal agreed that allowing UB for hardened precondition violations was a mistake and that P3878 is a bug fix to their proposal. Presumably the intended way to turn off hardening would be to just... not enable the hardened implementation in the first place?
At least traditionally it was common to not mix debug builds with optimized builds between dependencies, but now with contracts introducing yet another set of orthogonal configuration it will be that much harder to ensure that all dependencies make use of the same evaluation semantic.
Since then, libc++ has categorized the checks by cost and one can scale them back too.
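In libc++'s case the knob is a per-translation-unit macro; sketching from memory of the Hardening doc linked above (double-check the exact spellings there):

// Build e.g. with:
//   clang++ -stdlib=libc++ -D_LIBCPP_HARDENING_MODE=_LIBCPP_HARDENING_MODE_FAST foo.cpp
// Modes, roughly cheapest to most thorough:
//   _LIBCPP_HARDENING_MODE_NONE / _FAST / _EXTENSIVE / _DEBUG
#include <vector>

int main() {
    std::vector<int> v;
    // v[0];  // out of bounds: traps under fast/extensive/debug, plain UB under none
    return 0;
}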
std::vector<int> v;
v.push_back(123);
auto& n = v.front();    // reference into the vector's buffer
v.push_back(456);       // may reallocate, leaving n dangling
auto n_doubled = n * 2; // UB: reads through a potentially dangling reference
A better language is needed in order to prevent such bugs, one where such compile-time correctness checks are possible. Some static analyzers are able to detect this in C++, but only in some cases.

Statically checking this specific example (or similarly simple examples) could be possible, sure. I'm not so sure about more complex cases, such as opaque functions (whether because the function is literally opaque or because not enough inlining occurred), stored references (e.g., std::span), unintentional mutation of the underlying data structure, etc.
That's basically one of the main reasons Rust's lifetimes exist: to explicitly encode how long references remain valid in the type system. C++ doesn't have an equivalent (yet?), so unless you're willing to use global analysis and/or non-standard annotations, there's only so much static analysis can do.
In a lot of cases the solution is already sitting there for you in <algorithm>, though. One of the more common places we’ve encountered this problem is when someone has a simple task like “delete items from this vector that match some predicate” and then just writes a for-loop that does that but doesn’t handle the fact that the iterators can go bad when you modify the vector. The algorithms library has functions to handle that, but without a good mental checklist of what’s in there, people will generally just do the simple (and unfortunately wrong) thing.
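For the predicate-removal case specifically, here is a sketch of what reaching for the library looks like instead of the hand-rolled loop:

#include <algorithm>
#include <vector>

void drop_negatives(std::vector<int>& v) {
    // C++20: non-member erase_if handles the iterator bookkeeping for you.
    std::erase_if(v, [](int x) { return x < 0; });

    // Pre-C++20 spelling, the classic erase-remove idiom:
    // v.erase(std::remove_if(v.begin(), v.end(),
    //                        [](int x) { return x < 0; }),
    //         v.end());
}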
tialaramex•2mo ago
Once again C++ people imagining into existence Undefined Behaviour which isn't Security Critical as if somehow that's a thing.
Mostly I read the link because I was intrigued as to how this counted as "at scale", and it turns out that's misleading: the article's main body is about the (at scale) deployment at Google, not the actual hardening work itself, which wasn't "at scale" in any special way.
AshamedCaptain•2mo ago
The author of TFA actually makes another related assumption:
> A crash from a detected memory-safety bug is not a new failure. It is the early, safe, and high-fidelity detection of a failure that was already present and silently undermining the system.
Not at all? Most memory-safety issues will never even show up on the radar, while with "hardening" you've converted all of them into crashes that for sure will, annoying customers. Surely there must be a middle ground, which leads us back to the "debug mode" that the article is failing to criticize.
charleslmunger•2mo ago
Citation needed? There's all sorts of problems that don't "show up" but are bad. Obvious historical examples would be heartbleed and cloudbleed, or this ancient GTA bug [1].
1: https://cookieplmonster.github.io/2025/04/23/gta-san-andreas...
gishh•2mo ago
Most people around here don’t have any reason to have strong opinions about safety-critical code.
Most people around here spend the majority of their time trying to make their company money via startup culture, the annals of async web programming, and how awful some type systems are in various languages.
Working on safety-critical code with formal verification is the most intense, exhausting, fascinating work I’ve ever done.
Most people don’t work at a company that either needs or can afford a safety-critical toolchain sufficient for formal, certified verification.
The goal of formal verification and safety-critical code is _not_ to eliminate undefined behavior; it is to fail safely. This subtle point seems to have been lost a long time ago with “*-end” developers trying to sell ads, or whatever.
josephg•2mo ago
And what makes you think buggy software only causes problems when hackers get in? Memory bugs cause memory corruption and crashes. I don’t want my pacemaker running somebody’s cowboy C++, even if the device is never connected to the internet.
gishh•2mo ago
> Your average web app can have security-critical issues but they probably won’t have safety-critical issues.
How many air-gapped systems have you worked on?
AlotOfReading•2mo ago
I've worked on safety critical systems with MAC addresses you can ping. Some of those systems were also air-gapped or partially isolated from the outside world. A rare few were even developed as safety critical.
AlotOfReading•2mo ago
You don't inherently need to eliminate UB to define the executable semantics of your code, but in practice you do. You could do binary analysis of the final image instead. You wouldn't even need a qualified toolchain this way. The semantics generated would only be valid for that exact build, and validation is one of the most expensive/time-consuming parts of safety critical development.
Most people instead work at the source code level, and rely on qualified toolchains to translate defined code into binaries with equivalent semantics. Trying to define the executable semantics of source code inherently requires eliminating UB, because the kind of "unrestricted UB" we're talking about has no executable semantics, nor does any code containing it. Qualified toolchains (e.g. CompCert, Green Hills, GCC with Solid Sands, Diab) don't guarantee correct translation of code without defined semantics, and coding standards like MISRA also require eliminating it.
As a matter of actual practice, safety critical processes "optimistically ignore" some level of undefined behavior, but that's not because it's acceptable from a principled stance on UB.
AlotOfReading•2mo ago
Language maintainers have no idea what code will be written. The people writing libraries have no idea how their library will be used. The application developers often don't realize the security implications of their choices. Operating systems don't know much about what they're managing. Users may not even realize what software they're running at all, let alone the many differing assumptions about threat model implicitly encoded into different parts of the stack.
Decades of trying to limit the complexity of writing "security critical code" only to the components that are security critical has resulted in an ecosystem where virtually nothing that is security critical actually meets that bar. Take libxml2 as an example.
FWIW, I disagree with the position in the article that fail-stop is the best solution in general, but there's experimental evidence to support it at least. The industry has tried many different approaches to these problems in the past. We should use the lessons of that history.
hgs3•2mo ago
Unless you're paying them, the people writing the libraries have no obligation to care. The real issue is Big Tech built itself on the backs of volunteer labor and expects that labor to provide enterprise-grade security guarantees. That's entitled and wholly unreasonable.
> Take libxml2 as an example.
libxml2 is an excellent example. I recommend you read what its maintainer has to say [1].
[1] https://gitlab.gnome.org/GNOME/libxml2/-/issues/913#note_243...
AlotOfReading•2mo ago
But this isn't a conversation limited to the big tech parasitism Nick is talking about. A quick check on my FOSS system implicates the text editor, the system monitor, the office suite, the windowing system, the photo editor, flatpak, the IDEs, the internationalization, a few daemons, etc as all depending on libxml2 and its nonexistent security.
criemen•2mo ago
But undefined behavior is literally introduced as "the compiler is allowed to do anything, including deleting all your files". Of course that's security critical by definition?
forrestthewoods•2mo ago
Undefined behavior in the (poorly written) spec doesn't mean undefined behavior in the real world. A given compiler is perfectly free to specify the behavior.
pjmlp•2mo ago
How well this will work out remains to be seen.
Rust still needs to get rid of its dependency on C++ compiler frameworks, and I don't see Cranelift matching GCC and LLVM any time soon.