https://leanpub.com/cppinitbook
(I don't know whether it's good or not, I just find it fascinating that it exists)
It's a systems language. Systems are not sane. They are dominated by nuance. In any case the language gives you a choice in what you pay for. It's nice to be able to allocate something like a copy or network buffer without having to pay for initialization that I don't need.
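For example (a minimal sketch, assuming C++20; make_unique_for_overwrite is the standard way to skip the zero-fill):

    #include <memory>

    void make_buffers()
    {
        // value-initialized: every byte is zeroed before we ever write to it
        auto zeroed = std::make_unique<char[]>(64 * 1024);
        // default-initialized: no zero-fill, ready to be overwritten by a read()/recv()
        auto raw = std::make_unique_for_overwrite<char[]>(64 * 1024);
    }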
you don't know how much C++ code is being written for 100-200MHz CPUs every day
https://github.com/search?q=esp8266+language%3AC%2B%2B&type=...
I have a codebase that is right now C++23 and soon I hope C++26 targeting from Teensy 3.2 (72 MHz) to ESP32 (240 MHz). Let me tell you, I'm fighting for microseconds every time I work with this.
The people who care about clock ticks should be the ones inconvenienced, not ordinary joes who are maintaining a FOSS package that is ultimately struck by a 0-day. It still takes a swiss-cheese lineup to get there, for sure. But one of the holes in the cheese is C++'s default behavior, trying to optimize like it's 1994.
I mean that's pretty much the main reason for using C++ isn't it? Video games, real-time media processing, CPU AI inference, network middleware, embedded, desktop apps where you don't want startup time to take more than a few milliseconds...
In another era we would have just called this optimal. https://x.com/ID_AA_Carmack/status/1922100771392520710
“The systems programmer has seen the terrors of the world and understood the intrinsic horror of existence.”
All well and good if it is something you do not have to modify/maintain on a regular basis. But, if you do, then the ROI on replacing it might be high, depending on how much pain it is to keep maintaining it.
We have an old web app written in asp.net web forms. It mostly works. But we have to maintain it and add functionality to it. And that is where the pain is. We've been doing it for a few years but the amount of pain it is to work on it is quite high. So we are slowly replacing it. One page at a time.
It’s shameful that there’s no good successor to C++ outside of C# and Java (and those really aren’t successors). Carbon was the closest we came and Google seems to have preemptively dropped it.
Look, no one is more excited than me for this, but this is reaching Star Citizen levels of delays.
"COBOL Language Frontend Merged For GCC 15 Compiler" — Michael Larabel, 11 March 2025
And that's not just said out of unfamiliarity. I'm a professional C++ developer, and I often find I'm more familiar with C++'s more arcane semantics than many of my professional C++ developer co-workers.
Would it really be that unreasonable to have initialisation be opt-out instead of opt-in? You'd still have just as much control, but it would be harder to shoot yourself in the foot by mistake. Instead it would just be slightly easier to end up with programs that could be optimised further.
I'm more annoyed that C++ has some way to default-zero-init but it's so confusing that you can accidentally do it wrong. There should be only one very clear way to do this, like you have to put "= 0" if you want an int member to init to 0. If you're still concerned about safety, enable warnings for uninitialized members.
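For illustration, the kind of ambiguity being complained about (my own sketch, not the parent's code):

    struct S { int x; };

    void spellings()
    {
        S a;        // a.x is indeterminate
        S b{};      // b.x is zero-initialized
        S c = S();  // c.x is zero-initialized as well
    }               // three spellings that look interchangeable but aren't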
idk, seems like years of academic effort and research wasted if we do it the way C++ does it
So, do you?
    int i;
does not initialize the value. Compare the cppreference example [0]:

    #include <string>
    struct T1 { int mem; };

    struct T2
    {
        int mem;
        T2() {} // “mem” is not in the initializer list
    };

    int n; // static non-class, a two-phase initialization is done:
           // 1) zero-initialization initializes n to zero
           // 2) default-initialization does nothing, leaving n being zero

    int main()
    {
        [[maybe_unused]]
        int n;            // non-class, the value is indeterminate
        std::string s;    // class, calls default constructor, the value is ""
        std::string a[2]; // array, default-initializes the elements, the value is {"", ""}
    //  int& r;           // Error: a reference
    //  const int n;      // Error: a const non-class
    //  const T1 t1;      // Error: const class with implicit default constructor
        [[maybe_unused]]
        T1 t1;       // class, calls implicit default constructor
        const T2 t2; // const class, calls the user-provided default constructor
                     // t2.mem is default-initialized
    }
The `int n;` at namespace scope, just above `main`, is zero-initialized per the standard. The `int n;` inside `main` is not. And `struct T1 { int mem; };` will have `mem` initialized to 0 if `T1` is instantiated as `T1 t1{};`, but not if it's instantiated as `T1 t1;`. There's no way to tell from looking at `struct T1 { ... }` how the members will be initialized without knowing how the object is constructed. C++ is fun!
[0] https://en.cppreference.com/w/cpp/language/default_initializ...
> "There's a great language somewhere deep inside of C++"
or something to that effect.
But I'm going to go out on a limb here and guess you don't do that. You actually do allow the C++ compiler to make assumptions that are not explicitly in your code, like reorder instructions, hoist invariants, eliminate redundant loads and stores, vectorize loops, inline functions, etc...
All of these things I listed are based on the compiler not doing strictly what you specified but rather reinterpreting the source code in service of speed... but when it comes to the compiler reinterpreting the source code in service of safety.... oh no... that's not allowed, those are training wheels that real programmers don't want...
Here's the deal... if you want uninitialized variables, then explicitly have a way to declare a variable to be uninitialized, like:
    int x = void;
This way, for the very, very rare cases where it makes a performance difference, you can explicitly specify that you want this behavior... and for the overwhelming majority of cases where it makes no performance impact, we get the safe and well-specified behavior.

And if you used it with a default value of 0, you're going to end up operating on the 0th item in the array. That's probably a bug and it may even be a crasher if the array has length 0 and you end up corrupting something important, but the odds of it being disastrous are much lower.
A programmer wants the compiler to accept code that looks like a stupid mistake when he knows it's not.
But he also wants to have the compiler make sure he isn't making stupid mistakes by accident.
How can it do both? They're at odds.
By doing what’s right.
https://en.wikipedia.org/wiki/Principle_of_least_astonishmen...
Most programmers aren't that good and you're mostly running other people's code. Bad defaults that lead to exploitable security bugs are... bad defaults. If you want something to be uninitialized because you know better, then you should be forced to scream it at the compiler.
I've been using C++ for a decade. Of all the warts, they all pale in comparison to the default initialization behavior. After seeing thousands of bugs, the worst have essentially been caused by cascading surprises from initialization UB from newbies.

The easiest, simplest fix is simply to default initialize with a value. That's what everyone expects anyway. Use Python mentality here. Make UB initialization an EXPLICIT choice with a keyword. If you want garbage in your variable and you think that's okay for a tiny performance improvement, then you should have to say it with a keyword. Don't just leave it up to some tiny invisible visual detail no one looks at when they skim code (the missing parens). It really is that easy for the language designers.

When thinking about backward compatibility... keep in mind that the old code was arguably already broken. There's not a good reason to keep letting it compile. Add a flag for --unsafe-initialization-i-cause-trouble if you really want to keep it.
C++, I still love you. We're still friends.
Oh how I wish the C++ committee and compiler authors would adopt this way of thinking... Sadly we're dealing with an ecosystem where you have to curate your compiler options and also use clang-tidy to avoid even the simplest mistakes :/
Like it's insane to me how -Wconversion is not the default behavior.
Many different committees, organizations etc. could benefit, IMO.
I disagree. If you expect anyone to adopt your new standard revision, the very least you need to do is ensure their code won't break just by flipping a flag. You're talking about production software, much of which has decades' worth of commit history; you simply cannot go through each and every single line of a >1M LoC codebase. That's the difference between managing production-grade infrastructure and hobbyist projects.
> If zero initialization were the default and you had to opt-in with [[uninitialized]] for each declaration it’d be a lot safer.
I support that, too. Just seems harder than getting a flag into Clang or GCC.
Ideally, what you want is what Rust and many modern languages do: programs which don't explain what they wanted don't compile, so when you forget to initialize, it won't compile. A Rust programmer can write "Don't initialize this 1024-byte buffer" and get the same (absence of) code, but it's a hell of a mouthful - so they won't do it by mistake.
The next best option, which is what C++26 will ship, is what they called "Erroneous Behaviour". Under EB it's defined as an error not to initialize something you use, but it is also defined what happens, so you can't have awful UB problems; typically the vendor specifies which bit pattern is written to an "uninitialized" object, and that's the pattern you will observe.
Why not zero? Unfortunately zero is too often a "magic" value in C and C++. It's the Unix root user, it's often an invalid or reserved state for things. So while zero may be faster in some cases, it's usually a bad choice and should be avoided.
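As I understand it, the C++26 shape is roughly this (attribute name from the accepted proposal; treat it as a sketch until compilers actually ship it):

    int f()
    {
        int x;                    // C++26: reading this before writing it would be
                                  // "erroneous behaviour" -- diagnosable, but the value is a
                                  // fixed, vendor-chosen pattern rather than full-blown UB
        int y [[indeterminate]];  // explicit opt-out: y really is uninitialized, and
                                  // reading it before writing remains undefined behaviour
        x = 1;
        y = 2;
        return x + y;
    }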
I think you're confusing things. You're arguing about static code analysis being able to identify uninitialized var reads. All C++ compilers already provide support for flags such as -Wuninitialized.
Come on. That's nothing compared to the horrors that lie in manual memory management. Like I've never worked with a C++ based application that doesn't have crashes lurking all around, so bad that even a core dump leaves you clueless as to what's happening. Couple OOP involving hundreds of classes and call chains 50 levels deep with hundreds of threads, and you're hating your life when trying to find the cause of yet another crash.
The code is only broken if the data is used before anything is written to it. A lot of uninitialized data is wrapped by APIs that prevent reading before something was written to it, for example the capacity of a standard vector, or IO buffers that should only access bytes that were already stored in them. I have also worked with a significant number of APIs that expect a large array of POD types and then tell you how many entries they filled.
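The vector case, for illustration (my sketch):

    #include <vector>

    void capacity_demo()
    {
        std::vector<int> v;
        v.reserve(1024);  // room for 1024 ints is allocated, none of them constructed
        // v[0];          // would be out of bounds: size() is still 0, the API hides the raw storage
        v.push_back(42);  // an element only becomes readable once something has been written to it
    }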
> for a tiny performance improvement
Given how Linux allocates memory pages only if they are touched, and many containers intentionally grow faster than they are used? It reduces the number of page faults and memory use significantly if only the used objects get touched at all.
In effect, you are assuming that your uninitialized and initialized variables straddle a page boundary. This is obviously not going to be a common occurrence. In the common case you are allocating something on the heap. That heap chunk descriptor before your block has to be written, triggering a page fault.
Besides: taking a page fault, entering the kernel, modifying the page table page (possibly merging some VMAs in the process) and exiting back to userspace is going to be A LOT slower than writing that variable.
OK you say, but what if I have a giant array of these things that spans many pages. In that case your performance and memory usage are going to be highly unpredictable (after all, initializing a single thing in a page would materialize that whole page).
OK, but vectors. They double in size, right? Well, the default allocator for vectors will actually zero-initialize the new elements. You could write a non-initializing allocator and use it for your vectors - and this is in line with "you have to say it explicitly to get dangerous behavior".
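For reference, the usual shape of such a non-initializing allocator looks roughly like this (a sketch assuming C++11 allocator_traits; often called a "default-init allocator"):

    #include <memory>
    #include <new>
    #include <utility>
    #include <vector>

    template <class T, class A = std::allocator<T>>
    struct default_init_allocator : A {
        using A::A;
        template <class U> struct rebind {
            using other = default_init_allocator<
                U, typename std::allocator_traits<A>::template rebind_alloc<U>>;
        };
        // resize()'s default-insertion requests land here: default-initialize instead of
        // value-initializing, which leaves trivially-constructible elements untouched
        template <class U> void construct(U* p) {
            ::new (static_cast<void*>(p)) U;
        }
        // anything constructed with arguments is forwarded to the underlying allocator unchanged
        template <class U, class... Args> void construct(U* p, Args&&... args) {
            std::allocator_traits<A>::construct(static_cast<A&>(*this), p,
                                                std::forward<Args>(args)...);
        }
    };

    // std::vector<unsigned char, default_init_allocator<unsigned char>> buf;
    // buf.resize(1 << 20);  // storage grows, but resize itself never writes the bytes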
The problem with your assumption is that you're just arguing that it's ok for code to be needlessly buggy if you believe the odds this bug is triggered are low. OP points out a known failure mode and explains how a feature eliminates it. You intentionally ignore it for no reason.
This assumption is baffling when, in the exact same thread, you see people whining about C++ for allowing memory-related bugs to exist.
You are assuming that I am working with small data structures, don't use arrays of data, don't have large numbers of POD members, and so on.
> That heap chunk descriptor before your block has to be written, triggering a page fault.
So you touch one page out of hundreds? The cost is significantly less than the alternative.
> In that case your performance and memory usage are going to be highly unpredictable (after all, initializing a single thing in a page would materialize that whole page).
As opposed to initializing thousands of pages you will never use at once? Or allocating single pages when they are needed?
> Well, the default allocator for vectors will actually zero-initialize the new elements.
I reliably get garbage data after the first reserve/shrink_to_fit calls. Not sure why the first one returns all zero, I wouldn't rely on it.
    struct foo {
        int a = 0;
    };
In Python, which is higher-level ofc, I still have to do `foo = 0`, nice and clear.

It is.
Your example of having a field called `a` that is initialized to 0 is perfectly valid C++ as well but it's not the same as an explicitly declared default constructor.
C does not have member functions, let alone special member functions such as constructors. It's understandable that someone with a C background who never had any experience using a language besides C would struggle with this sort of info.
C++ improved upon C's developer experience by introducing the concept of special member functions. These are functions which the compiler conveniently generates for you when you write a simple class. This covers constructors (copy constructors and move constructors too). This is extremely convenient and eliminates the need for a ton of boilerplate code.
C++ is also smart enough to know when not to write something that might surprise you. Thus, if you add anything to a basic class that would violate the assumptions behind the default implementations of any of these special member functions, C++ simply backs off and doesn't define them.
Now, just because you prevented C++ from automatically defining your constructors, that does not mean you don't still want them without having to write the boilerplate yourself. Thus, C++ allows developers to ask for the default implementations of these special member functions explicitly. That's what the default keyword is used for.
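For example (a hypothetical Widget):

    struct Widget {
        explicit Widget(int id) : id_(id) {}  // adding this suppresses the implicit default ctor...
        Widget() = default;                   // ...so we ask the compiler to generate it anyway
        Widget(const Widget&) = default;      // copies were never suppressed here, so this one
                                              // is redundant, but it documents the intent
        int id_ = 0;
    };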
Now, to me this sort of complaining just sounds like nitpicking. The whole purpose of special member functions and default implementations is to help developers avoid writing boilerplate code to have basic implementations of member functions you probably need anyway. For basic, predictable cases, C++ steps in and helps you out. If you prevent C++ from stepping in, it won't. Is this hard to understand?
More baffling, you do not have to deal with these scenarios if you just declare and define the special member functions you actually want. This was exactly how this feature was designed to work. Is this too hard to follow or understand?
I think the problem with C++ is that some people who are clearly talking out of ignorance feel the need to fabricate arguments about problems you will experience if you a) don't know what you are doing at all and aren't even interested in learning, b) you want to go way out of your way to nitpick about a tool you don't even use. Here we are, complaining about a keyword. If we go through the comments, most of the people doing the bulk of the whining don't even know what it means or how it's used. They seem to be invested in complaining about things they never learned about. Wild.
A wonderful exploration of an underexplored topic--I've pre-ordered the hard copy and have been following along with the e-book in the interim.
C++ sucks, it's too hard to use, the compiler should generate stores all over the place to preemptively initialize everything!
Software is too bloated, if we optimized more we could use old hardware!
Usually what happens is the language requires you to initialize the variable before it's read for the first time, but this doesn't have to be at the point of declaration. Like in Java you can declare a variable, do other stuff, and then initialize it later... so long as you initialize it before reading from it.
Note that in C++, reading from a variable before writing to it is undefined behavior, so it's not particularly clear what benefit you're getting from this.
You gain the benefit that the compiler can assume the code path in question is impossible to reach, even if there's an obvious way to reach it. To my understanding, this can theoretically back-propagate all the way to `main()` and make the entire program a no-op.
The compiler cannot always tell if a variable will be written to before it is accessed. If you have a 100 kB network buffer and you call `int read = opaque_read(buffer);`, the compiler cannot tell how much (if anything) was written to the buffer or how `read` relates to it; it would be forced to initialize every byte in it to zero. A programmer can read the API docs, see that only the first `read` bytes are valid, and use the buffer without ever touching anything uninitialized. Now add in that you can pass mutable pointers and references to nearly anything in C++, and the compiler has a much harder time telling whether it has to initialize arguments passed to functions or whether the function does the initialization for it.
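Concretely, the pattern looks something like this (`opaque_read` and `process` are hypothetical stand-ins, and the size parameter is my assumption):

    int opaque_read(char* dst, int cap);  // hypothetical API: returns how many bytes it wrote
    void process(char byte);              // hypothetical consumer

    void handle_packet()
    {
        char buffer[100 * 1024];          // deliberately left uninitialized
        int read = opaque_read(buffer, sizeof buffer);
        for (int i = 0; i < read; ++i)
            process(buffer[i]);           // only bytes the API reported as written are touched
    }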
No, it is bonkers; stick to your consistent point, please.
These two should have exactly the same effect:
    bar() = default;      // inside class declaration
    bar::bar() = default; // outside class declaration

The only difference between them should be analogous to the difference between an inline and a non-inline function. For instance, it might be that the latter one is slower than the former, because the compiler doesn't know from the class declaration that the default constructor is actually not user-defined but default. How it would work is that a non-inline definition is emitted, which dutifully performs the initialization, and that definition is actually called.
That's what non-bonkers might look like, in any case.
I.e. both examples are rewritten by the compiler into
    bar() { __default_init; }
    bar::bar() { __default_init; }

where __default_init is a fictitious placeholder for the implementation's code generation strategy for doing that default initialization. It would behave the same way, other than being inlined in the one case and not in the other.

Another way that it could be non-bonkers is if default were simply not allowed outside of the class declaration:

    bar::bar() default; // error, too late; class declared already!

Something that has no hope of working right and is easily detectable by syntax alone should be diagnosed. If default only works right when it is present at class declaration time, then ban it elsewhere.

If you want indiscriminate initialization, a compiler flag is the way, not forcing it in the source code.
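If I understand the rule being complained about correctly, the observable difference is roughly this (a constructor defaulted outside the class counts as user-provided, which changes value-initialization):

    struct A { A() = default; int x; };  // defaulted on first declaration: not user-provided
    struct B { B();           int x; };
    B::B() = default;                    // defaulted outside the class: user-provided

    void value_init()
    {
        A a{};  // value-initialization zero-initializes first: a.x is 0
        B b{};  // value-initialization just runs B's constructor: b.x stays indeterminate
    }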
Related: Initialization in C++ is Seriously Bonkers (166 points, 2019, 126 comments) https://news.ycombinator.com/item?id=18832311