https://leanpub.com/cppinitbook
(I don't know whether it's good or not, I just find it fascinating that it exists)
Knowing all of it just isn't possible from my experience.
It's a systems language. Systems are not sane. They are dominated by nuance. In any case the language gives you a choice in what you pay for. It's nice to be able to allocate something like a copy or network buffer without having to pay for initialization that I don't need.
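For what it's worth, C++20 even has a standard spelling for this; a minimal sketch (make_unique_for_overwrite is real, the fill step is a hypothetical placeholder):

#include <cstddef>
#include <memory>

int main() {
    constexpr std::size_t n = 64 * 1024;
    // make_unique_for_overwrite (C++20) default-initializes the array, so the
    // bytes are left uninitialized rather than zero-filled.
    auto buf = std::make_unique_for_overwrite<unsigned char[]>(n);
    // read_from_network(buf.get(), n);  // hypothetical step that actually writes the bytes
}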
That's because only this feature introduces the potential for a safety problem, which is amusing because it doesn't actually do anything per se; it will often emit zero CPU instructions.
It's unsafe because before we called this function we had a MaybeUninit<T> and, well, as it said, maybe it isn't initialized, so nothing will assume it is. But once we assume_init, we've got a T, and all the code working with that T, including safe Rust code, is entitled to ignore any scenarios that could arise if it were not, in fact, initialized properly. Despite not "doing" anything in some sense, the unsafe call was critical.
you don't know how much C++ code is being written for 100-200MHz CPUs every day
https://github.com/search?q=esp8266+language%3AC%2B%2B&type=...
I have a codebase that is right now C++23 and soon I hope C++26 targeting from Teensy 3.2 (72 MHz) to ESP32 (240 MHz). Let me tell you, I'm fighting for microseconds every time I work with this.
the people who care about clock ticks should be the ones inconvenienced, not ordinary joes who are maintaining a FOSS package that is ultimately struck by a 0-day. It still takes a swiss-cheese lineup to get there, for sure. But one of the holes in the cheese is C++'s default behavior, trying to optimize like it's 1994.
I mean, that's pretty much the main reason for using C++, isn't it? Video games, real-time media processing, CPU AI inference, network middleware, embedded, desktop apps where you don't want startup time to take more than a few milliseconds...
In another era we would have just called this optimal. https://x.com/ID_AA_Carmack/status/1922100771392520710
“The systems programmer has seen the terrors of the world and understood the intrinsic horror of existence.”
This has become a bit less true in C17 and C23, but a lot of that is driven by the urge from WG21 (the C++ committee) to have WG14 (C language) do their work for them, hopefully some WG14 members will push back against that.
As for the latest ISO C revisions, not sure if it is doing any WG21 work other than the whole #embed drama; rather it looks to me like it's pushing for a C++ without Classes, with a much worse design, e.g. _Generic.
pub fn main() void {
var x: i32;
...
}
If you do actually want an uninitialized variable, you say so: var x: i32 = undefined;
This rule also takes care of structs by applying recursively.

All well and good if it is something you do not have to modify/maintain on a regular basis. But, if you do, then the ROI on replacing it might be high, depending on how much pain it is to keep maintaining it.
We have an old web app written in asp.net web forms. It mostly works. But we have to maintain it and add functionality to it. And that is where the pain is. We've been doing it for a few years but the amount of pain it is to work on it is quite high. So we are slowly replacing it. One page at a time.
It’s shameful that there’s no good successor to C++ outside of C# and Java (and those really aren’t successors). Carbon was the closest we came and Google seems to have preemptively dropped it.
Look, no one is more excited than me for this, but this is reaching Star Citizen levels of delays.
I think Cpp2 / cppfront will become the successor language instead.
"COBOL Language Frontend Merged For GCC 15 Compiler", Michael Larabel, 11 March 2025
I think we can do significantly better than C++ as a systems language. We just haven’t landed on a language that really nails the design.
Not really. Rust, ATS, D, and some implementations of Lisp and even Haskell if you slay the dragon of actually learning GHC. Modern C++ is honestly overrated in my opinion as an extensive user of the language with 300k lines in a modern dialect alone. It's a pain in the ass to step through in a visual debugger and core dumps may as well be useless no matter the platform. It's extremely grating to read and to write. Compilers have a tendency to crash like crazy if you get too cute. There's still no compile-time (or runtime) reflection. I would literally rather be writing proofs for a dependent type system than deal with heavy template metaprogramming.
I didn’t say C++ was amazing, just that recent dialects are better than the alternatives in practice, many of which I have also used for similar code.
Rust is not a substitute for C++ unless your code isn't all that low-level; it lives closer to the abstraction level of Java. There are a number of odd gaps in the Rust feature set for low-level systems programming (database kernels in my case), the rigid ownership/lifetime model doesn't play nicely with fairly standard systems-y things like DMA, and the code always runs slower for some inexplicable reason.
I’d love something to replace C++ but to be a candidate it can’t be less expressive, slower, and so rigid that it is difficult to do some ordinary systems-y things. A nerfed programming language is not the answer to this question.
I disagree. Having to juggle two type systems and two languages simultaneously is inherently unclean, to say nothing of how noisy and unergonomic templates end up being. For anything non-trivial they're a mess. Imagine I handed you a codebase with a few hundred templated structures, averaging about 50 parameters each, many of which are variadic, many of which are nesting as parameters within each other. As you climb through this pile, you end up in a totally different layer, where these more complicated templated structures are passing around and working on a much larger collection of simpler templated structures and constexpr. You're not going to have a fun time, no matter how comfortable you are with templates.
> I can’t remember the last time I saw a compiler crash
How long of a parameter pack do you think it will take to cause a crash? Of course eventually the compiler has no more memory to spare, and that'll be a crash. Clang will helpfully complain:
warning: stack nearly exhausted; compilation time may suffer, and crashes due to stack overflow are likely
But I assure you, the compiler will crash long before it runs out of memory. If you're in the domain of non-trivial meta-templates, these packs can explode to ludicrous sizes from recursive concatenation, and especially if you have a logic error somewhere that generates and concatenates packs you didn't want. And that's just the obviously intuitive example. Now contextualize this in the codebase I describe above.

The more you push templates as a turing-complete language to their maximum potential, the more compiler issues you run into. Templates are probably the most rigid and unstable metaprogramming facility I've experienced, honestly. GCC and cl.exe are the worst for it.
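A hedged sketch of the kind of blow-up I mean (not from that codebase; each recursion level doubles the parameter pack, so it ends up holding 2^N arguments):

template <typename... Ts> struct pack {};

template <int N, typename... Ts>
struct explode {
    using type = typename explode<N - 1, Ts..., Ts...>::type;
};

template <typename... Ts>
struct explode<0, Ts...> {
    using type = pack<Ts...>;
};

using fine = explode<4, int>::type;      // a pack of 16 ints: no problem
// using boom = explode<28, int>::type;  // ~268 million arguments: expect the warning above, then a crash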
> it lives closer to the abstraction level of Java.
That's interesting, because Java isn't very abstracted over the JVM. It's that extra layer of the JVM being an abstraction over another CPU architecture that makes Java abstract. C and C++ are arguably more abstracted, because they have to be. But just like Rust, they provide good general semantics for any register machine, so the abstractions can mostly be compiled away.
> the rigid ownership/lifetime model doesn’t play nicely with fairly standard systems-y things like DMA
Choosing references as your default is insane, honestly. Safe Rust is akin to writing formally verified software in the mental and ergonomic overhead it incurs. That's not coincidental. I've come to the conclusion one of the biggest mistakes of the community is not pushing pointer-only Rust harder, given they want it to be taken seriously as a systems language.
Rust's safety is irrelevant as far as I'm concerned. It's nice that it has it, but I (and everybody else working in C++) am used to working without it and don't really miss it.
> and the code always runs slower for some inexplicable reason.
As you come to understand Rust as a set of assembler macros and rewrite rules applied over them, I've found they're neither more nor less granular than the C/C++ ones. They're just a different family, and it stems from a different style of assembly programming (of which there are many). If you've ever translated a lisp with manual memory management onto a register machine, it's similar to that way of thinking about how to compose LOAD/STORE/JUMP + arithmetic, which it got from the ISWIM/ML heritage. Just like with C/C++, everything else is syntax sugar and metaprogramming. At the core of it, you should still be able to translate your Rust code into your target's machine code in your head relatively trivially.
That's an interesting perspective I hadn't heard before, but I think there are a couple problems. One, the culture is really against using unsafe unless it's impossible to write the code using safe Rust. This happened with the Actix web drama in 2020 [1]. And, there is the opinion that unsafe Rust is harder than C [2]. Not just that, but unsafe Rust is really ugly and annoying to use compared to safe Rust. I think that hinders the potential adoption of your idea, too.
[1]: https://steveklabnik.com/writing/a-sad-day-for-rust/
[2]: https://chadaustin.me/2024/10/intrusive-linked-list-in-rust/
> The results show that, in the cases we evaluated, the performance gains from exploiting UB are minimal. Furthermore, in the cases where performance regresses, it can often be recovered by either small to moderate changes to the compiler or by using link-time optimizations.
And that's not just said out of unfamiliarity. I'm a professional C++ developer, and I often find I'm more familiar with C++'s more arcane semantics than many of my professional C++ developer co-workers.
Would it really be that unreasonable to have initialisation be opt-out instead of opt-in? You'd still have just as much control, but it would be harder to shoot yourself in the foot by mistake. Instead it would be slightly more easy to get programs that can be optimised.
I'm more annoyed that C++ has some way to default-zero-init but it's so confusing that you can accidentally do it wrong. There should be only one very clear way to do this, like you have to put "= 0" if you want an int member to init to 0. If you're still concerned about safety, enable warnings for uninitialized members.
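Default member initializers already get most of the way there, for what it's worth (struct and member names here are mine):

struct Settings {
    int retries = 0;       // default member initializer: applied by the implicit default constructor
    int timeout_ms = 250;
};

int main() {
    Settings s;            // s.retries == 0 and s.timeout_ms == 250, with or without braces
}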
The almost is unfortunate.
Things can change & grow, that's why we make new standards in the first place.
idk, seems like years of academic effort and research wasted if we do it the way C++ does it
In a sane language that would be distinguishable by having the direction explicit (i.e. things like in/out/ref in C#), and then compiler could complain for in/ref but not for out. But this is C++, so...
So, do you?
int i;
does not initialize the value.

#include <string>
struct T1 { int mem; };
struct T2
{
int mem;
T2() {} // “mem” is not in the initializer list
};
int n; // static non-class, a two-phase initialization is done:
// 1) zero-initialization initializes n to zero
// 2) default-initialization does nothing, leaving n being zero
int main()
{
[[maybe_unused]]
int n; // non-class, the value is indeterminate
std::string s; // class, calls default constructor, the value is ""
std::string a[2]; // array, default-initializes the elements, the value is {"", ""}
// int& r; // Error: a reference
// const int n; // Error: a const non-class
// const T1 t1; // Error: const class with implicit default constructor
[[maybe_unused]]
T1 t1; // class, calls implicit default constructor
const T2 t2; // const class, calls the user-provided default constructor
// t2.mem is default-initialized
}
That first `int n;` at namespace scope is initialized to 0 per the standard. The `int n;` inside `main()` is not. And `struct T1 { int mem; };` will have `mem` initialized to 0 if `T1` is instantiated like `T1 t1{};`, but not if it's instantiated like `T1 t1;`. There's no way to tell from looking at `struct T1 {...}` how the members will be initialized without knowing how the object is declared.

C++ is fun!
[0] https://en.cppreference.com/w/cpp/language/default_initializ...
> "There's a great language somewhere deep inside of C++"
or something to that effect.
(the original pre-ISO C++ is basically C with Simula bolted onto it.)
But I'm going to go out on a limb here and guess you don't do that. You actually do allow the C++ compiler to make assumptions that are not explicitly in your code, like reorder instructions, hoist invariants, eliminate redundant loads and stores, vectorize loops, inline functions, etc...
All of these things I listed are based on the compiler not doing strictly what you specified but rather reinterpreting the source code in service of speed... but when it comes to the compiler reinterpreting the source code in service of safety.... oh no... that's not allowed, those are training wheels that real programmers don't want...
Here's the deal... if you want uninitialized variables, then explicitly have a way to declare a variable to be uninitialized, like:
int x = void;
This way for the very very rare cases where it makes a performance difference, you can explicitly specify that you want this behavior... and for the overwhelming majority of cases where it makes no performance impact, we get the safe and well-specified behavior.

And if you used it with a default value of 0, you're going to end up operating on the 0th item in the array. That's probably a bug and it may even be a crasher if the array has length 0 and you end up corrupting something important, but the odds of it being disastrous are much lower.
If they added an explicit uninitialized value representation to the language, I bet it would look something like this:
int x {std::uninitialized<int>::value};
std::nullopt doesn't seem so different to what I was talking about; I guess it's just less verbose. When I wrote that, I was thinking of things like "std::is_same<T1, T2>::value" being there.
int x [[indeterminate]];
I'm not kidding here.

The whole idea of optimizations is producing code that's equivalent to the naive version you wrote. There is no inconsistency here.
https://godbolt.org/z/Y4Yjb7z9c
clang notices that in the second loop I'm multiplying by 0, and thus the result is just 0, so it just returns that. Critically, this is not "exactly and only what the programmer specifies", since I very much told it to do all those additions and multiplications and it decided to optimize them away.
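The godbolt link is abbreviated, so here's a stand-in with the same shape (not the actual linked code): the second function multiplies every term by zero, and clang folds the whole loop away.

int sum(const int* v, int n) {
    int acc = 0;
    for (int i = 0; i < n; ++i)
        acc += v[i];         // kept: the sum is observable
    return acc;
}

int sum_times_zero(const int* v, int n) {
    int acc = 0;
    for (int i = 0; i < n; ++i)
        acc += v[i] * 0;     // every term is zero, so the whole loop folds to "return 0"
    return acc;
}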
https://godbolt.org/z/jzWWTW85j
Compile that without optimizations and you get one set of output, compile it with optimizations and you get another. There is actually an entire suite of exceptions to the "as-if" rule.

The second point is that the whole reason for having an "as if" rule in the first place is to give permission for the compiler to discard the literal interpretation of the source code and instead only consider semantics that are defined to be observable, which the language standard defines, not you the developer.
There would be no need for an "as if" rule if the compiler strictly did exactly what it was told. Its very existence should be a clue that the compiler is allowed to reinterpret the source code in ways that do not reflect its literal interpretation.
What does this mean, since I am not writing assembly and there is no specified correspondence between assembly and C++?
I'm not sure why you're bringing up assembly but it suggests that you might not correctly understand the example I provided for you, which I reiterate has absolutely nothing to do with assembly.
I agree you have a valid point though. I'd be interested to know the committee's reasoning.
This is what the D programming language does. Every var declaration has a well-known value, unless it is initialized with void. This is nice; optimizing compilers are able to drop useless assignments anyway.
A programmer wants the compiler to accept code that looks like a stupid mistake when he knows it's not.
But he also wants to have the compiler make sure he isn't making stupid mistakes by accident.
How can it do both? They're at odds.
By doing what’s right.
https://en.wikipedia.org/wiki/Principle_of_least_astonishmen...
Most programmers aren't that good and you're mostly running other people's code. Bad defaults that lead to exploitable security bugs are... bad defaults. If you want something to be uninitialized because you know it then you should be forced to scream it at the compiler.
Many of these can't be blamed on C holdover. For example Vector.at(i) versus Vector[i] – most people default to the latter and don't think twice about the safety implications. The irony is that most of the time when people use std::vector, performance is irrelevant and they'd be much better off with a safe default.
Alas, we made our bed and now we have to lie in it.
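Concretely, the at()/operator[] difference mentioned above:

#include <cstdio>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3};
    // v[10] would be undefined behaviour: no bounds check, it may return garbage or crash.
    try {
        int x = v.at(10);    // bounds-checked: throws std::out_of_range instead
        std::printf("%d\n", x);
    } catch (const std::out_of_range&) {
        std::puts("index out of range");
    }
}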
Good luck with that.
I've been using C++ for a decade. Of all the warts, they all pale in comparison to the default initialization behavior. After seeing thousands of bugs, the worst have essentially been caused by cascading surprises from initialization UB from newbies.

The easiest, simplest fix is simply to default initialize with a value. That's what everyone expects anyway. Use Python mentality here. Make UB initialization an EXPLICIT choice with a keyword. If you want garbage in your variable and you think that's okay for a tiny performance improvement, then you should have to say it with a keyword. Don't just leave it up to some tiny invisible visual detail no one looks at when they skim code (the missing parens).

It really is that easy for the language designers. When thinking about backward compatibility... keep in mind that the old code was arguably already broken. There's not a good reason to keep letting it compile. Add a flag for --unsafe-initialization-i-cause-trouble if you really want to keep it.
C++, I still love you. We're still friends.
Oh how I wish the C++ committee and compiler authors would adopt this way of thinking... Sadly we're dealing with an ecosystem where you have to curate your compiler options and also use clang-tidy to avoid even the simplest mistakes :/
Like, it's insane to me how -Wconversion is not the default behavior.
Many different committees, organizations etc. could benefit, IMO.
I disagree. If you expect anyone to adopt your new standard revision, the very least you need to do is ensure their code won't break just by flipping a flag. You're talking about production software, much of which has decades' worth of commit history; you simply cannot spend the time going through each and every single line of code of a >1M LoC codebase. That's the difference between managing production-grade infrastructure and hobbyist projects.
But the annoyance comes when dealing with multiple compilers and versions. Then you have to add more compatibility macros all over. Say, when being a library vendor trying to support broad range of customers.
The tooling already exists. The bulk of the criticism in this thread is clearly made from a position of ignorance. For example, all major compilers already provide flags to enable checks for uninitialized variables being used. Onboarding a static code analysis tool nowadays requires setting a flag in CMake.
These discussions would be constructive if those engaging in them had any experience at all with the language and tooling. But no, it seems the goal is to parrot cliches out of ignorance. Complaining that they don't know what a reserved word means and using that as an argument to rewrite software in other languages is somehow something worth stating.
Why would you expect that a new revision can't cause existing code to stop compiling? It means that "new" revisions can't fix old problems, and one thing you always get more of over time is perspective.
If you don't want your code "broken", don't migrate to a new standard. That's the point of supporting old standards. Don't hobble new standards because you both want new things, but don't want old things to change.
For starters, because that would violate the goals and priorities of C++ as established by the C++ standardization committee.
I could go on and on, but it's you who should provide any semblance of rationale: why do you believe existing software should break? What value do you see in it? Does it have any value at all?
How exactly can a _new_ standard cause existing software compiling on an old standard to break?
EDIT: As to the value, I've mentioned elsewhere in this thread; we know a lot more about the practice of programming than when C++ was created and standardised. Why would we say those design decisions can never be questioned or changed? It locks into place choices we never knew were bad, until they caused decades of problems.
> that would violate the goals and prioritites of the C++ as established by the C++ standardization committee.
Correct. It's these goals and priorities that are being criticised.
They can keep using the old standard.
But they cannot upgrade. Ever. At least without requiring major maintenance work. Which means never. Do you understand what that means?
Name a single long running compiler suite that can compile every version it once did.
Is this inaccurate?
GCC rarely (basically never) supports a standard completely. Every release improves conformance, fixes errors, sometimes drops support for targets, perhaps changes how poorly defined areas work.
As a result, likely no version compiles code the same as a previous version across all possible code.
You’ll also note GCC doesn’t likely compile the variants of C++ it did before the 1998 version.
And this type of page is just the tip of the iceberg https://gcc.gnu.org/gcc-13/porting_to.html. You’ll note changes made that can render previously compilable code no longer compilable.
So no, you cannot naively think a new compiler version will simply run all your old code. On big projects upgrading is a huge endeavor to minimize problems.
We could do every supported language the same way.
> If zero initialization were the default and you had to opt-in with [[uninitialized]] for each declaration it’d be a lot safer.
I support that, too. Just seems harder than getting a flag into Clang or GCC.
You don't care because your job is not to ensure that a new release of C++ doesn't break production code. You gaze at your navel and pretend that's the universe everyone is bound to. But there are others using C++, and using it in production software. Some of them care, and your subjective opinions don't have an impact in everyone else's requirements.
> I only have to work with Clang, personally.
Read Clang's manual and check what compiler flags you need to flip to get that behavior. It's already there.
I believe this only helps for trivial automatic variables; not non-trivial automatic variables (structs/classes) that contain uninitialized trivial members.
I wonder if (for stack variables) this is due to an implementation detail in the compiler. After all, non-trivial classes have 'actual' constructors that run and that is supposed to initialize their respective class instances...
Ideally, what you want is what Rust and many modern languages do: programs which don't explain what they wanted don't compile, so, when you forget to initialize that won't compile. A Rust programmer can write "Don't initialize this 1024 byte buffer" and get the same (absence of) code but it's a hell of a mouthful - so they won't do it by mistake.
The next best option, which is what C++ 26 will ship, is what they called "Erroneous Behaviour". Under EB it's defined as an error not to initialize something you use but it is also defined what happens so you can't have awful UB problems, typically it's something like the vendor specifies which bit pattern is written to an "unintialized" object and that's the pattern you will observe.
Why not zero? Unfortunately zero is too often a "magic" value in C and C++. It's the Unix root user, it's often an invalid or reserved state for things. So while zero may be faster in some cases, it's usually a bad choice and should be avoided.
I think you're confusing things. You're arguing about static code analysis being able to identify uninitialized var reads. All C++ compilers already provide support for flags such as -Wuninitialized.
(Safe) Rust does guarantee to identify uninitialised variable reads, but I believe the point is that you can get the optimisation of not forcing early initialisation in Rust, you just have to be explicit that that's what you want (you use the MaybeUninit type); you're forced to be clear that that's what you meant, not just by forgetting parens.
let mut jim: Goat;
// Potentially much later ...
if some_reason {
jim = make_a_new_goat();
} else {
jim = get_existing_goat();
}
use(jim); // In some way we use that goat now
The compiler can see OK, we eventually initialized this variable before we used it, there's no way we didn't initialize it, so that's fine, this compiles.

But, if we screw up and make it unclear whether jim is initialized, probably because in some cases it wouldn't be - that doesn't compile.
This is the usual "avoid early initialization" C++ programmers are often thinking of and it doesn't need MaybeUninit, since it's definitely fine if you're correct, it's just that the C++ compiler is happy (before C++ 26) with just having Undefined Behaviour if you make any mistakes and the Rust compiler will reject that.
[Idiomatically this isn't good Rust, Rust is an expression language so we can just write all that conditional if-else block in the initializer itself and that's nicer, but if you're new to this the above works fine.]
That's great. You can get that check on C++ projects by flipping a compiler flag.
Aren't we discussing C++?
Your knowledge doesn't seem to even reach the point of having googled the topic. If you googled it once, you'd not be commenting it doesn't exist. Hell, you don't even seem to have read the thread, let alone the discussion.
I'll note that you have failed to name this "obvious" flag that I'm missing.
From my experience, you're exaggerating the number of false negatives, which is more a factor of how you write your code than what static code analyzers do.
Also, your comment reads like an attempt at moving the goal post. We started this discussion being very adamant in accusing C++ of making it impossible to detect uninitialized var reads. Once that assertion is thoroughly proven to be false and the result of clueless ignorance, now we try to reframe it as being... imperfect in some hypothetical scenarios? So what's supposed to be the actual complaint?
The main problem with C++ is that some people somehow are personally invested in criticizing it from a position of complete ignorance. The problem is not technical, it's social.
Come on. That's nothing compared to the horrors that lie in manual memory management. Like, I've never worked with a C++ based application that doesn't have crashes lurking all around, so bad that even a core dump leaves you clueless as to what's happening. Couple OOP involving hundreds of classes and 50-level-deep call chains with 100s of threads and you're hating your life when trying to find the cause of yet another crash.
Have you tried fixing the bugs in your code?
That strategy has been followed by people writing code in every single language, and when used (even with C++) you do drive down the number of these crashes to a residual/purely theoretical frequency.
Scenarios such as those you've described are rare. There should be more to them than the tool you're using to do your job. So why blame the tool?
Good programmers have long ago written best practices guides based on hard learned experience. Newer languages (like Rust) were designed by people who read those guides and made a language that made using those features hard.
The code is only broken if the data is used before anything is written to it. A lot of uninitialized data is wrapped by APIs that prevent reading before something was written to it, for example the capacity of a standard vector; buffers for IO should only access bytes that were already stored in them. I have also worked with a significant number of APIs that expect a large array of POD types and then tell you how many entries they filled.
> for a tiny performance improvement
Given how Linux allocates memory pages only if they are touched, and many containers intentionally grow faster than they are used? It reduces the amount of page faults and memory use significantly if only the used objects get touched at all.
In effect, you are assuming that your uninitialized and initialized variables straddle a page boundary. This is obviously not going to be a common occurrence. In the common case you are allocating something on the heap. That heap chunk descriptor before your block has to be written, triggering a page fault.
Besides: taking a page fault, entering the kernel, modifying the page table page (possibly merging some VMAs in the process) and exiting back to userspace is going to be A LOT slower than writing that variable.
OK you say, but what if I have a giant array of these things that spans many pages. In that case your performance and memory usage are going to be highly unpredictable (after all, initializing a single thing in a page would materialize that whole page).
OK, but vectors. They double in size, right? Well, the default allocator for vectors will actually zero-initialize the new elements. You could write a non-initializing allocator and use it for your vectors - and this is in line with "you have to say it explicitly to get dangerous behavior".
The problem with your assumption is that you're just arguing that it's ok for code to be needlessly buggy if you believe the odds this bug is triggered are low. OP points out a known failure mode and explains how a feature eliminates it. You intentionally ignore it for no reason.
This assumption is baffling when, in the exact same thread, you see people whining about C++ for allowing memory-related bugs to exist.
You failed to read what I wrote. I referred to why clients would choose to not initialize early to avoid scenarios such as Linux overcommitting, not that Linux had a bug.
Either you're replying without bothering to read the messages you're replying to, or you're failing to understand what is being written.
> If you want to avoid this (for some reason), choosing not to initialize early is not going to have the intended effect.
Read PP's comment.
You are assuming that I am working with small data structures, don't use arrays of data, don't have large amounts of POD members, ... .
> That heap chunk descriptor before your block has to be written, triggering a page fault.
So you allocate one out of hundreds of pages? The cost is significantly less than the alternative.
> In that case your performance and memory usage are going to be highly unpredictable (after all, initializing a single thing in a page would materialize that whole page).
As opposed to initializing thousands of pages you will never use at once? Or allocating single pages when they are needed?
> Well, the default allocator for vectors will actually zero-initialize the new elements.
I reliably get garbage data after the first reserve/shrink_to_fit calls. Not sure why the first one returns all zero, I wouldn't rely on it.
Sounds like a great set of use cases for explicit syntax to opt out of automatic initialization.
Reserve will not initialize, but then you have to keep track of the real vector size on the side, inevitably leading to bugs. Alternatively, something like this https://stackoverflow.com/questions/15967293/how-to-make-my-... will make resize() leave the elements uninitialized.
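Roughly the shape of that trick, for anyone curious (a sketch, not the exact code behind the link): an allocator whose construct() default-initializes instead of value-initializing, so resize() leaves new trivially-constructible elements untouched.

#include <memory>
#include <utility>
#include <vector>

template <typename T, typename Base = std::allocator<T>>
class default_init_allocator : public Base {
public:
    template <typename U>
    struct rebind {
        using other = default_init_allocator<
            U, typename std::allocator_traits<Base>::template rebind_alloc<U>>;
    };

    using Base::Base;

    template <typename U>
    void construct(U* p) {
        ::new (static_cast<void*>(p)) U;   // default-init: ints and other PODs stay uninitialized
    }
    template <typename U, typename... Args>
    void construct(U* p, Args&&... args) {
        std::allocator_traits<Base>::construct(
            static_cast<Base&>(*this), p, std::forward<Args>(args)...);
    }
};

int main() {
    std::vector<int, default_init_allocator<int>> buf;
    buf.resize(1024);   // the 1024 ints are *not* zeroed; only read what you have written
}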
You already do that when you use push_back. It tracks the size for you, overallocates to amortize the cost of growing and most importantly does not initialize the overallocated memory before it is used, meaning pages will not be touched / mapped by the OS unless you actually end up using it. Giving you the benefit of amortizing vector growth without paying for the uninitialized memory it allocates behind the scenes for future use.
Directly accessing reserved memory instead of using resize was to check if the allocator zero initialized that overallocated memory. That the parts that are used end up initialized at a later point is entirely irrelevant to my point.
So your previous point:
> They double in size, right? Well, the default allocator for vectors will actually zero-initialize the new elements.
They double in capacity, not size when used with push_back. Which means exactly one new element will be initialized no matter how much uninitialized/unused/unmapped capacity the vector allocates for future use.
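In other words (a small sketch of the size/capacity distinction):

#include <vector>

int main() {
    std::vector<int> v;
    v.reserve(1000);    // raw capacity only: no elements are constructed, the library zeroes nothing
    v.push_back(42);    // constructs exactly one int; size() == 1, capacity() >= 1000
    // v[1] is out of bounds here, even though the capacity exists
}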
Reminder that compiler devs are usually paid by trillion dollar companies that make billions with 'old code'.
struct foo {
int a = 0;
};
In Python, which is higher-level ofc, I still have to do `foo = 0`, nice and clear.

It is.
Your example of having a field called `a` that is initialized to 0 is perfectly valid C++ as well but it's not the same as an explicitly declared default constructor.
If the constructor is "default" then why do you need to explicitly set it? Yeah, I know some objects don't have constructors, but it would make more sense if you had to explicitly delete the default constructor, or if the keyword "trivial" were used instead of default.
C does not have member functions, let alone special member functions such as constructors. It's understandable that someone with a C background who never had any experience using a language besides C would struggle with this sort of info.
C++ improved upon C's developer experience by introducing the concept of special member functions. These are functions which the compiler conveniently generates for you when you write a simple class. This covers constructors (copy constructors and move constructors too). This is extremely convenient and eliminates the need for a ton of boilerplate code.
C++ is also smart enough to know when not to write something that might surprise you. Thus, if you add anything to a basic class that would violate assumptions on how to generate default implementations for any of these special member functions, C++ simply backs off and doesn't define them.
Now, just because you prevented C++ from automatically defining your constructors, that does not mean you don't want them without having to add your boilerplate code. Thus, C++ allows developers to define these special member functions using default implementations. That's what the default keyword is used for.
Now, to me this sort of complaining just sounds like nitpicking. The whole purpose of special member functions and default implementations is to help developers avoid writing boilerplate code to have basic implementations of member functions you probably need anyway. For basic, predictable cases, C++ steps in and helps you out. If you prevent C++ from stepping in, it won't. Is this hard to understand?
More baffling, you do not have to deal with these scenarios if you just declare and define the special member functions you actually want. This was exactly how this feature was designed to work. Is this too hard to follow or understand?
I think the problem with C++ is that some people who are clearly talking out of ignorance feel the need to fabricate arguments about problems you will experience if you a) don't know what you are doing at all and aren't even interested in learning, b) you want to go way out of your way to nitpick about a tool you don't even use. Here we are, complaining about a keyword. If we go through the comments, most of the people doing the bulk of the whining don't even know what it means or how it's used. They seem to be invested in complaining about things they never learned about. Wild.
I don't think so. There might be tough topics, but special member functions ain't it. I mean, think about what you are saying.
* Is it hard to write a constructor?
* Is it hard to understand that a C++ compiler can help you out and write one for you when you write a basic class?
* Is it hard to understand if you step in and start defining your own constructors and destructors, C++ compilers step out of your way and let you do the work?
Let's now frame it the other way around: why don't you write all the constructors and destructors you need? Would it be nice if the compiler did at least some of that work? I mean, not all of it, but at least just the basic stuff. It could safely fill in constructors and destructors at least for the most basic cases without stepping on your toes. When you need something fancy or specialized, you could just do the work yourself. Does this sound good? Or is this too hard to understand?
But add templates to the mix and a generic default becomes quite useful.
int x = x;
has ever made sense? In some historic minimalist direction?

(And if you have to ask: x is initialized — with the uninitialized value of x)
struct foo { foo& p; ... };
foo x{x, ...};
Still, this is hardly a good justification for it to be the default behavior. There's a reason why ML has `let rec`.

The address of a variable does not depend on its value, and can be known and used before the variable is defined. At no point in your example is the value of "x" used before the end of initialization.
However, in order to allow that, the language goes and allows the use of uninitialized value too. That is just plain horrible design.
I actually used the reference deliberately here because, unlike a pointer, a reference cannot be rebound.
But yes, I agree, this isn't good design. Off the top of my head I can think of at least one way to enable the same thing with a tiny bit of special (and rather obvious) syntax without introducing a footgun. It wouldn't even need any new keywords!
foo x{this x, ...};
A wonderful exploration of an underexplored topic--I've pre-ordered the hard copy and have been following along with the e-book in the interim.
C++ sucks, it's too hard to use, the compiler should generate stores all over the place to preemptively initialize everything!
Software is too bloated, if we optimized more we could use old hardware!
Usually what happens is the language requires you to initialize the variable before it's read for the first time, but this doesn't have to be at the point of declaration. Like in Java you can declare a variable, do other stuff, and then initialize it later... so long as you initialize it before reading from it.
Note that in C++, reading from a variable before writing to it is undefined behavior, so it's not particularly clear what benefit you're getting from this.
You gain the benefit that the compiler can assume the code path in question is impossible to reach, even if there's an obvious way to reach it. To my understanding, this can theoretically back-propagate all the way to `main()` and make the entire program a no-op.
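A small hedged illustration of that kind of deduction (what a compiler may do, not what any particular one is guaranteed to do):

int f(bool flag) {
    int x;          // uninitialized
    if (flag)
        x = 1;
    return x;       // reading x when flag is false would be UB, so the compiler may
                    // assume flag is always true and fold the function to "return 1;"
}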
Nobody has written an actual format-hard-disk or start-world-war-III routine, because we're not crazy, but the alarming function naming is intended to signal what a terrible idea it is to have this programming language at all.
The compiler cannot always tell if a variable will be written to before it is accessed. If you have a 100kb network buffer and you call int read = opaque_read(buffer); the compiler cannot tell how much, or whether anything at all, was written to buffer, or how read relates to it, so it would be forced to initialize every byte in it to zero. A programmer can read the API docs, see that only the first read bytes are valid, and use the buffer without ever touching anything uninitialized. Now add in that you can pass mutable pointers and references to nearly anything in C++, and the compiler has a much harder time telling whether it has to initialize arguments passed to functions or whether the function is doing the initialization for it.
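Spelled out, the pattern being described (opaque_read and process are placeholder names, not real APIs):

#include <cstddef>

int opaque_read(char* dst, std::size_t cap);   // writes some prefix of dst, returns its length
void process(const char* data, int len);

void handle_packet() {
    char buffer[100 * 1024];                   // deliberately left uninitialized
    int read = opaque_read(buffer, sizeof buffer);
    process(buffer, read);                     // only bytes [0, read) are ever touched
}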
The problem was never "This is always a bad idea" and instead only "This is usually a bad idea so you should need to explicitly ask for it when you want it, so that when somebody writes it by mistake they can be told about that".
Rust chooses to need a lot of ceremony to make a MaybeUninit, and especially to then assume_init with no justification because it's not actually initialized - but that's because almost nobody who wants to write that understands what will actually happen, they typically have the C programmer "All the world's a PDP-11" understanding and in that world this operation has well understood and maybe even desirable behaviour. We do not live in that world, this is not a PDP-11 and it isn't shy about that. The thing they want is called a "freeze semantic" and it's difficult to ensure on modern systems.
But if you hate the ceremony, regardless of the fact there's a good reason for it - many languages have less ceremony, often just a handful of keystrokes to say "Don't initialize this" will be enough to achieve what you intended. What they don't do, which C++ did, was just assume the fact you forgot to write an initializer means you want the consequences - when in reality it usually means you're just human and make mistakes.
They are finally fixing that in C++26 where it's no longer undefined behavior, it's "erroneous behavior" which will require a diagnostic and it has to have some value and compilers aren't allowed to break your code anymore.
I mean sure, the real cause is more likely their incompetent use of the wrong algorithms and data structures or the fact that it's too hard to rely on external dependencies so they're still using a stdlib feature that's known to be significantly slower than the best efforts but eh, it's just a single #include away.
We had languages with default initialization 30 years ago used to write DOS software running on 80386. It was plenty fast.
No, it is bonkers; stick to your consistent point, please.
These two should have exactly the same effect:
bar() = default; // inside class declaration
bar::bar() = default; // outside class declaration
The only difference between them should be analogous to the difference between an inline and non-inline function.

For instance, it might be that the latter one is slower than the former, because the compiler doesn't know from the class declaration that the default constructor is actually not user-defined but defaulted. How it would work is that a non-inline definition is emitted, which dutifully performs the initialization, and that definition is actually called.
That's what non-bonkers might look like, in any case.
I.e. both examples are rewritten by the compiler into
bar() { __default_init; }
bar::bar() { __default_init; }
where __default_init is a fictitious place holder for the implementation's code generation strategy for doing that default initialization. It would behave the same way, other than being inlined in the one case and not in the other.

Another way that it could be non-bonkers is if default were simply not allowed outside of the class declaration.
bar::bar() default; // error, too late; class declared already!
Something that has no hope of working right and is easily detectable by syntax alone should be diagnosed. If default only works right when it is present at class declaration time, then ban it elsewhere.

If you want indiscriminate initialization, a compiler flag is the way, not forcing it in the source code.
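For anyone following along, the observable difference being argued about looks roughly like this (struct names are mine):

#include <type_traits>

struct A {
    int x;
    A() = default;       // defaulted on its first declaration: not "user-provided"
};

struct B {
    int x;
    B();                 // declared here...
};
B::B() = default;        // ...defaulted later: now it counts as "user-provided"

static_assert(std::is_trivially_default_constructible_v<A>);
static_assert(!std::is_trivially_default_constructible_v<B>);

int main() {
    // Value-initialization behaves differently too: A a{} zero-initializes a.x,
    // while B b{} just runs B's constructor and leaves b.x indeterminate.
    A a{};
    B b{};
    (void)a; (void)b;
}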
""" Default member initializers define a default value at the point of declaration. If there is a member that cannot be defined in such a way, it suggests that there may be no legal mechanism by which a default constructor can be defined. """
Of course we should provide a mechanism to allow large arrays to remain uninitialized, but this should be an explicit choice, rather than the default behaviour.
However, will it happen? It's arguably the easiest thing C++ could do to make software safer, but there appears to be no interest in the committee to do anything with safety other than talk about it.
The question is what to do about it - balancing the cost of change to code and to engineers who learned it.
> but there appears to be no interest in the committee to do anything with safety other than talk about it.
There is plenty of interest in improving C++ safety. It’s a regular topic of discussion.
Part of that discussion is how it will help actual code bases that exist.
Should the committee do some breaking changes to make HN commenters happier, who don’t even use the language?
https://open-std.org/jtc1/sc22/wg21/docs/papers/2023/p2754r0... provides what appears to be the answer to this question: "No tools will be able to detect existing logical errors since they will become indistinguishable from intentional zero initialization. The declarations int i; and int i = 0; would have precisely the same meaning."

...yes, they would. _That's the point_. The paper has it exactly the wrong way around: currently tools cannot distinguish between logical error and intentional deferred initialization, but having explicit syntax for the latter would make the intention clear 100% of the time. Leaving a landmine in the language just because it gives you more warnings is madness. The warning wouldn't be needed to begin with, if there were no landmine.
I'm not sure what you mean with "who don't even use the language". Are you implying that only people that program professionally in C++ have any stake in reliable software?
No. I’m saying people who don’t understand C++ aren’t going to have good ideas about how to make initialization better.
And yes - comments which simply name-call can be safely discarded.
Please read this thread, in particular the comments by James20k (who, incidentally, is a C++ standards committee member): https://www.reddit.com/r/cpp/comments/yzhh73/p2723r0_zeroini...
This was three years ago. He enumerates numerous reasons for introducing zero-initialisation as I described, and yet somehow here we are, with C++26 just around the corner, and safety on everyone's lips. Is it part of C++26 now? Nope...
int const<const> auto(decltype(int)) x requires(static) = {{{}}};
And when asked how on earth did this happen and why, there will be the same "we must think about the existing code, the defaults were very poor".

Meanwhile they absolutely could make sane defaults when you plonk "#pragma 2033" in the source (or something; see e.g. Baxter's Circle compiler), but where would be the fun of that.
They still use single pass compiling (and order of definitions) as the main guiding principle...
> we must think about the existing code, the defaults were very poor"
What does adding bad features have to do with maintaining defaults?
template<typename T> T&& function_1(remove_reference_t<T> &x) {
return static_cast<T &&>(x);
}
template<typename T> remove_reference_t<T> && function_2(T &&x) {
return static_cast<remove_reference_t<T &&> >(x);
}
Which one is move and which is forward?

Now, since all this jenga must remain functional, the only option they have to try and fix stuff that is a complete misfeature (like "this" being a pointer) is either requiring you to write make_this_code_not_stupid keywords everywhere ("explicit", "override", ...), or introducing magic rewriting rules (closures, range for, ...).
Some fixes go much lower, "decltype(auto)" being a pinnacle of language design that is unlikely to ever be surpassed.
It's also not clear to me what this has to do with ODR, ADL etc. That is to say, it's obviously a problem with a historical design decision (implicit overrides) that now has to be kept for backwards compatibility reasons, making it impossible to require a `new` annotation above and beyond opt-in compiler warnings. But that particular problem is not a uniquely C++ one, seeing how implicit overrides are actually more common than explicit ones in mainstream OOP languages, largely because they have all inherited this choice from either Simula (as C++ did) or Smalltalk. C# is more of an exception in that regard.
As phrased, you clearly want the answer to this question to be no, but the irony there is that that is how you kill a language. This is simply survivor bias, like inspecting the bullet damage only on the fighter planes that survive. You should also be listening to people who don't want to use your language to understand why they don't want that, especially people that stopped using the language. Otherwise you risk becoming more and more irrelevant. It won't all be valuable evidence, but they are clearly the people that cannot live with the problems. When other languages listen, better alternatives arise.
Uh yes. It’s phrased that way because it’s absurd. About half the comments in this section are a form of name calling by people who don’t understand constructors/destructors.
Those people who have no insight into how to make initialization better.
> Otherwise you risk becoming more and more irrelevant
Relevancy is relative to an audience. You want to listen to people who care and have your interests in mind.
C++ and Java are the most relevant languages in terms of professional software engineering.
> people who care and have your interests in mind
that have valuable insights about the problems of the language and so understand constructors and destructors but don't make you susceptible to survivor bias are
>> especially people that stopped using the language.
People that don't want to learn the language in the first place offer a different kind of insight, insight into a world where constructors and destructors aren't needed.
> Of course we should provide a mechanism to allow large arrays to remain uninitialized, but this should be an explicit choice, rather than the default behaviour.
First you are saying "cost is minimal, even negative" and then already arguing against it in the next paragraph. The general cost over several large codebases has been observed to be minimal.
Is this unexpected? A large code base has a lot of other things going on, and it is normal that such changes will be a rounding error. There are lots of other bottlenecks that will just overwhelm such a change. I don't think "it is not affecting large code bases as much" is a good argument; you can use it for pretty much anything that adds overhead.

Not to mention that if you change every `int a` to `int a = 0` right now in those code bases, the `a = 0` part will likely be optimized away, since that value is not being (and shouldn't be) used at all and will likely be overwritten on all code paths.
But yeah, most structs have a good zero value so a shorthand to create that can be ergonomic over forced explicitness.
Lists of the “good parts” of C++ over C usually include RAII. But if we imagine starting with C and adding C++ features to see when complexity explodes, I think the worst offender is the constructor/destructor.
They require the language to perfectly track the lifetime of each member of every structure. If you resize a vector, every entry must call a constructor. If exceptions are possible, it must insert little cleanup calls into all possible code paths.
Want to make a copy of something? Who is responsible for calling the constructor/destructor? Want to make a struct? What if one member requires construction? How do you handle a union?
The result is micromanaging and turning most operations into O(n) init/cleanup calls.
The modern C approach avoids all of this and allows you to manage pieces of memory - rather than values. Zero initialize or leave uninitialized.
So what do we lose? Well classes own resources. If you have a vector<MyObject> and MyObject has a member vector<Items> then we should be able to cleanup without looking inside each member of each element.
I think we should separate resource allocation from use. Allocators are the things that care about cleanup, move, etc. This should be the exception - rather than the default way to think about structs.
What do you mean? The compiler will do it for you.
> This should be the exception - rather than the default way to think about structs.
the way that RAII in C++ recursively constructs and destroys arbitrary object graphs is extremely powerful. It is something that very few other languages have (Rust, any other?). It should definitely be the default.
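Concretely (a trivial sketch, mirroring the vector<MyObject>/vector<Item> example upthread):

#include <string>
#include <vector>

struct Item { std::string name; };
struct MyObject { std::vector<Item> items; };

int main() {
    std::vector<MyObject> objects(3);
    // When `objects` goes out of scope, every MyObject, every nested vector<Item>
    // and every std::string is destroyed recursively, in the right order, with no
    // user-written cleanup code anywhere.
}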
> I think we should separate resource allocation from use. Allocators are the things that care about cleanup, move, etc.
I'm not sure what you mean by use. If you mean we should separate allocation from construction, I agree! But then so does C++. They are tied by default, but it is easy to separate them if you need it.
It will. And the only cost to you is you have to understand initializer lists, pod types, copy constructors, move constructors, launder, trivially_destructible, default initialization, uninitialized_storage, etc.
> the way that RAII in C++ recursively construct and destroys arbitrary object graphs is extremely powerful
And extremely complex.
The big fallacy here is that you would want to manage resources at the individual node level - rather than for a batch.
> I'm not sure what you mean by use
It’s similar to the idea of arenas (also made difficult by constructors btw). You can make sophisticated systems for managing allocations of individual nodes in a graph - like reference counted smart pointers. Or you can avoid the problem entirely by deallocating the whole group at once.
Imagine if C++ structs were required to be pod and classes were not. Then you could always know that a struct can be trivially allocated/deallocated etc.
Then you could design data structures for pod types only that didn’t have to worry about O(n) cleanup and init.
And what’s the cost?
Note that a reference to a class is still POD. It just doesn’t have ownership.
Also, I'm not making a specific policy proposal. I'm identifying constructors and destructors as the source of complexity. What should we do about it?
> and force people to think about POD or not
No such thing as feature you don’t have to understand. The reason this article exists is that C++ programmers must deal with complex initialization behavior.
Can a C++ programmer not understand move semantics? Or copy constructors?
Most of the time you don't need to think about move or copy - the compiler does the right thing for you. When there are exceptions it is generally when you need to disable them which is simple enough. When you think about constructors you only need to think about the local class in general, and not how they interact with the larger world.
constructors and destructors eliminate a lot more complexity than they introduce. They enable RAII and thus clear ownership of everything (not just memory!).
static_assert(is_trivial_v<T>)
So no this doesn’t solve C++ complexity including the initialization problem.
Essentially instead of doing
object.freeze();
object.setProperties(...);
object.thaw();
I tried to do {
Freezer freezer(object);
object.setProperties(...);
}
EVERY time I need a start/end matching function I used RAII instead because it looked neat. I tried to make "with" from Python in C++.

The problem is that exceptions thrown in a destructor can't be caught, which made this coding style practically impossible to use. Thawing the properties meant dispatching property change events, which meant that event handlers all around the app would be executed inside a destructor, unaware that they were being executed inside a destructor. If any of these threw an exception, the whole thing crashed. In hindsight, dispatching these events from a setter instead of from the main event loop also creates all sorts of trouble.
object.batchEvents([&]{
object.setProperties(...);
});
(for bonus points, pass object reference as argument to the lambda)
Related: Initialization in C++ is Seriously Bonkers (166 points, 2019, 126 comments) https://news.ycombinator.com/item?id=18832311