Chuanqi says "The data I have obtained from practice ranges from 25% to 45%, excluding the build time of third-party libraries, including the standard library."[1]
[1]: https://chuanqixu9.github.io/c++/2025/08/14/C++20-Modules.en...
If these aren't compelling, there's no real reason.
Program
- Module
  - Module Partition
whereas in module systems that support module visibility, like Rust’s, you can decompose your program at multiple abstraction levels:
Program
- Private Module
  - Private Module
    - Private Module
- Public Module
  - Public Module
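For concreteness, a rough sketch of what the two C++ levels above look like in source (the module name, file names, and extensions are illustrative and toolchain-dependent):

// math.cppm -- primary module interface unit (the "Module" level)
export module math;
export import :geometry;   // pull in and re-export the partition

// math_geometry.cppm -- the "Module Partition" level
export module math:geometry;
export double area(double w, double h) { return w * h; }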
Maybe I am missing something. It seems like you will have to rely on discipline and documentation to enforce clean code layering in C++. "Just one more level bro, I swear. One more".
I fully expect to sooner or later see a retcon on why really, two is the right number.
Yeah, I'm salty about this. "Submodules encourage dependency messes" is just trying to fix substandard engineering across many teams via enforcement of somewhat arbitrary rules. That has never worked in the history of programming. "The determined Real Programmer can write FORTRAN programs in any language" is still true.
And sure, "future extension" is nice. But not if the future arrives at an absolutely glacial pace and is technically more like the past.
This may be inevitable given the widespread use of the language, but it's also what's dooming the language to be the next COBOL. (On the upside, that means C++ folks can write themselves a yacht in retirement ;)
But, for the folks who didn't grow up with the Real Programmer jokes, this is rooted in a context of FORTRAN 77. Which was, uh, not famous for its readability or modularity. (But got stuff done, so there's that)
I think F77 was a pretty well designed language, given the legacy stuff it had to support.
But it was also behind the times. And, if we're fair, half of its reputation comes from the fact that half of the F77 code was written by PhDs, who usually have... let's call it a unique style of writing software.
[them] How can we get our code to work on the IBM?
[me] (examines code) This only looks vaguely like Fortran.
[them] Yes, we used all these wonderful extensions that Digital provides!
[me] (collapse on the floor laughing) (recover) Hmm. Go see Mike (our VAX systems programmer). You may be able to run on our VAXen, but I can't imagine it running on the IBMs without a major rewrite. Had they stuck to F77 there would have been few problems, and I could have helped with them.
Portability is always worth aiming for, even if you don't get all the way there.
Rust, Modula-2 and Ada are probably the only ones with module nesting.
However, those are a different kind of module: they are part of the type system and are manipulated via functors.
Log scale: Less than 3% done, but it looks like over 50%.
Estimated completion date: 10 March 2195
It would be less funny if they used an exponential model for the completion date to match the log scale.
- import boost.type_index;
- import macro-defined;
- import BS.thread_pool;
You are excused if the site misleads anybody, just because you published "Estimated completion date: 2195". That's just so awesome. Kudos.
It seems likely I’ll have to move away from C++, or perhaps more accurately it’s moving away from me.
How well does this usually work, by the way?
It kinda is. The C++ committee has been getting into a bad habit of dumping lots of not-entirely-working features into the standard and ignoring implementer feedback along the way. See https://wg21.link/p3962r0 for the incipient implementer revolt going on.
So many features are starting to land that feel increasingly DOA; we seriously need a language fork.
alignas(16) char buf[128];
What type is buf? What alignment does that type have? What alignment does buf have? Does the standard even say that alignof(buf) is a valid expression? The answers barely make sense. Given that this is the recommended replacement for aligned_storage, it’s kind of embarrassing that it works so poorly. My solution is to wrap it in a struct so that at least one aligned type is involved and so that static_assert can query it.
but in the rare case you need code like that, be glad C++ has you covered
> but in the rare case you need code like that, be glad C++ has you covered
I strongly disagree. alignof(buf) works correctly but is a GCC extension. alignof(decltype(buf)) is 1, because alignas is a giant kludge instead of a reasonable feature. C++ only barely has me covered here.
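For what it's worth, a minimal sketch of the struct-wrapping workaround described above (the name AlignedBuf is made up):

// wrap the buffer so a named type carries the alignment
struct alignas(16) AlignedBuf {
    char data[128];
};
static_assert(alignof(AlignedBuf) == 16, "alignment is now queryable");

AlignedBuf buf;   // buf.data starts at a 16-byte boundary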
Just like Python was to blame for the horrible 2-to-3 switch, C++ is to blame for the poor handling of modules. They shouldn't have pushed through a significant backwards-incompatible change if the wide variety of vendor toolchains wasn't willing to adopt it.
But you might not be able to use libraries that insist upon modules. There won't be many until modules are widespread.
There literally isn't a plan or direction in place to add any way to compete with Rust in the safety space currently. They've got maybe until C++29 to standardise lifetimes, and then C++ will transition to a legacy language.
The C++ WG keeps looking down at C and the old C++ sins, sees their unsafety, and still thinks that's the problem to fix.
Rust looks the same way at modern C++. The std collections and smart pointers already existed before the Rust project was started. Modern C++ is the safety failure that motivated the creation of Rust.
when people say this do they have like any perspective? there are probably more cpp projects started in one week (in big tech) than rust projects in a whole year. case in point: at my FAANG we have probably like O(10) rust projects and hundreds of cpp projects.
as they say "citation needed"
The current solution chosen by compilers is to basically have a copy of your code for every dependency that wants to specialize something.
For template heavy code, this is a combinatorial explosion.
It does not work very well at all if your goal is to port your current large codebase to incrementally use modules to save on compile time and intermediate code size.
It’s regrettable that the question of whether a type meets the requirements to call some overload or to branch in a particular if constexpr expression, etc, can depend on what else is in scope.
In Haskell, you can't ever check that a type doesn't implement a type class.
In Golang, a type can only implement an interface if the implementation is defined in the same module as the type.
In C++, in typical C++ style, it's the wild west: the compiler doesn't put up guard rails, and it does what you would expect if you think about how the compiler works, which probably isn't what you want (see the sketch below).
I don't know what Rust does.
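To make the C++ point concrete, a small sketch (Widget and to_text are made-up names): which branch describe() takes depends entirely on whether a matching overload is visible where the template is instantiated.

#include <iostream>

struct Widget {};
const char* to_text(Widget) { return "Widget"; }   // delete this overload and
                                                   // describe() silently switches branches

template <typename T>
void describe(const T& x) {
    if constexpr (requires { to_text(x); }) {      // C++20 requires-expression
        std::cout << to_text(x) << '\n';
    } else {
        std::cout << "no to_text overload in scope\n";
    }
}

int main() { describe(Widget{}); }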
Generic code is stored in libraries as MIR, which is halfway between an AST and LLVM IR. It still gets monomorphized and is slow to optimize, but at least it doesn't pay the reparsing cost.
The actual rule is more complex due to generics:
https://github.com/rust-lang/rfcs/blob/master/text/2451-re-r...
and that document doesn’t actually seem to think that this particular property is critical.
It's a great idea when not abused too much for creating weird little DSLs that no one is able to read.
So no, modules aren't even here, let alone to stay.
Never mind using modules in an actual project when I could repro a bug so easily. The people preaching modules must not be using them seriously, or otherwise I simply do not understand what weed they are smoking. I would very much like to stand corrected, however.
You are spoiled by the explosive growth of open source and the ease of accessing source code. Lots of closed source commercial libraries provide some .h files and a .so file. And even when open source, when you install a library from a package from a distribution or just a tarball, it usually installs some .h files and a .so file.
The separation between interface and implementation into separate files was a good idea. The idea seemed to be going out of vogue but it’s still a good idea.
I'm mostly talking about modules for internal implementation, which is likely to be the bulk of the exports. Yes, it's understandable that for dll / so files exporting something for external executables is more complicated, also because of ABI compatibility concerns (we use things like extern "C"). So yes, the header approach might be justified in this case, but as I stated, such exports are probably a fraction of all exports (if they are needed at all). I'll still prefer modules wherever it's possible to avoid headers.
However, as soon as you do C++, that goes away. With C++ you need the implementation of templates available to the consumer (except for cases with a limited set of types, where you can extern them), and in many cases you get many small functions (basic operator implementations, begin()/end() for iterators in all variations, etc.) which benefit from inlining and thus need to be in the header.
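The "extern them" aside refers to explicit instantiation declarations; a rough sketch with made-up names:

// sum.h -- the template definition still lives in the header ...
#include <vector>
template <typename T>
T sum(const std::vector<T>& v) {
    T total{};
    for (const T& x : v) total += x;
    return total;
}
// ... but for a known, limited set of types you can suppress implicit
// instantiation in every consumer:
extern template int sum<int>(const std::vector<int>&);

// sum.cpp -- one explicit instantiation definition for the whole program
template int sum<int>(const std::vector<int>&);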
Oh, and did I mention class declarations and the class size ... Or, more generally, and even with plain C: as soon as the client needs to know the size of a type (to be able to allocate it, have an array of those, etc.) you can't provide just the size by itself; you have to provide the full type declaration, with all its member types down the rabbit hole, until somewhere you introduce a pointer-to-opaque-type indirection.
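A minimal sketch of that pointer-to-opaque-type escape hatch (Widget and the function names are made up):

// widget.h -- the client never sees the size or the members
struct Widget;                        // incomplete type
Widget* widget_create();
void    widget_destroy(Widget* w);

// widget.cpp -- only here is the full layout (and thus the size) known
struct Widget {
    int id;
    double value;
};
Widget* widget_create()           { return new Widget{}; }
void    widget_destroy(Widget* w) { delete w; }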
And then there are macros ...
Modules attempt to do that better, by providing just the interface in a file. But hey, the C++ standard doesn't "know" about those, so module interface files aren't a portable thing ...
I disagree. In fact, I would expect the following could be a pretty reasonable exercise in a book like "Software Tools"[1]: "Write a program to extract all the function declarations from a C header file that does not contain any macro-preprocessor directives." This requires writing a full C lexer; a parser for function declarations (but for function and struct bodies you can do simple brace-matching); and nothing else. To make this tool useful in production, you must either write a full C preprocessor, or else use a pipeline to compose your tool with `cpp` or `gcc -E`. Which is the better choice?
However, I do see that the actual "Software Tools" book doesn't get as advanced as lexing/parsing; it goes only as far as the tools we today call `grep` and `sed`.
I certainly agree that doing the same for C++ would require a full-blown compiler, because of context-dependent constructs like `decltype(arbitrary-expression)::x < y > (z)`; but there's nothing like that in K&R-era C, or even in C89.
No, I think the only reason such a declaration-extracting tool wasn't disseminated widely at the time (say, the mid-to-late 1970s) is that the cost-benefit ratio wouldn't have been seen as very rewarding. It would automate only half the task of writing a header file: the other and more difficult half is writing the accompanying code comments, which cannot be automated. Also, programmers of that era might be more likely to start with the header file (the interface and documentation), and proceed to the implementation only afterward.
[1] - K&P's "Software Tools" was originally published in 1976, with exercises in Ratfor. "Software Tools in Pascal" (1981) is here: https://archive.org/details/softwaretoolsinp00kern/
There you go. You just threw away the most difficult part of the problem: the macros. Even a medium-sized C library can have maybe 500 lines of dense macros with ifdef/endif/define which depends on the platform, the CPU architecture, as well as user-configurable options at ./configure time. Should you evaluate the macro ifdefs or preserve them when you extract the header? It depends on each macro!
And your tool would still be highly incomplete, because it only handles function declarations, not struct definitions or the typedefs you expect the users to use.
> the other and more difficult half is writing the accompanying code comments, which cannot be automated
Again disagree. Newer languages have taught us that it is valuable to have two syntaxes for comments, one intended for implementation and one intended for the interface. It’s more popularly known as docstrings but you can just reuse the comment syntax and differentiate between // and /// comments for example. The hypothetical extractor tool will work no differently from a documentation extractor tool.
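A tiny sketch of that convention as it might look in C++ (length is a made-up example); the /// lines are what an extractor would pull out next to the declaration:

#include <cmath>

/// Returns the Euclidean length of the 2-D vector (x, y).
/// (interface comment: belongs to the extracted documentation)
double length(double x, double y);

double length(double x, double y) {
    // implementation comment: stays with the code, never extracted
    return std::sqrt(x * x + y * y);
}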
Implementing a C preprocessor is tedious work, but it's nothing remotely complex in terms of challenging data structures, algorithms, or requiring sophisticated architecture. It's basically just ensuring your preprocessor implements all of the rules, each of which is pretty simple.
It’s a cute learning project for a student of computer science for sure. It’s not remotely a useful software engineering tool.
I agree that the tool I sketched wouldn't let your .h file contain macros, nor C99 inline functions, nor is it clear how it would distinguish between structs whose definition must be "exported" (like sockaddr) and structs where a declaration suffices (like FILE). But:
- Does our hypothetical programmer care about those limitations? Maybe he doesn't write libraries that depend on exporting macros. He (counterfactually) wants this tool; maybe that indicates that his preferences and priorities are different from his contemporaries'.
- C++20 Modules also do not let you export macros. The "toy" tool we can build with 1970s technology happens to be the same in this respect as the C++20 tool we're emulating! A modern programmer might indeed say "That's not a useful software engineering tool, because macros" — but I presume they'd say the exact same thing about C++20 Modules. (And I wouldn't even disagree! I'm just saying that that particular objection does not distinguish this hypothetical 1970s .h-file-generator from the modern C++20 Modules facility.)
[EDIT: Or to put it better, maybe: Someone-not-you might say, "I love Modules! Why couldn't we have had it in the 1970s, by auto-generating .h files?" And my answer is, we could have. (Yes it couldn't have handled macros, but then neither can C++20 Modules.) So why didn't we get it in the 1970s? Not because it would have been physically difficult at all, but rather — I speculate — because for cultural reasons it wasn't wanted.]
Then whatever is relevant to the public interface gets generated by the compiler and put in a .h file. This file is not put in the same directory as the .c file, so as not to encourage people to put the .h file in version control.
The version of modules that got standardized is anything but that. It's an incredibly convoluted mess that requires an enormous amount of effort for little benefit.
I'd say C++ as a whole is a complete mess. While it's powerful (including OOP), it's a complicated and inconsistent language with a lot of historical baggage (40+ years). That's why people and companies still search for (or even already use) viable replacements for C++, such as Rust, Zig, etc.
https://devblogs.microsoft.com/cppblog/integrating-c-header-...
That is not the same as using modules, which they have not done.
I do agree, it's not _exactly_ the same as using _named modules_, but header units share almost identical machinery in the compiler with named modules. This makes the (future planned) transition to named modules a lot easier since we know the underlying machinery works.
The actual blocker for named modules is not MSVC, it's other compilers catching up--which clang and gcc are doing quite quickly!
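Roughly, the difference in use looks like this (the header and module names are placeholders); a header unit keeps the header's macros visible to the importer, while a named module exposes only what it exports:

import <vector>;            // header unit built from a standard header
import "thread_pool.h";     // header unit built from a project header
import my.library;          // named module: exported declarations only, no macros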
(Buy my book)
- Single translation unit (main.cpp)
- Include all other cpp files in main
- Include files in dependency order (no forward declarations)
- No circular dependencies between files
- Each file has its own namespace (e.g. namespace draw in draw.cpp)
This works well for small to medium sized projects (on the order of 10k lines). I suspect it will scale to 100k-1M line projects as long as there is minimal use of features that kill compile times (e.g. templates).
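A minimal sketch of that layout (file and namespace names are illustrative); everything funnels through main.cpp, and each included file keeps to its own namespace:

// draw.cpp -- no header, just a namespace
namespace draw {
    void line(int x0, int y0, int x1, int y1) { /* render a line */ }
}

// game.cpp -- may freely use draw::, since draw.cpp is included first
namespace game {
    void run() { draw::line(0, 0, 10, 10); }
}

// main.cpp -- the single translation unit handed to the compiler
#include "draw.cpp"
#include "game.cpp"

int main() { game::run(); }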
I don't see many "fair" benchmarks about this, but I guess it is probably difficult to properly benchmark module compilation, as it can depend on the case.
If modules can reach that sort of speedup consistently, it's obviously great news.
Dude…
> auto main() -> int
Isn't that declaring the return type twice, once as auto and the other as int?
There is, however, a return type auto-deduction in recent standards iirc, which is especially useful for lambdas.
https://en.cppreference.com/w/cpp/language/auto.html
auto f() -> int; // OK: f returns int
auto g() { return 0.0; } // OK since C++14: g returns double
auto h(); // OK since C++14: h’s return type will be deduced when it is defined
auto g() -> auto { return 0.0; }  // also valid: the trailing return type itself can be auto (deduced as double)
But on the flip side, there’s a theme of ignoring the actual state of the world to achieve the theoretical goals of the proposal when it suits. Modules are a perfect example of this - when I started programming professionally modules were the solution to compile times and to symbol visibility. Now that they’re here they are neither. But we got modules on paper. The version that was standardised refused to accept the existence of the toolchains and build tools that exist, and as such refused to place any constraints that might make implementation viable or easier.
At the same time we can’t standardise #pragma once because some compiler may treat network shares or symlinks differently.
There’s a clear indication that the committee don’t want to address this; epochs are a solution that has been rejected. It’s clear the only real plan is to shove awkward functional features into libraries using operator overloads - just like we all gave out to Qt for doing 30 years ago. But at least it’s standardised this time?
they never were in C++.