I can no longer recall whether the original article on the matter appeared in The C/C++ Users Journal or Dr. Dobb's.
Eventually it started to get abused, and its Turing completeness was discovered.
Since C++11, the language has been steadily improving toward a saner way to do metaprogramming. Instead of the old clunky ways - tag dispatch, ADL, and SFINAE - we can make use of concepts, if constexpr, consteval, constinit, type traits, and eventually reflection.
http://web.archive.org/web/20131101122512/http://ubietylab.n...
Via "Accidentally Turing-Complete":
Computing things using templates is not intuitive. Many of us have gotten used to it - but that's because that's all we had for many years. It's a different sub-language within C++. As constexpr capabilities widen, we can avoid "tortured" templates and can just write our compile-time checks and computations in plain C++ - more or less.
Sometimes, enhanced language features in C++ allow us to actually throw away and forget about other existing features, or at least complex and brittle idioms built on existing features. Like SFINAE :-)
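To make it concrete (my own minimal sketch, not from TFA): the enable_if dance that used to gate an overload becomes a one-line concept constraint:

    #include <concepts>
    #include <type_traits>

    // The old clunky way: SFINAE silently removes this overload
    // from consideration unless T is an integer type.
    template <class T,
              typename std::enable_if<std::is_integral<T>::value, int>::type = 0>
    constexpr T twice_old(T x) { return x * 2; }

    // The C++20 way: the requirement is stated directly.
    template <std::integral T>
    constexpr T twice(T x) { return x * 2; }

    static_assert(twice(21) == 42);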
With C++23 it can be made relatively readable, and with the right compiler it builds within a sensible timeframe.
I've seen the vast majority of build time in a very large C++23 project be taken up by the compile-time reflection in fmtlib and magic_enum, because both have to emulate it with templates (I think).
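For context, a hedged sketch of the technique (not the libraries' actual code): this style of enum reflection instantiates a function template per candidate value and parses the compiler-generated signature string, which is where the template time goes:

    #include <string_view>

    // The compiler embeds the template argument in the function's
    // signature string; the value's name can be parsed back out of it.
    template <auto V>
    constexpr std::string_view raw_name() {
    #if defined(__GNUC__) || defined(__clang__)
        return __PRETTY_FUNCTION__;
    #else
        return __FUNCSIG__;   // MSVC
    #endif
    }

    enum class Fruit { Apple, Banana };
    // The signature contains "Fruit::Apple" somewhere inside:
    static_assert(raw_name<Fruit::Apple>().find("Apple")
                  != std::string_view::npos);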
The only reason I didn't write more stuff in D was that the stack traces from my programs were pretty much useless. Maybe I was supposed to set a --better-stack-traces flag when I compiled it or something idk.
[1] One of the algorithms used by https://github.com/TOGoS/PicGrid
Scala gets pretty close to LISP-level of metaprogramming support between its intrinsic support for macros[0] (not to be confused with the C/C++ preprocessor of the same name), the Scalameta project[1], and libraries such as Shapeless[2].
Not comparing Scala to D, just identifying a language with similar functionality.
0 - https://docs.scala-lang.org/scala3/reference/metaprogramming...
That being said, I can't think of many scenarios in which an application whose user code is all `@nogc` would be hindered by the occasional GC'd stdlib method.
One standout example of viability is the "dplug" library for realtime audio processing plugins, and the commercial AuburnSounds VSTs written by the author.
That's one of the craziest things I've heard here. These two languages sit at opposite ends of abstraction.
In D, you can write Python-style slop code and/or C#-style slop code. Two very different styles of slop code, both of which can be written in D. Or even mixed and matched.
Gives what you wished for. It's functional, though (among other things). Unlike most lisps, (dependently) typed. But hey, available at compile-time.
I'm actually looking forward to the related reflection features that I think are currently in scope for C++26. I've run into a number of places where the combination of reflection and constexpr could be really valuable... the current workarounds often involving macros, runtime tricks, or both.
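The enum-to-string problem is typical; here's a hypothetical sketch (names made up) of the macro workaround that reflection plus constexpr should finally retire:

    // An X-macro keeps the enum and its string table in sync,
    // at the cost of preprocessor noise.
    #define COLOR_LIST(X) X(Red) X(Green) X(Blue)

    enum class Color {
    #define X(name) name,
        COLOR_LIST(X)
    #undef X
    };

    constexpr const char* to_string(Color c) {
        switch (c) {
    #define X(name) case Color::name: return #name;
        COLOR_LIST(X)
    #undef X
        }
        return "?";
    }
    static_assert(to_string(Color::Green)[0] == 'G');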
The core of reflection should be in C++26, yes. In my understanding, there's more to do after that as well. We'll see when the final meeting is done.
C++ is, or should be, like COBOL. A very important language because of the installed base. But why the continual enhancements? Surely there are better uses of all those resources?
Adding features to a language that is still actively used does not seem like a bad thing.
For example, a week back I lost a few hours finding a segfault bug in C++ code, which ended up being a trivial lifetime error: I used a reference after it was invalidated due to a std::vector resize.
These kinds of errors should be compile-time errors, rather than hard-to-trace runtime errors.
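A boiled-down version of that bug class (values made up, shape the same):

    #include <vector>

    int main() {
        std::vector<int> v = {1, 2, 3};
        int& first = v[0];   // reference into v's current heap buffer
        v.push_back(4);      // may reallocate: `first` now dangles
        return first;        // use-after-free; often "works", sometimes segfaults
    }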
How does your company go about changing to memory safety? Are new projects / libraries written in Rust for example? Do projects / libraries get (partially) rewritten?
Not exactly. There's a lot of C++ code that still can't be rewritten into cool languages overnight without risking correctness, performance and readability.
I'm always happy to see C++ pushing itself and the compiler backends as it benefits the victims of lame codebases and also the cool kids using the improved compiler backends.
As far as I can tell it requires GL or WebGL though.
QtSerialPort (https://github.com/search?q=include+QSerialPort&type=code)
Qt3D (https://github.com/search?q=include+Qt3D&type=code)
QtWebSockets (https://github.com/search?q=include+QWebSocket&type=code)
QtSensors (https://github.com/search?q=include+qtsensors&type=code)
Qt SCXML (https://github.com/search?q=include+qstatemachine&type=code)
etc etc..
Qt's value is as an app framework where you can assume interoperation across the same async runtime and enjoy a cohesive set of APIs for all your app's needs.
As a specific example, expanding constexpr means a codebase I recently worked on can move away from template metaprogramming magic to something that is more idiomatic. That means iterating on that code will be easier, faster, and less error-prone. I've already done static dispatch using constexpr and type traits that would have taken longer to do with templates.
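Roughly this shape (a sketch of the pattern, not the actual codebase): one template with if constexpr branches on type traits replaces a family of SFINAE-gated overloads:

    #include <string>
    #include <type_traits>

    template <class T>
    std::string describe(const T& value) {
        if constexpr (std::is_integral_v<T>) {
            return "integral: " + std::to_string(value);
        } else if constexpr (std::is_floating_point_v<T>) {
            return "floating point: " + std::to_string(value);
        } else {
            // discarded branches are never instantiated,
            // so this compiles for any T
            return "something else";
        }
    }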
Currently, constexpr programming requires you to know the specifics of what is supported; ideally you'd be able to infer that from first principles, from the invariants available at compile time. That leads to faster, more confident development.
It's a similar story for reflection: we were using custom scripts and soon won't have to. The changes usually come out of problems people are already finding solutions for in the real world, rather than gilding the lily.
This doesn't make C++ dead, even if it is dead to them; see Aesop's fable about the Fox and the Grapes for an in-depth analysis.
Interesting that you mention EDG, as it is now famously known as the root cause of why the Visual Studio editing experience lags behind cl.exe, pointing out errors in code that compiles just fine, especially when using anything related to C++20 modules.
Apparently, since modules support was introduced in VS 2019, there have been other priorities on their roadmap.
The problem is that with the new wind C++ got with C++11 - its role in GCC and LLVM, and in domains like HPC, HFT, and GPGPU - WG21 grew to some 300 members, everyone wanting to leave their name on a C++ standard, and many proposing features in PDF form only.
And since in ISO-driven languages what gets into the standard does so by votes, not by technical implementation merit, it is about running a proper campaign to get the base to vote for your feature, while being persistent enough to keep the process going. Some features have more than 20 revisions of their proposal.
I'm not saying that the existing situation is ideal and it's certainly not a dichotomy, but you have to consider the detriments as well as the benefits.
Then you have Stage 2, which would require an actual working implementation, so it can be properly tested and such. Acceptance in Stage 2 would result in the feature being in the standard.
Also, I should add, this is no different in wannabe C and C++ replacements, or most other programming languages with similar market share.
Go try to do a drive-by pull request for a new language feature in those ecosystems.
But this would do nothing to introduce safety-related features, which are still sorely missing after 2+ decades. In light of upcoming regulation to exclude unsafe languages from certain new projects, maybe those features wouldn't be that unimportant after all.
When I was in college in the 90s, templates were new and had iffy support. In the 2000s, STL was still problematic. I remember the Microsoft Compiler spitting out error messages on STL types that were so long the compiler couldn’t handle the string length and truncated the error message so you couldn’t see what the problem was. In the late 00s and early 10s I worked in games, we wanted stability and for the code to be as non-tricky as possible (for example by banning use of exceptions, STL containers, and most casual heap allocation, among other things.) So we used older compilers by choice. My current job publishes an SDK and APIs that people have been using for many years, they need to be backward compatible and we can’t impose recent language features on our users because not everyone uses them, we have to stick to features that compile everywhere, so I still see lots of C++{11,14,17}.
I think what annoys me most is when a standard is implemented except for an important feature. E.g. ranges was unusable for ages on Clang. And modules is still not quite there. I want to use the latest standard to get these cool features so I can write code like it's 2025, not 1980. If the biggest features are unimplemented then why upgrade? My concern is the trend continues, and we get cool things like reflection, but it takes 10 years to be implemented properly.
Wow, the list of papers for Modules is large. I can understand why it is taking so long to implement. Also, there need to be "fingers to do the typing". I assume most of the commercial devs (Google, Apple) just don't think modules are important enough. It looks like Concepts also took a very long time to be fully implemented. Hell, that feature was discussed for 10 years (a dream of Bjarne) before being approved into a standard... then maybe 5 years to impl!
Uniform function call syntax and a `restrict` mechanism have not made it into the standard after so many years.
Half of the new features feel like "how to make the STL implementation less embarrassing".
Meanwhile there still is no language support for e.g. debugging constexpr, or printing internal private state of objects in 3rd party code.
That is more a dynamic monkey patching programming language kind of thing.
At no point were lambdas limited to a single return statement. You might be confusing it with some other language feature. Or language.
The "STL" part of the standard library - containers especially but not just that - has an outdated interface, and suffers from ABI being stuck:
https://cor3ntin.github.io/posts/abi/
so it's embarrassing regardless.
> there still is no language support for e.g. debugging constexpr, or printing internal private state of objects in 3rd party code.
Actually, reflection might make it easier to do that. Supposedly, you should be able to get a member pointer to the private member you're interested in (or even do it dynamically by iterating over all members, and figuring out which one you like), from that and an actual object obtain a regular pointer, and finally dereference it.
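Until then, the standards-legal (if ugly) way relies on explicit instantiation ignoring access control - a sketch, with made-up names:

    struct Widget {
    private:
        int secret = 42;
    };

    int Widget::* stolen = nullptr;   // global stash for the member pointer

    // Explicit instantiations are allowed to name private members,
    // so smuggle the pointer out through one:
    template <int Widget::* P>
    struct Thief {
        static inline const bool stash = (stolen = P, true);
    };
    template struct Thief<&Widget::secret>;   // access check bypassed here

    int peek(const Widget& w) { return w.*stolen; }   // reads 42

Reflection should give you the same member pointer without the contortions.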
psyclobe•2mo ago
It’s too bad you still can’t cast a char to a uint8_t though in a constexpr expression.
vinkelhake•2mo ago
Uh, what? That has worked fine since the introduction of constexpr in C++11.
psyclobe•2mo ago
Ah that’s what bitcast is for, neat!
TuxSH•2mo ago
Here the `constexpr` keyword means the function may be called in a constant-evaluated context. f doesn't need to have all of its statements be evaluable at compile time; only those which are actually executed are checked. You need to initialize a constexpr variable with the result to actually test this.
cppreference is very clear about this, regarding bit_cast: https://en.cppreference.com/w/cpp/numeric/bit_cast
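To make the distinction concrete (my example, not from the docs): plain value conversions were always fine in constant expressions; it's type-punning that needed C++20's std::bit_cast, since reinterpret_cast remains banned at compile time:

    #include <bit>
    #include <cstdint>

    // Integral conversion: OK in constexpr since C++11
    // (0x41 assumes an ASCII-compatible character set).
    constexpr auto c = static_cast<std::uint8_t>('A');
    static_assert(c == 0x41);

    // Type-punning: constexpr-legal via std::bit_cast (C++20).
    constexpr auto bits = std::bit_cast<std::uint32_t>(1.0f);
    static_assert(bits == 0x3F800000);   // IEEE-754 bits of 1.0f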
TuxSH•2mo ago
> The consteval specifier declares a function or function template to be an immediate function, that is, every potentially-evaluated call to the function must (directly or indirectly) produce a compile time constant expression.
It's possible that the compiler just doesn't bother as long as you aren't actually calling the function.
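A quick sketch of the difference:

    consteval int sq(int n) { return n * n; }   // immediate function

    constexpr int nine = sq(3);   // OK: argument is a compile-time constant

    int at_runtime(int x) {
        // return sq(x);   // error: x is not a constant expression
        return nine + x;   // a plain constexpr function with the same body
                           // would compile here, falling back to runtime
    }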
psyclobe•2mo ago
Could be wrong, I am no expert…
pjmlp•2mo ago
What is possible in constexpr contexts has been improving with each revision; the end goal is eventually to support the whole language.
Naturally, introducing everything at once would be too hard in a language with as big an ecosystem as C++.
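For example (a sketch; needs a C++20 standard library with constexpr containers): dynamic allocation is now allowed during constant evaluation, as long as the memory is freed before evaluation ends:

    #include <vector>

    constexpr int sum_to(int n) {
        std::vector<int> v;                // compile-time heap allocation (C++20)
        for (int i = 1; i <= n; ++i)
            v.push_back(i);
        int s = 0;
        for (int x : v)
            s += x;
        return s;                          // v is destroyed before evaluation ends
    }
    static_assert(sum_to(10) == 55);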