Every project must colonize a valley of the language, declare a dialect, and bit-fiddle its own thing.
It might be a measure of popularity, but not of unity.
When projects choose a subset of language features, it's dictated by their needs (like embedded programs disabling the standard library, or safety-critical libraries forbidding "unsafe" code out of caution). There are some people who vocally hate async, but their complaint is usually that everyone uses async even where it's unnecessary (meaning that it actually has very broad adoption).
This feels very different from having an unwanted C subset, plus '98 features that were replaced in '11 and '14, with fixes for them in '20 and '26, and then projects taking years to settle on a new baseline while still bickering over whether exceptions are allowed.
Rust has "editions" that let new projects disable old misfeatures (of which it doesn't have many yet). The Rust ecosystem is fully on board with the latest version.
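For context, a minimal Cargo.toml sketch (crate name is made up): each crate opts into an edition per-package, so old crates keep compiling under their old rules while new crates get the new defaults.

```toml
[package]
name = "example"     # hypothetical crate name
version = "0.1.0"
edition = "2021"     # e.g. the 2018 edition changed module path rules,
                     # 2021 changed closure capture rules; a crate can
                     # stay on an older edition indefinitely
```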
Lambdas are nice to have, just don’t nest them more than once.
I kinda wish things like std::variant had shorter syntax.
if anything i’m not a fan of c++ introducing language features as long, verbose functions rather than confidently making them an operator or a keyword.
Javascript is even more dramatic, where the tooling will tell you to fix every single variable declaration, since people decided "var" was a mistake, and there is a whole new way of defining classes.
Neither of which are great measures probably. What about usefulness?
this is really not true in my experience. I don't remember the last time I worked on a project which outright banned specific C++ features or had a "dialect".
However, it would be imperative for a push such as Carbon[1] to be to C++ what Kotlin is to Java: a modernisation that simplifies, maintains backwards/forwards compatibility, and reuses established libraries and tooling.
This however will need an entity that can champion and push it forward, with a strong enough project to anchor it in the mainstream. The transitions are doable, like Android development moving from plain Java to Kotlin, or OS X moving from Objective-C to Swift.
Additionally, it should borrow a robust batteries-included standard library to reduce the sprawl of coding options and funnel greenfield projects toward best practices and less boilerplate.
[1] https://www.infoworld.com/article/2337225/beyond-c-the-promi...
- The current CPP version is extremely bloated
- CPP is not going away anytime soon
- The rise of Rust/Go/Zig is not fighting for CPP's seat
- You can target CPP code using any of these aforementioned languages
- Rust has never claimed to be "safer", it just makes it harder to write unsafe code
Of course they are. Go less so, and Zig is really aiming for C. Rust is definitely meant to be a better alternative to C++.
> Rust has never claimed to be "safer"...
What? Of course it has (or Rust developers have; a language can't claim anything). And it is much safer.
I lack a degree though
I get regularly contacted by them, but they don't hire me
I take a very different view about the trajectory of languages given the current trends in software development. The more people rely upon agentic coding processes, the more they will demand faster compilation, which will increasingly become a significant bottleneck on product velocity. The faster the LLMs get, the more important it is for the tools they use to be fast. Right now, I still think we are in an uncanny valley where LLMs are still slow enough that slow tooling does not seem that bad, but this is likely to change. People will no longer be satisfied asking their agent to make a change and come back in a minute or an hour. They will expect the result nearly instantaneously. C++ (and Rust) compile times are too slow for the agent to iterate in the human reaction window, so I believe that one of two things will happen over the next few years: LLM progress will stall out, or C++ and Rust will nosedive in popularity.
Every language popular enough is like that.
That's not really plausible. Unfortunately this is all you get on the methodology:
> Our methodology is based on two main pillars. First, we make use of reliable sources of developer numbers or direct indicators of their activity. This includes the number of GitHub accounts, Stack Overflow accounts, and employment statistics from the USA and the European Union. Second, we rely on our Global Developer Survey data, where we directly measure developer activity. So far, we've run 29 waves of this large survey, and in each, we reach more than 10,000 developers globally. We combine these two main sources to derive our estimates.
Plenty of space for them to screw up I think.
with the caveat that i know it could be better. at this point i just think it's simpler than some of the stuff out there from a 'what's happening underneath the hood' perspective
RadiozRadioz•1mo ago
Projecting into the future, hardware expenses have always been dwarfed by salaries. I don't expect that will change enough for it to be noticeable.
estimator7292•1mo ago
Until current computers cycle out, people will largely keep their 1-3 year old machine with sane amounts of memory. If we start seeing large numbers of machines in the wild with 4GB of memory, then maybe software will adapt. But that won't be for several years yet.