I think that implementations trying out their own experimental features is normal and expected. Ideally, standards would be pull-based instead of push-based.
The real question is what prevented this feature from being proposed to the standardization committee.
Stroustrup has one vote; not everything he advocates for wins, including having a saner C++ (remember the Vasa paper).
Citation needed.
For starters, where is the paper?
[0] https://open-std.org/JTC1/SC22/WG21/docs/papers/2018/p0977r0...
Also, WG14 famously killed Dennis Ritchie's proposal to add fat pointers to C.
Language authors only have symbolic value once they relinquish control to a standards body.
From reading about contracts for C, I assumed it would be like what cake [1] does, which actually enforces pointer (non-)nullability at compile time, as well as resource ownership and a bunch of other stuff. Very cool project, check it out if you haven't seen it yet :)
Let's say you have a header lib.h:
inline int foo(int i) {
    assert(i > 0);
    // ...
}
In C, this function is unspecified behavior that will probably work if the compiler is remotely sane. In C++, including this header in two translation units that set the NDEBUG flag differently creates an ODR violation.

The C++ solution to this problem was a system where each translation unit enforces its own pre- and postconditions (potentially 4x evaluations), and contracts act as carefully crafted exceptions to the vast number of complicated rules added on top. An example is how observable behavior is a workaround for C++ refusing to adopt C's fix for time-traveling UB. Lisa Lippincott did a great talk on this last year: https://youtu.be/yhhSW-FSWkE
There's not much left of Contracts once you strip away the stuff that doesn't make sense in C. I don't think you'd miss anything by simply adding hygienic macros to assert.h, as the author here does, except for the 4x caller/callee verification overhead that they enforce manually. I don't think that should be enforced in the standard, though. I find hidden multiple evaluation wildly unintuitive, especially if some silly programmer accidentally writes an effectful condition.
I think these conditions should be part of the type signature, in contrast to what was suggested in the otherwise good talk you cited.
This is a huge challenge for a C-like language with pervasive global state. Might be more feasible for something like Rust, but still very difficult.
Given the examples, the author wants to ensure that 0 is not a possible input value, and NULL is not a possible output value.
This could be achieved with a simple inline wrapper function that checks pre and post conditions and does abort() accordingly, without all of this extra ceremony
But regardless of the mechanism, you're left with another, far more serious problem: you've now introduced `panic` to C.
And panics are bad. Panics are landmines just waiting for some unfortunate circumstance to crash your app unexpectedly, which you can't control because control over error handling has now been wrested from you.
It's why unwrap() in Rust is a terrible idea.
It's why golang's bifurcated error mechanisms are a mess (and why, surprise surprise, the recommendation is to never use panic).
A lot of important programs (like the Linux kernel) don't operate strictly on the exact letter of the standard's UB semantics. They do things like add compiler flags to specify certain behaviors, or assume implementation details.
If you think dealing with undefined behavior is easy, and you take it as fair game to assume that people have verified their software triggers no undefined behavior at runtime, then you should grant the same assumption to Rust developers having done the same with their panics, because avoiding panics is child's play compared to avoiding UB.
I don't know what it is about panics that triggers some mania in people. UB does not interrupt the program, and therefore allows memory corruption and complete takeover of the program and the entire system as a consequence. C developers are like "this is fine", while sitting in a house that is burning down.
There used to be a pretty blatant hibernation bug with AMD GPUs on Linux that would essentially crash your desktop session upon waking your computer from hibernation. I've also had a wifi driver segfault on login, about 9 years ago, that forcibly logged you out so you couldn't log in. C doesn't magically fix these problems by not having an explicit concept of panics. You still need to write software that is correct and doesn't crash before you push an update.
There is no meaningful difference between a correctness bug and a panic-triggering condition, except that the panic forces you to acknowledge the error during development, meaning the correctness bug is more likely to be caught in the first place.
But these contracts don't make things better.
Now you're removing control from the user. So now if an allocation fails, you crash. No way to recover from it. No getting an error signal back (NULL) so that you can say "OK, I need to clear up some memory and then try again". (Note that I'm not saying that inline error signaling such as NULL is good design - it's not).
Nope. No error handling. No recovery. You crash. And ain't nothing you can do about it.
That's just bad design on top of the existing bad design. Things that crash your app are bad. No need to add even more.
Do I want my app to crash at all? No, of course not. If it's crashing, there's a serious bug. At least now I know where to look for it.
Should we pass back up an error signal instead of crashing? Yes, if at all possible, do that instead. Sometimes it's not possible, or not worth the hassle for something you're 99.99999% sure can't/won't happen. Or it literally can't currently happen, but you're afraid someone on the project might do a bad refactor at some point 5 years down the road, and you want to guard against some weird invariant being broken.
Having the stdlib expose both ways means either having two stdlibs (like on MS Windows) or dynamic checks in the stdlib. I don't think either is a good idea. Thus, these contracts can't be used by libc.
f(int n, int a[n])
I just want that to actually do what it looks like it does. Sigh.
What I want is -fbounds-safety from clang.
https://en.cppreference.com/w/cpp/container/span.html
Or if you want a multidimensional span, there is std::mdspan (C++23).
While there was no reason not to have .at(), the lack of bounds checks by default isn't a bad thing, as inlined bounds checks can heavily pessimize code (especially in loops); also, standard library hardening is a thing.
IMO there's much more value to be had in migrating C code (and pre-C++11 code, too) to C++ (or Rust, depending on one's tastes); RAII - that is to say, the ability to automatically run destructors on scope exit - and in particular shared_ptr/unique_ptr/bespoke intrusive pointers drastically reduce the risk of use-after-free.
This way the indexing operation itself doesn't need to have bounds checks and it's easier for the compiler to optimize out the checks or for an "unchecked" section to be requested by the programmer.
#define contract_assume(COND, ...) do { if (!(COND)) unreachable(); } while (false)
But this means that the compiler is allowed to, e.g., reorder the condition check and never output the message (or invoke nasal demons, of course).

This doesn't make much sense. I get that you want the compiler to maybe do nothing different, or panic, after the assertion failed, but the notion of "after" doesn't really exist with undefined behaviour. The whole program is simply invalid.
To the brain of a compiler writer, UB means "the standard doesn't specify what should happen, therefore I can optimize under the assumption that UB never happens." I disagree that this is how UB should be interpreted, but this fight is long lost.
With that interpretation of UB, all `unreachable()` means is that the compiler is allowed to optimize as if this point in the code will never be reached. The unreachable macro is standard in C23, but all major compilers provide a way to do it for all versions of the language.
So a statement like `if (x > 3) unreachable()` serves both as documentation of the accepted values and as a constraint the optimizer can understand - if x is an unsigned int, it will optimize with the assumption that the only possible values are 0, 1, 2, and 3.
Of course, in a debug build a sane compiler would have `unreachable()` trigger an assertion failure, but they're not required to, and in release builds they most definitely won't, so you can't rely on it as a runtime check.
Exactly. But we already have unreachable and assert. The whole point of contracts is that they are checked by the compiler (when whoever invokes the compiler asks for it).
Having the contract invoke UB in the fail case means that instead of replacing the error return with a diagnostic provable by the compiler, you replace the error return with potential corruption. In which case is that ever the right choice?
I cannot, in good conscience, use a technology that adds even more undefined behavior. Instead it reinforces my drive to avoid C whenever I can and use OCaml or Rust instead.
It's also a good thing to tell the compiler that the programmer intends for this case to never happen, so that the static analyzer can point out paths through the code where it actually does.
I find the general concept incredibly useful, and apply it in the more general sense to my own code, but there's always a bit of "what do I actually want contracts to mean / do here" back-and-forth before they're useful.
PS: I do like how D does contracts; though I admit I haven't used D much yet, to my great regret, so I can't offer my experience of how well contracts actually work in D.
A good contract system may in fact rely on type-safety as part of its implementation, but types do not necessarily cover all aspects of contracts (unless you're referring to the full gamut of theoretical generalisations of things like dependent types, substructural types, constraint logic programming, etc), and are also not necessarily limited to things that only apply at compile-time.
__d•11h ago
But if I want to use Eiffel, I’ll use Eiffel (or Sather).
I’d rather C remained C.
Maybe that’s just me?
Your question can be reflected back at you: if you want an ever-changing language, go to Java, C# or C++; why mess with C?
xboxnolifes•7h ago
The same thing can be said for every other language, yet they change.
nananana9•8h ago
Most C developers don't want a modern C, they want a reliable C. WG14 should be pushing for clarifications on UB, the memory and threading model, documenting where implementations differ, and which parts of the language can be relied on and which cannot.
Nobody really needs a new way to do asserts, case ranges, or a new way to write the word "NULL".
pjmlp•5h ago
As someone that remembers the t-shirts with "my compiler compiles yours" that some C folks used to wear, it is kind of ironic having that turned around on them.
motorest•6h ago
I think this talk about "complexity" is a red herring. C++ remains one of the most popular languages ever designed, and one of the key reasons is that since C++11 the standardization effort picked up steam and started including features that the developer community wanted and was eager to get.
I still recall the time that randos criticized C++ for being a dead language and being too minimalistic and spartan.
> C++ has 3 reasonable implementations, C has hundreds, for all sorts of platforms, where you don't get anything else.
I don't understand what point you are trying to make. Go through the list of the most popular programming languages, and perhaps half of them are languages which only have a single implementation. What compelled you to criticize C++ for having at least 3 production-quality implementations?
> Most C developers don't want a modern C, they want a reliable C.
You speak only for yourself. Your personal opinion is based on survivorship bias.
I can tell you that, as a matter of fact, a key reason the likes of Rust took off was that people working on low-level systems programming were desperate for a C with a better developer experience and a sane, usable standard library.
> Nobody really needs a new way to do asserts, case ranges, or a new way to write the word "NULL".
Again, you speak for yourself, and yourself alone. You believe you don't need new features. That's fine. But you speak for yourself.
pjmlp•5h ago
Editions are rather limited in what they support.
Try to design a crate that stays compatible across editions, while using libraries that have changed signatures across editions.
The crate itself keeps its own edition fixed.
babaceca•5h ago
The "most popular programming languages" are irrelevant here.
C and C++ are standardized languages, and also the tools we use for code that actually matters. A standard that can't be implemented is worthless, and even the "3 high quality" implementations of C/C++ haven't fully implemented the latest 2 editions of either language.
There's a lot more riding on these two languages than you give credit for, and they should be held to a higher standard. C is not the language to experiment with shiny new features, it's the language that works.
> I can tell you that as a matter of fact a key reason why the likes of Rust took off
So what's the problem? If Rust is gaining traction on C/C++, and people are excited about what it brings to the table, use it. We'll both do our thing, let it play out - we'll see which approach yields better software in 10 years.
motorest•2h ago
I think this belief is based on faulty assumptions, such as survivorship bias.
C++ became popular largely because it started off by extending C with the introduction of important features that the developer community wanted to use. The popularity of C++ over C attests how much developers wanted to add features to C.
C++ also started being used over C in domains where it was not an excellent fit, such as embedded programming, because the C community preferred to deal with C++'s higher cognitive load as an acceptable tradeoff to leverage important features missing from C.
The success of projects such as Rust, and even Zig and Nim, also stems from C's inability to improve the developer experience.
Not to mention the fact that some projects are still developed in C because of a mix of inertia and lack of framework support.
So to claim that C programmers do not want change, you first need to ignore the vast majority who did want it but already dropped C in favor of languages that weren't frozen in time.
It's also unbelievable to claim that a language that predates the concept of developer experience represents the apex of language design. This belief lands somewhere between Stockholm syndrome and being mentally constrained to not look beyond a tool.
babaceca•2h ago
Good, we can ignore them. It's not a language for everybody, and if you're happily using C++, or Zig, or Nim, keep doing that.
Developer experience is a weighted sum of many variables. For you, cool syntax features may play a huge role; for most C programmers, a simple language with clear and understandable semantics is much more important.
There are many languages with cool syntax and shiny features, and very few simple ones with clear semantics. C belongs to the latter group, and it also happens to be running the vast majority of the world's most important software.
You keep bringing up Rust as an example. It's probably the most famous of the new-age systems languages. If it's such a great language, when will we see a useful program written in it?
pjmlp•5h ago
There is hardly any C compiler worth using that isn't equally a C++ compiler.
In fact, is there any C compiler left worth using that hasn't been rewritten in C++?
pjmlp•5h ago
C especially was designed with lots of security defects, and had it not been for UNIX being available for free, it would probably never have taken off.