Because it's not. The whole point of C is that you know exactly what's going on, and it's relatively clear in the code itself. C++ hides logic in abstractions for the sake of convenience. This is a C++ thing. How does it know how to iterate? Is it moving pointers, or indexing, or what? Not only is it hiding logic, it also prevents me from modifying that logic. I could easily change a C for loop to use i += 2 instead of i++ if I wanted; that's the beauty of it. With this, I have to read some docs first to see how their abstraction works, and then hope it allows me to modify its behavior the way I need.
[1] https://c3-lang.org/implementation-details/specification/#fo...
If you're not doing something for each element of a collection, you should not be using a `foreach` loop. In exchange for not exposing the implementation, you immediately know the behavior. You also don't have to worry about checking the rest of the loop body for later mutations.
Given the widespread undefined behavior and the ways that compilers aggressively rely on that to reorganize and optimize your code, that hasn't been the case for many many years.
Sure, if you're using dmr's compiler on a PDP-11, then C is a pretty transparent layer over assembly, which is itself a fairly thin layer over the CPU. But today, C is an ambiguous high level communication language for a highly optimizing compiler which in turn produces output consumed by deep pipeline CPUs that freely reschedule the generated instructions.
I guess it probably depends on why a user might want to think of C as low level. The user visible semantics shouldn't change, I hope, but the performance might.
Interviews:
- https://www.youtube.com/watch?v=UC8VDRJqXfc
- https://www.youtube.com/watch?v=9rS8MVZH-vA
Here is a series doing various tasks in C3:
- https://ebn.codeberg.page/programming/c3/c3-file-io/
Some projects:
- Gameboy emulator https://github.com/OdnetninI/Gameboy-Emulator/
- RISCV Bare metal Hello World: https://www.youtube.com/watch?v=0iAJxx6Ok4E
- "Depths of Daemonheim" roguelike https://github.com/TechnicalFowl/7DRL-2025
Tsoding's "first impression" of C3 stream:
Edit: ABI compatibility & two way interop with C seems to be a pretty big selling point!
Take into consideration that most Rust programmers rely on https://docs.rs etc rather than Googling something.
https://www.tiobe.com/tiobe-index/programminglanguages_defin...
That seems like it's a potentially interesting signal, but the index implies that it is about adoption.
Looking at the index, it seems like Python has a 2.5x higher rating than C and Java. While I assume that Python is a widely adopted language, this feels wrong in many ways.
But given that they just look at search engine stats, one can explain the higher rating, because Python is often used by novice programmers and tech workers who are not primarily programmers/SWEs.
Python jobs: 0.9
Seems about right, maybe.
Except there's no way PHP (0.09), Ruby (0.07) and Go (0.1) are on the same magnitude as Rust jobs.
So this site doesn't pass the sniff test for me.
The comparison shouldn't be with C, it should be with C++, Rust, or Zig.
The place to go is actually the C3 comparison page:
https://c3-lang.org/faq/compare-languages/
There you can see that there are very few items "in C3 but not Rust", for example. Mainly "it's a familiar C-like language".
I am also suspicious of the macro system. I'd like more of an explanation of how it works. Especially how it relates to Zig comptime, and whether it has "hygiene" problems. Hygiene to me means: can a variable name in a macro expansion refer to a variable outside of the macro? (The concern is that this could be accidental.)
The macro cannot insert variables into the caller scope, nor cause the function to return. Mostly it's similar to a static inline function with optionally polymorphic arguments. It can do some more things as well, but nothing violating hygiene.
There is a space for a C alternative, and Rust ain’t it.
Rust is solving a different problem, that of safety above all else. C3, on the other hand, is more about developer experience above all else.
If you find something that should be easier to do in C3, that's a bug.
C3 is more complex than C (because of a net increase of features), but it's miles from C++ and Rust in complexity and it compiles as fast or faster than C.
So they should be the same, otherwise it's a bug.
And yet out of all these newer C-like languages, it looks like Hare probably takes the crown for simplicity. Among other things, Hare uses QBE[1] as a backend compiler, which is about 10% the complexity of LLVM.
Plus the "frontend -> QBE -> assembler -> binary" process is slower than "frontend -> LLVM -> binary". And LLVM is known for being a fairly slow compiler.
Would be nice to have a list of these and comparisons
C23 got typeof, constexpr constants, enums with underlying type, embed, auto, _BitInt, checked integers, new struct compatibility rules, bit constants, nullptr, initialization with {}, and various other improvements and cleanups. Modern C code - while still being simple - can look quite different than what people might be used to.
C2Y already got named loops, countof, if with declarations, case range expressions, _Generic with type arguments, and quite a lot of UB removed from the core language. (Our aim is also to have a memory safe subset.)
But one thing's for sure ... there's just not a lot of sample Zig code out there. Granted, it's simpler than Rust, but your average AI tool doesn't get how to write idiomatic Zig. Whereas most AI tools seem to get Rust code okay. Maybe idiomatic Zig just isn't a thing yet. Or maybe idiomatic Zig is just like idiomatic C ... in the eye of the beholder.
Unfortunately adopting modern tooling is always a quixotic battle, even when it comes for free on modern FOSS compilers.
C++ is fine, but it's insanely slow to compile.
I generally like C++, but I would trade almost anything to make it faster to compile, and most of the time I just use a small subset of C++ that I feel okay with.
Personally, when I initially learned C++ back in 1993, with Turbo C++ 1.0 for MS-DOS, I hardly saw a reason to further use C instead of C++, other than being required to do so.
There are definitely advantages to simpler tools: you can streamline development and make people more productive quicker. Compare that scenario to C++, where you first have to agree on the features you're allowing and then have to police that subset on every PR.
A bit down the page there is stuff on the case syntax. "You can't have an empty break" is a good choice, but the fact that having two cases do the same thing has the syntax

    case X:
    case Y:

is a footgun waiting to happen. I would strongly suggest the authors of C3 make stacking cases look like this: case X, Y:
    switch (x) {
        case 0:
            ...
        case 1 + 1:
            ...
    }
This will behave in the normal way. But you can also have:

    switch {
        case foo() > 0:
            ...
        case bar() + baz() == s:
            ...
    }
In which case it lowers to the corresponding if-else.

    case SOME_BAD_THING, SOME_OTHER_CONDITION, HERE_IS_NUMBER_THREE:
        foo();
        int y = baz();

Placing them on the next row is fairly hard to read:

    case SOME_BAD_THING, SOME_OTHER_CONDITION,
            HERE_IS_NUMBER_THREE, AND_NUMBER_FOUR, AND_NUMBER_FIVE,
            AND_THE_LAST_ONE:
        foo();
        int y = baz();
In C I regularly end up with lists that have 10+ fallthroughs like this, because I prefer complete switches over default for enums, at least:

    case SOME_BAD_THING:
    case SOME_OTHER_CONDITION:
    case HERE_IS_NUMBER_THREE:
    case AND_NUMBER_FOUR:
    case AND_NUMBER_FIVE:
    case AND_THE_LAST_ONE:
        foo();
        int y = baz();
I understand the desire to use "case X, Y:" instead, and I did consider it at length, but I found the lack of readability made it impossible. One trade-off would have been:

    case SOME_BAD_THING,
    case SOME_OTHER_CONDITION,
    case HERE_IS_NUMBER_THREE,
    case AND_NUMBER_FOUR,
    case AND_NUMBER_FIVE,
    case AND_THE_LAST_ONE:
        foo();
        int y = baz();

But it felt clearer to stick to C syntax, despite the inconsistency.

Frankly, that seems like a code smell, not a problem that needs a solution within the language.
    case 'a' .. 'z', 'A' .. 'Z', '0' .. '9', '_': ...;
although when working with enumerators, there is still a risk caused by the fact that re-ordering enumerators or adding new ones can break the switches. Despite the drawback, I prefer it. Also, a Range can be a formal expression, which simplifies the grammar of other sub-expressions and statements: not only switches but also array slices, tuple slices, foreach, literal bitsets, etc.
https://github.com/jckarter/clay/wiki/Clay-for-C---programme...
Overloading is also generally missing from today's breed of C alternatives.
There have certainly been many attempts at C alternatives: eC, Cyclone, etc.
defer is the kind of thing I would mock up in a hurry in my code if a language or framework lacked the proper facilities, but I think you are better served with the with statement in Python or automated resource management in Java.
Similarly I think people should get over Optional and Either and all of that, my experience is that it is a lot of work to use those tools properly. My first experience with C was circa 1985 when I was porting a terminal emulator for CP/M from Byte magazine to OS-9 on the TRS-80 Color Computer and it was pretty traumatic to see how about 10 lines of code on the happy path got bulked up to 50 lines of code that had error handling weaved all around it and through it. When I saw Java in '95 I was so delighted [1] to see a default unhappy path which could be modified with catch {} and fortified with finally {}.
It's cool to think Exceptions aren't cool but the only justification I see for that is that it can be a hassle to populate stack traces for debugging and yeah, back in the 1990s, Exceptions were one of the many things in the C++ spec that didn't actually work. Sure there are difficult problems with error handling such as errors don't respect your ideas of encapsulation [2] but those are rarely addressed by languages and frameworks even though they could be
https://gen5.info/q/2008/08/27/what-do-you-do-when-youve-cau...
Putting in ? or Optional and Either, though, is just rearranging the deck chairs on the Titanic.
[1] I know I'm weird. I squee when things are orderly, more people seem to squee when they see that Docker lets them run 5 versions of libc and 7 versions of Java and 15 versions of some library.
[2] These are places where the "desert of the real" intrudes on "the way things are spozed to be".
The try-catch has nice composability:

    try {
        int x = foo_may_fail();
        int y = bar_may_fail(x);
    } catch (...) {
        ...
    }
Regular Result types need to use flatmap for this, and of course error codes or multiple returns also struggle with this. With C3:

    int? x = foo_may_fail();
    int? y = bar_may_fail(x);
    if (catch err = y) {
        ...
        return;
    }
    // y is implicitly unwrapped to "int" here

This is not to say it would satisfy you. But just to illustrate that it's a novel approach that goes beyond Optional and Either and has a lot in common with try-catch.

https://c3-lang.org/faq/compare-languages/
One would argue that the best C/C++ alternative/evolution language to use would be D. D also has its own cross-platform GUI library and an IDE.
I wonder for which reasons D doesn't have a large base adoption.
1. It is so big.
2. It still largely depends on GC (less important actually)
It keeps adding features, but adding features isn't what makes a language worth using. In fact, that's one of the least attractive things about C++ as well.
So my guess:
1. It bet wrong on GC while trying to compete with C++.
2. After failing to get traction, kept adding features to it – which felt a bit like there was some feature that would finally be the killer feature of the language.
3. Not understanding that the added features actually made it less attractive.
4. C++ then left the GC track completely and became a more low-level alternative itself, at which point D ended up in a weird position: neither high level enough to feel like a high level alternative, nor low level enough to compete with C++.
5. Finally: the fact that it's been around for so long and never taking off makes it even harder for it to take off because it's seen as a has-been.
Maybe Walter Bright should create a curated version of D with only the best features. But given how long it takes to create a language and a mature stdlib, that's WAY easier said than done.
Even Andrei Alexandrescu eventually refocused on C++, and is contributing to some of the C++26 reflection papers.
I agree, and that applies to many software projects, and not just programming languages only.
>so there are quite a few half baked features by now
what are some of those half baked features?
Now there is a new GC being redesigned, and there are discussions about a possible Phobos V3.
Although, on the context of BetterC, there is the debate about having more regular features available in that mode as well.
I play with ImportC occasionally; a lot of those can actually be opted out of by undef'ing __GNUC__ on the preprocessor invocation, idk why they don't do that. Oh, now it chokes on C23 features as well, because the system cpp defines __STDC_VERSION__=202311L now. Edit: that was solved: dlang/dmd/pull/21372
[1]: Specifically: "The Software is copyrighted and comes with a single user license, and may not be redistributed. If you wish to obtain a redistribution license, please contact Digital Mars."
Many people consider that an anti-feature.
2 more wishes: add named parameters and structured concurrency and I think it would be a very cool language.
Named parameters are already in the language.
Regarding concurrency, I don't want to pick a single concurrency model over another. I will see what hooks I can make for userland additions, but the language will not be opinionated about concurrency.
But this was distracting:
> Macros are a bag of worms. Sure, they can be a great source of protein, but will you really see me eating them? I might use worms when I'm fishing, but I don't see much use for them around the home. To express my opinion outside of a metaphor: macros have niche use cases, are good at what they do, but shouldn't be abused. One example of this abuse would be making a turing-complete domain-specific language inside of some macro-supporting programming language.
90s_dev•20h ago
But having them be inside comments is just weird.
lerno•20h ago
1. You want almost all pointer parameters to be non-null.
2. Non-null variables are very hard to fit into a language without constructors.
Approaches that avoid constructors/destructors, such as ZII, play very poorly with ref values as well. What you end up with is some period of time where a value is quasi-valid: since non-null types need to be assigned, the value is in a broken state before its initial assignment.
It's certainly possible to create generic "type safe" non-null types in C3, but they are not baked into the language.
aidenn0•19h ago
I don't see that as a problem; don't separate declaration from assignment and it will never be unassigned. Then a ZII non-null pointer is always a compile-time error.
wavemode•17h ago
That's tricky when you want to write algorithms where you can start with an uninitialized object and are guaranteed to have initialized the object by the time the algorithm completes. (Simplest example - create an array B which contains the elements of array A in reverse order.)
You can either allow declaring B uninitialized (which can be a safety hazard) or force B to be given initial values for every element (which can be a big waste of time for large arrays).
lerno•17h ago
Otherwise it's quite straightforward that they have an uninitialized state (zero) and are then wired up when used. Trying to prevent null pointers here is something the program has to do. However, making the compiler guarantee it without requiring constructors is a challenge I don't know how to tackle.
aidenn0•17h ago
If you want to do that you can always use a nullable type. You can always assign it to a non-nullable type after initialization if you plan on using the aggregate a lot.
Usually you provide a vector type though, which has an underlying nullable array but maintains a fill-index such that for all i < fill-index the value is initialized. Then you have two indexing operations: one which returns a nullable type and one which bounds-checks and returns a non-nullable type.
lerno•14h ago
I think Rust and similar languages fill that niche already, so there is no real need to try to offer that type of alternative.
netbioserror•10h ago
The entire rest of the language is built on pass-by-value using stack values and stack-managed hidden unique pointers. You basically never actually need to use a ref or a pointer unless you're building an interface to a C or C++ library. I have written a 40k line production application with no reference or pointer types anywhere. Almost any case you'd need is covered by simply passing a compound type or dynamic container as a mutable value, where it's impossible to perform any kind of pointer or reference semantics on it. The lifetime is already managed, so semantically it's just a value.