From what I can tell, the only significant difference between C and Odin mentioned in the post is that Odin zero-initializes everything whereas C doesn't. This is a fundamental limitation of C, but you can alleviate the pain a bit by writing better primitives for yourself. I.e., you write your own allocators [1] and other fundamental APIs and make them zero-initialize everything.
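For instance, a minimal sketch of that idea in C (the wrapper name here is made up): route every allocation through one zeroing entry point and build the rest of your primitives on top of it.

    #include <stdlib.h>

    /* Hypothetical zeroing wrapper: everything built on top of it
       starts from zeroed memory, much like Odin's default. */
    void *xalloc(size_t size) {
        void *p = calloc(1, size);   /* calloc hands back zeroed memory */
        if (!p) abort();             /* or whatever failure policy you prefer */
        return p;
    }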
So one of the big issues with C is really just that the standard library is terrible (or, rather, terribly dated) and that there is no drop-in replacement (like in Odin or Rust where the standard library seems well-designed). I think if someone came along and wrote a new C library that incorporates these design trends for low-level languages, a lot of people would be pretty happy.
[1]: https://www.rfleury.com/p/untangling-lifetimes-the-arena-all...
I suppose glib comes the closest to this? At least the closest that actually sees fairly common usage.
I never used it myself though, as most of my C has been fairly small programs and I never wanted to bother people with the extra dependency.
Why has nobody come along and created an alternative standard library yet? I know this would break lots of things, but it’s not like you couldn’t transition a big ecosystem over a few decades. In the same time, entire new languages have appeared, so why is it that the C world seems to stay in a world of pain willingly?
Again, mind you, I’m watching from the outside, really just curious.
I tried to create my own alternative about a decade ago which eventually influenced my other endeavours.
But another big reason is that people use C and its stdlib because that's what it is. Even if it is bad, it's the "standard" and trivially available. Most code relies on it, even code that has its own standard library alternative.
Everybody has created their own standard library. Mine has been honed over a decade, why would I use somebody else's? And since it is designed for my use cases and taste, why would anyone use mine?
Because people are so terribly opinionated that the only common denominator is that the existing thing is bad. For every detail that somebody will argue a modern version should have, there will be somebody else arguing the exact opposite. Both will be highly opinionated and for each of them there is probably some scenario in which they are right.
So, the inability of the community to agree on what "good" even means, plus the extreme heterogeneity of the use cases for C, is probably the answer to your question.
Probably, IMO, because not enough people would agree on any particular secondary standard such that one would gain enough attention and traction¹ to be remotely considered standard. Everyone who already has their own alternatives (or just wrappers around the current stdlib) will most likely keep using them unless by happenstance the new secondary standard agrees closely with their local work (by definition, a standard needs to be at least somewhat opinionated).
Also, maintaining a standard, and a public implementation of it, could be a faffy and thankless task. I certainly wouldn't volunteer for that!
[Though I am also an outsider on the matter, so my thoughts/opinions don't have any particular significance, and an insider might come along and tell us that I'm barking up the wrong tree]
--------
[1] This sort of thing can happen, but is rare. jquery became an unofficial standard for DOM manipulation and related matters for quite a long time, to give one example - but the gulf between the official standard (and its bad common implementations) at the time and what libraries like jquery offered was much larger than the benefits a secondary C stdlib standard might give.
Instead, data needs to be viewed more abstractly. Yes, it will eventually manifest in memory as bytes in some memory cell, but how that's laid out and moved around is not the concern of you, the programmer, as a user of data types. Looking at some object attributes foo.a or foo.b is just that - the abstract access of some data. Whether a and b are adjacent in memory, whether they are even on the same machine, or whether they are even backed by data cells in some physical memory bank, should be immaterial. Yes, in some very specific (!) cases, optimizing for speed makes it necessary to care about locality, but for those cases, the language or library needs to provide mechanisms to specify those requirements and then lay things out accordingly. But it's not helpful if we all keep writing in some kind of glorified assembly language. It's 2025 and "data type" needs to mean something more abstract than "those bytes in this order laid out in memory like this", unless we are writing hand-optimized assembly code, which most of us never do.
> Yes, it will eventually manifest in memory as bytes in some memory cell...
So people view a program the way the computer actually deals with it? And the way they need to optimize for, since they are writing programs for that machine?
So what is an example of you abstraction that you are talking about? Is there a language that already exists that is closer to what you want? Otherwise you are talking vaguely and abstractly and it doesn't really help anyone understand your point of view.
And yes, I explicitly asked for a language: "Is there a language that already exists that is closer to what you want?", which means your reading comprehension isn't very high.
In your analogy, it's still extremely oversimplified, because what about a manual car, which is all I have ever driven? I don't have just acceleration and a brake, but also a clutch. I have many other things to deal with too. It's nowhere near as simple as you are making out, which kind of makes your analogy useless.
> And yes, I explicitly asked for a language: "Is there a language that already exists that is closer to what you want?", which means your reading comprehension isn't very high.
Really? Two insults packaged into two paragraphs? Was that really necessary? It's possible to discuss technical disagreements without insulting others.
I'm doing systems-level programming every day, some of it involves C. It provides me with the perspective from which I'm expressing my views. There are other views, thankfully, and a discussion allows us to highlight the differences and perhaps provide everybody with a learning opportunity. That's what I'm here for.
Obviously I saw that you asked for a language and I replied to that. I separated the concrete answer to avoid getting things mixed up with the more general point.
Your initial comment was effectively describing object-relational models for every expression, like where `a.b` might be some database query across the world and that "shouldn't matter to you". Saying we should get away from the model of programming that reflects the underlying hardware and do something more "abstract", while not being clear on what you mean by this, is all kind of insane.
And then the examples of languages you gave being Python (a high level interpreted language that is several orders of magnitude slower than any systems language) and "functional languages" which is still quite vague. If Python is close to what you want (and by that I mean the object-model, and not the declaration syntax), then it is not applicable to anything systems related.
> But throwing the "but performance" argument at [anything] that moves us beyond the 80s is really getting old.
And your knowledge of computers appears to be stuck in the 80s too. There is a reason people want what they want, and why the author of the article likes what Odin is offering. Systems-level programmers want the control to program effectively for the machine. And yes, "performance" is actually important, and sadly most programmers don't seem to care whatsoever. There is a reason everything is a web browser now; even the Windows 11 task bar is a web browser. Everything is many, many orders of magnitude slower than it needs to be, or even than it would be if naively implemented. Knowing how memory is laid out, how it is allocated, how it will be affected by cache lines, how to properly utilize SIMD, and so much more, is extremely important. None of which was even a concern in the 80s.
They never are.
Perhaps they were. Don't give them, even if they are warranted. See the site guidelines for why.
It's just very weird to see someone be very vague when questioned, and claim things which cannot be true based on what he's stated already in this comment chain.
DIDNT SHIFT UP
But there is a fine line between having a general understanding of the details of what's going on inside your system, and using that knowledge to do a lot of premature optimization and getting stuck in a corner that is hard to get out of. Large parts of our industry are in such a corner.
It's fun to nerd out about memory allocators, but that's not contributing to overall improvements of software engineering as a craft which is still too much ad hoc hacking and hoping for the best.
Actually I do, and I include the inertia and momentum of every piece of the drive-train as well, and the current location of the center of gravity. I'm thinking about all of these things through the next predicted 5 seconds or so at any given time. It comes naturally and subconsciously. To say nothing of how you really aren't going to be driving a standard transmission without that mental model.
Your analogy is appropriate for your standard American whose only experience with driving a car is the 20 minute commute to work in an automatic, and thus more like a hobbyist programmer or a sysadmin than someone whose actual craft is programming. Do you really think truckers don't know in their gut what their fuel burn rate is based on how far they've depressed the pedal?
There is no instead here. This is not a choice that has to be made once and for all and there is no correct way to view things.
Languages exist if you want to have a very abstract view of the data you are manipulating, and they come with toolchains and compilers that will turn that into a low-level representation.
That doesn’t preclude the interest of languages which expose this low level architecture.
That abstract view is obviously not an option for the low-level parts of the toolchain which are required to make very abstract languages work.
Not everybody needs to worry about L2 or L3 most of the time, but if you are using a systems-level programming language where it might be of concern to you at some point, it's extremely useful to be able to have that control.
> expecting every programmer to call malloc and free in the right order
The point of custom allocators is to not need to do the `malloc`/`free` style of memory allocation, and thus reduce the problems which that causes. And even if you do still need that style, Odin and many other languages offer features such as `defer` or even the memory tracking allocator to help you find the problems. Just like what was said in the article.
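As a rough sketch of what that looks like in plain C (my own minimal arena, not Odin's actual allocator API): you hand out memory from a block and release the whole block at once, so there is no per-object free to get wrong.

    #include <stddef.h>
    #include <string.h>

    typedef struct {
        unsigned char *base;
        size_t         used, cap;
    } Arena;

    void *arena_alloc(Arena *a, size_t size) {
        if (a->used + size > a->cap) return NULL;   /* or grow/abort */
        void *p = a->base + a->used;
        a->used += size;
        return memset(p, 0, size);                  /* hand out zeroed memory */
    }

    void arena_reset(Arena *a) { a->used = 0; }     /* one "free" for everything */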
GHC, which is without a doubt the smartest compiler you can get your grubby mitts on, is still an extremely stupid git that can't be trusted to do basic optimizations. Which is exactly why it exposes so many special intrinsic functions. The "sufficiently smart compiler" myth was thoroughly debunked over 20 years ago.
ECS is vastly superior as an abstraction to pretty much everything we had before in games. Tightly coupled inheritance chains of the 90s/2000s were minefields of bugs.
Of course perhaps not every type of app will have the same kind of goldilocks architecture, but I also doubt anyone will stumble into something like that unless they’re prioritizing it, like game programmers did.
But I agree that DOD in practice is not a compromise between performance and ergonomics, and Odin kind of shows how that is possible.
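As a tiny illustration of the layout difference DOD cares about (a sketch in C, nothing specific to Odin or any engine):

    /* Array-of-structs: each entity's fields sit together, so a pass that
       only reads positions still drags hp and flags through the cache. */
    typedef struct { float x, y; float hp; int flags; } Entity;
    Entity entities[1024];

    /* Struct-of-arrays: each field is contiguous, so the same pass
       streams through memory without wasting cache lines. */
    typedef struct {
        float x[1024], y[1024];
        float hp[1024];
        int   flags[1024];
    } Entities;
    Entities entities_soa;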
Cache is easily the most important consideration if you intend to go fast. The instructions are meaningless if they or their dependencies cannot physically reach the CPU in time.
The latency difference between L1/L2 and other layers of memory is quite abrupt. Keeping workloads in cache is often as simple as managing your own threads and tightly controlling when they yield to the operating system. Most languages provide some ability to align with this, even the high level ones.
data is bytes. period. your suggestion rests on someone else seeing how it is the case and dealing with it to provide you with ways of abstraction you want. but there is an infinity of possible abstractions – while virtual memory model is a single solid ground anyone can rest upon. you’re modeling your problems on a machine – have some respect for it.
in other words – most abstractions are a front-end to operations on bytes. it’s ok to have various designs, but making lower layers inaccessible is just sad.
i say it’s the opoposite – it’s 2025, we should stop stroking the imaginaries of the 80s and return to the actual. just invest in making it as ergonomic and nimble as possible.
i find it hard understand why some programmers are so intent on hiding from the space they inhabit.
I'm a game-play programmer and not really into memory management or complex math. I like things to be quick and easy to implement. My games are small. I have no need for custom allocators or SOA. All I want is a few thousand sprites at ~120fps. I normally just work in the browser with JS. I use Odin like it's a scripting language.
I really like the dumb stuff like... no semicolons at the end of lines, no parentheses around conditionals, the case statement doesn't need breaks, no need to write var or let, the basic iterators are nice. Having a built-in vector 2 type is really nice. Compiling my tiny programs is about as fast as refreshing a browser page.
I also really like C style procedural programing rather than object oriented code, but when you work in a language that most people use as OO, or the standard library is OO, your program will end up with mixed paradigms.
It's only been a few weeks, but I like Odin. It's like a statically typed and compiled scripting language.
So it strikes me that a new language may be the wrong approach to addressing C's issues. Can they truly not be addressed with C itself?
E.g., here's a list of some commonly mentioned issues:
* standard library is godawful, and composed almost entirely of foot guns. New languages fix this by providing new standard libraries. But that can be done just as well with C.
* lack of help with safety. The solutions people put forward generally involve some combination of static analysis disallowing potentially unsafe operations, runtime checks, and provided implementations of mechanisms around potentially unsafe operations (like allocators and slices). Is there any reason these cannot be done with C? (In fact, I know they all have been done; see the sketch after this list.)
* lack of various modern conveniences. I think there's two aspects of this. One is aesthetics -- people can feel that C code is inelegant or ugly. Since that's purely a matter of personal taste, we have to set that aside. The other is that C can often be pretty verbose. Although the syntax is terse, its low-level nature means that, in practice, you can end up writing a relatively large number of lines of code to do fairly simple things. C alternatives tend to provide syntax conveniences that streamline common & preferred patterns. But it strikes me that an advanced enough autocomplete would provide the same convenience (albeit without the terseness). We happen to have entered the age of advanced autocomplete.
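To make the safety point above concrete, here is a rough sketch (my own naming, nothing standard) of the kind of bounds-checked slice that can be layered onto plain C:

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { int *data; size_t len; } IntSlice;

    int slice_get(IntSlice s, size_t i) {
        if (i >= s.len) {                       /* runtime bounds check */
            fprintf(stderr, "index %zu out of bounds (len %zu)\n", i, s.len);
            abort();
        }
        return s.data[i];
    }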
Building a new language, along with the ecosystem to support it, is a lot of fun. But it also seems like a very inefficient way to address C's issues because you have to recreate so much (including all the things about C that aren't broken), and you have to reach some critical mass of adoption/usage to become relevant and sustainable. And to be frank, it's also a pretty ineffective way to address C's issues because it doesn't actually do anything to help all the existing C code. Very few projects are in a position to be rewritten. Much better would be to have a fine-grained set of solutions that code bases could adopt incrementally according to need and opportunity.
Of course, I realize all this has been happening with C all along. I'm just pointing out that this seems like the right approach, while these C alternatives, fun and exciting as they are (as far as these things go), are probably just sound and fury that will ultimately fade away. (In fact, it might be worse if some catch on... C and all the C code bases will still be there, we'll just have more fragmentation.)
I made my own standard library to replace libc. The lack of safety is hard to address when you don't have a decent enough type system. C's lack of a proper array type is a good example of this.
Before making Odin, I tried making my own C compiler with some extensions, specifically adding proper arrays (slices) with bounds checking, and adding `defer`. This did help things a lot, but it wasn't enough. C still had fundamentally broken semantics in so many places that just "fixing" the problems of C in C was not enough.
I didn't want to make Odin initially, but it was the conclusion I had after trying to fix something that cannot be fixed.
I returned to the language after a stint of work in other tech and to my utter amazement, the parametric polymorphism that was added to the language felt “right” and did not ruin the comprehensibility of the core library.
Thank you gingerBill!
I am dropping the link here: those who can should donate, and even if you don't use Odin, you should consider supporting this and other similar endeavors so they can't stop the signal and it keeps going: https://github.com/sponsors/odin-lang
mrkeen•5h ago
> This makes ZII extra powerful! There is little risk of variables accidentally being uninitialized.
The cure is worse than the problem. I don't want to 'safely' propagate my incorrect value throughout the program.
If we're in the business of making new languages, why not compile-time error for reading memory that hasn't been written? Even a runtime crash would be preferable.
ratatoskrt•5h ago
so... like Rust?
dontlaugh•2h ago
Without unsafe, zero init is not needed.
neonsunset•2h ago
This is opposite to the way unsafe (either syntax or known unsafe APIs) is used today.
dontlaugh•2h ago
All use of p/invoke is also unsafe though, even if the keyword isn’t used. And it’s much more common to wrap a C library than to write a buffer pool.
yusina•4h ago
I "often enough" drive around with my car without crashing. But for the rare case that I might, I'm wearing a seatbelt and have an airbag. Instead of saying "well I better be careful" or running a static analyzer on my trip planning that guarantees I won't crash. We do that when lives are on the line, why not apply those lessons to other areas where people have been making the same mistakes for decades?
sph•3h ago
It is impossible to post about a language on this forum before the pearl clutching starts if the compiler is a bit lenient instead of triple checking every single expression and making you sign a release of liability.
Sometimes, ergonomics and ease-of-programming win over extreme safety. You’ll find that billion dollar businesses have been built on zero-as-default (like in Go), and often people reaching for it or Go are just writing small personal apps, not a cruise missile navigation system.
It gets really tiring.
/rant
yusina•2h ago
Or are you saying that a certain level of bugs is fine and we are at that level? Are you fine with the quality of all the software out there? Then yes, this discussion is probably not for you.
sph•2h ago
This is the kind of generalisation I'm ranting against.
It is not constructive to extrapolate from a discussion about a single, perhaps niche, programming language into advice applicable to "all the software out there". But you probably knew that already.
vacuity•45m ago
There is a lot of subpar software out there, and the rest is largely decent-but-not-great. If it's security I want, that's commonly lacking, and hugely so. If it's performance I want, that's commonly lacking[0]. If it's documentation...you get the idea. We should have rigor by default, and if that means software is produced slower, I frankly don't see the problem with that. (Although commercial viability has gone out the window unless big players comply.) Exceptions will be carved out depending on the scope of the program. It's much harder to add in rigor post hoc. The end goal is quality.
The other issue is that a program's scope is indeed broader than controlling lives, and yet there are many bad outcomes. If I just get my passwords stolen or my computer crashes daily or my messaging app takes a bit too long to load every time, what is the harm? Of course those are wildly different outcomes, but I think at least the first and second are obviously quality issues, and I think the third is also important. Why is the third important? When software is such an integral part of users' lives, minor issues cause faults that prompt workarounds or inefficiencies. [1] discusses a similar line of thought. I know I personally avoid doing some actions commonly (e.g. check LinkedIn) because they involve pain points around waiting for my browser to load and whatnot, nothing major but something that's always present. Software ("automation") in theory makes all things that the user implicitly desires to be non-pain points for the user.
An interesting blend of issues is system dialog password prompts, which users will generally try to either avoid or address on autopilot, which tends to reduce security. Or take system update restarts, which induce not updating frequently. Or take what is perhaps my favorite invective: blaming Electron apps. One Electron app can be inconvenient. Multiple Electron apps can be absurd. I feel like I shouldn't have to justify calling out Electron on HN, but I do, but I won't here.
And take unintended uses: if I need to set down an injured person across two chairs, I sure hope a chair doesn't break or something. Sure, that's not the intended use case of a chair, but I don't think it's unreasonable that a well-made chair would not fail to live up to my expectations. I wouldn't put an elephant on the chair either way, because intuitively I don't expect that much. Even then, users may expect more out of software than is reasonable, but that should be remedied and not overlooked.
Do not mistake having users for having a quality product.
[0] https://news.ycombinator.com/item?id=43971464 [1] https://blog.regehr.org/archives/861
thasso•5h ago
This is not the whole story. You're making it sound like uninitialized variables _have_ a value but you can't be sure which one. This is not the case. Uninitialized variables don't have a value at all! [1] has a good example that shows how the intuition of "has a value but we don't know which" is wrong.
If you assume an uninitialized variable has a value (but you don't know which), the example program in [1] should run to completion without issue. But this is not the case. From the compiler's point of view, x doesn't have a value at all, and so it may choose to unconditionally return false. This is weird, but it's the way things are. It's a Rust example, but the same can happen in C/C++. In [2], the compiler turned a sanitization routine in Chromium into a no-op because they had accidentally introduced UB.
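A C analogue of that example (a sketch; what actually happens depends entirely on the compiler and optimization level) would be something like:

    #include <assert.h>
    #include <stdbool.h>

    bool always_true(unsigned char x) {
        return x < 120 || x == 120 || x > 120;
    }

    int main(void) {
        unsigned char x;            /* never written: reading it is UB */
        assert(always_true(x));     /* the optimizer may legally assume anything
                                       here, including folding the check to false */
        return 0;
    }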
[1]: https://www.ralfj.de/blog/2019/07/14/uninit.html
[2]: https://issuetracker.google.com/issues/42402087?pli=1
gingerBill•5h ago
Because that's a valid conceptualization you could have for a specific language. Your approach and the other person's approach are both valid but different, and as I said in another comment, they come with different compromises.
If you are thinking like some C programmers, then `int x;` can either have a value which is just not known at compile time, or you can think of it having a specialized value of "undefined". The compiler could work with either definition; it just happens that most compilers nowadays, for C and Rust at least, use the definition you speak of, for better or for worse.
nlitened•2h ago
I am pretty sure that in C, when a program reads an uninitialized variable, it is "undefined behavior", and it is pretty much allowed to, and can even be expected to, crash, for example if the variable turned out to be on an unallocated page of stack memory.
So literally the variable does not have a value at all, as that part of address space is not mapped to physical memory.
gingerBill•48m ago
However, I was using that "C programmers" bit to explain the conceptualization aspect, and how it also applies to other languages. Not every language, even among systems languages, has the same concepts as C, especially the same construction as "UB".
iainmerrick•2h ago
I want to push back on the idea that it's a "trade-off", though -- what are the actual advantages of the ZII approach?
If it's just more convenient because you don't have to initialize everything manually, you can get that with the strict approach too, as it's easy to opt-in to the ZII style by giving your types default initializers. But importantly, the strict approach will catch cases where there isn't a sensible default and force you to fix them.
Is it runtime efficiency? It seems to me (but maybe not to everyone) that initialization time is unlikely to be significant, and if you make the ZII style opt-in, you can still get efficiency savings when you really need them.
The explicit initialization approach seems strictly better to me.
gingerBill•2h ago
The thing is, initialization cost is a lot more than you think it is, especially when it's done on a per-object level rather than a "group" level.
This is kind of the point of trying to make the zero value useful: it's trivially initialized. In languages that are much more strict in their approach, initialization is done at that per-object level, which means its cost goes from anywhere between free (VirtualAlloc/mmap has to produce zeroed memory) and trivially linear (e.g. memset) to a lot of nested hierarchies of initialization (e.g. a for-loop with a constructor call for each value).
It's non-obvious why the "strict approach" would be worse, but it's more about how people actually program rather than a hypothetical approach to things.
So of course each style is about trade-offs. There are no solutions, only trade-offs. And different styles will have different trade-offs, even if they are not immediately obvious and require a bit of experience.
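To put the cost difference in concrete terms, a rough C sketch (the Entity type and its constructor are made up for illustration):

    #include <stdlib.h>

    typedef struct { float x, y; int hp; } Entity;            /* hypothetical */
    void entity_init(Entity *e) { e->x = 0; e->y = 0; e->hp = 100; }

    int main(void) {
        /* "Group" initialization: one zeroing pass, effectively free when
           the pages already come back zeroed from the OS. */
        Entity *a = calloc(1024, sizeof(Entity));

        /* Per-object initialization: a constructor-style call per element,
           which in real code often nests into further allocations. */
        Entity *b = malloc(1024 * sizeof(Entity));
        for (size_t i = 0; i < 1024; i++) entity_init(&b[i]);

        free(a); free(b);
        return 0;
    }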
A good little video on this is from Casey Muratori, "Smart-Pointers, RAII, ZII? Becoming an N+2 programmer": https://www.youtube.com/watch?v=xt1KNDmOYqA
iainmerrick•2h ago
But still, in other parts of the program, ZII is bad! That local or global variable pointing at an ArrayBuffer should definitely not be zero-initialized. Who wants a null pointer, or a pointer to random memory of unknown size? Much better to ensure that a) you actually construct a new TypedArray, and b) you don't use it until it's constructed.
I guess if you see the vast majority of your action happening inside big arrays of structs, pervasive ZII might make sense. But I see most of the action happening in local and temporary variables, where ZII is bad and explicit initialization is what you want.
Moving from JavaScript to TypeScript, to some extent you can get the best of both worlds. TS will do a very good (though not perfect) job of forcing you to initialize everything correctly, but you can still use TypedArray and DataView and take advantage of zero-initialization when you want to.
ZII for local variables reminds me of the SmallTalk / Obj-C thing where you could send messages to nil and they're silently ignored. I don't really know SmallTalk, but in Obj-C, to the best of my knowledge most serious programmers think messages to nil are a bad idea and a source of bugs.
Maybe this is another aspect where the games programming mindset is skewing things (besides the emphasis on low-level performance). In games, avoiding crashes is super important and you're probably willing to compromise on correctness in some cases. In most non-games applications, correctness is super important, and crashing early if something goes wrong is actually preferable.
gingerBill•39m ago
I normally say "try to make the zero value useful" and not "ZII" (which was a mostly jokey term Casey Muratori came up with in contrast to RAII) because then it is clear that there are cases when it is not possible to do ZII. ZII is NOT a _maxim_ but what you should default to, and then do something else where necessary. This is my point, and I can probably tell you even more examples of where "ZII is bad" than you could think of, but this is the problem with describing the idea to people: they take it as a maxim, not a default.
And regarding pointers, I'm in the camp that nil pointers are, empirically speaking, the most trivial type of invalid pointer to catch. Yes they cause problems, but because of how modern systems are structured with virtual memory, they are trivial to catch and deal with in practice. Yes, you could design the type system of a language to make nil pointers not be a thing unless you explicitly opt into them, but then that has another trade-off which may or may not be a good thing depending on the application.
The Objective-C thing is just a poorly implemented system for handling `nil`. It should have been more consistent but wasn't. That's it.
I'd argue "correctness" is important in games too, but the conception of "correctness" is very different there. It's not about provability but testability, which are both valid forms of "correctness" but very different.
And in some non-game applications, crashing early is also a very bad thing, and for some games, crashing early is desired over corrupted saves or other things. It's all about which trade-offs you can afford, and I would not try to generalize too much.
iainmerrick•6m ago
I don't think I'll ever abandon the idea that making code "correct by construction" is a good goal. It might not always be achievable or practical but I strongly feel it's always something to aim for. For me, silent zero initialization compromises that because there isn't always a safe default.
I think nil pointers are like NaNs in arithmetic. When a nil or a NaN crops up, it's too late to do anything useful with it, you generally have to work backwards in the debugger to figure out where the real problem started. I'd much rather be notified of problems immediately, and if that's at compile time, even better.
In the real world, sure, I don't code review every single arithmetic operation to see if it might overflow or divide by zero. But when the compiler can spot potential problem areas and force me to check them, that's really useful.
slowmovintarget•3h ago
Much better outcomes and failure modes than RAII. IIRC, Odin mentions game programming as one of its use cases.
CyberDildonics•20m ago
He thinks that every RAII variable is a failure point and that you only have to think about ownership if you are using RAII, so it incurs mental overhead.
The reality is that you have to understand the lifetime and ownership of your allocations no matter what. If the language does nothing for you the allocation will still have a lifetime and a place where the memory is deallocated.
He also talks about combining multiple allocations in to a single allocation that then gets split into multiple pointers, but that could easily be done in C++.
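For what it's worth, that combined-allocation trick is only a handful of lines in plain C as well (a sketch, with made-up names):

    #include <stdlib.h>

    typedef struct { float *pos; float *vel; int count; } Particles;

    /* One allocation backing several arrays, carved up by offset... */
    Particles particles_make(int count) {
        float *block = malloc((size_t)count * 2 * sizeof(float));
        Particles p = { .pos = block, .vel = block + count, .count = count };
        return p;
    }

    /* ...and a single free releases all of them. */
    void particles_free(Particles *p) { free(p->pos); }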
lerno•2h ago
Clearly the lack of zeroing in C was a trade-off at the time. Just like UB on signed overflow. And now people seem to consider them "obviously correct designs".
Tuna-Fish•2h ago
"Improperly using a variable before it is initialized" is a very common class of bug, and an easy programming error to make. Zero-initializing everything does not solve it! It just converts the bugs from ones where random stack frame trash is used in lieu of the proper value into ones where zeroes are used. If you wanted a zero value, it's fine, but quite possibly you wanted something else instead and missed it because of complex initialization logic or something.
What I want is a compiler that slaps me when I forget to initialize a proper value, not one that quietly picks a magic value it thinks I might have meant.
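For the simple cases, compilers can already do some of that slapping when asked; GCC and Clang will typically flag something like this with -Wall (and optimizations on), though no static check of this kind can ever be complete:

    int f(int cond) {
        int x;                /* no initializer */
        if (cond) x = 42;
        return x;             /* "x may be used uninitialized" warning */
    }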
dooglius•1h ago
https://en.wikipedia.org/wiki/Rice%27s_theorem?useskin=vecto...