People used to describe it as simpler than Rust. I don't agree that it's simple at all anymore.
None of this is meant to be badmouthing or insulting. I'm a polyglot but love simple languages and syntaxes, so I tend to overly notice such things.
I tend to fall into the former camp. Something like BF would be the ultimate simple language, even if not particularly useful.
A few things have been removed, too, and async/suspend/nosuspend/await and usingnamespace are headed for the woodchipper.
I wish it had moved to snake_case for functions; it's a cosmetic detail, but it drives me crazy.
They’re not rushing, that’s for sure. But I’ve never felt worried that 1.0 will never happen because of an unending pursuit of impossible ideals.
It seems like folks expect stability pre 1.0.
So where is Zig's OS, browser, Docker, engine, security, whatever XYZ, that would make having Zig in the toolbox a requirement?
I don't see Bun or TigerBeetle being that app.
Most of it you can already get in C and C++, by using the tools that have been on the market for the last 30 years.
I think the main big thing left for 1.0 is to resurrect async/await, and that’s a huge thing, because arguably very few languages, if any, have gotten that truly right.
As the PR description mentions: “This is part of a series of changes leading up to "I/O as an Interface" and Async/Await Resurrection.”
So this work is partially related to getting async/await right. And getting IO right is a very important part of that.
I think it’s a good idea for Zig to try to avoid a Python 3 situation after they reach 1.0. The project seems fairly focused to me, but they’re trying to solve some difficult problems. And they spend more time working on the compiler and compiler infrastructure than other languages do, which is also good. Working on their own backend is actually critical for the language itself, because part of what’s holding Zig back from doing async right is limitations and flaws in LLVM.
this was interesting! Do you have a link or something to be able to read about it?
https://www.reddit.com/r/Zig/comments/1d66gtp/comment/l6umbt...
Interesting. I like Zig. I dabble periodically. I’m hoping that maturity and our next generation ag tech device in a few years might intersect.
Throwing another colored-function debacle into a language, replete with yet another round of familiar but slightly differently defined keywords, would be a big turn-off for me. I don’t even know if Grand Central Dispatch counts, but it (and of course Elixir/Erlang) are the only two “on beyond closures/callbacks” async systems I’ve found that worked well.
This would involve removing async/await as keywords from the language.
const pick_a_global_io = ...;

fn needs_io(io: IO) void { ... }

fn doesnt_take_io() void {
    needs_io(pick_a_global_io);
}
Easy peasy. You've resolved the coloring boundary. Now, if you want to be a library writer, yeah, you have to color your functions if you don't want to be an asshole, but for the 95% use case this is not function coloring.
Programming languages which do get used are always in flux, for good reason: Python is still undergoing major changes (free-threading, immutability, and others), and I'm grateful for it.
I still think what drives languages to continuously make changes is the focus on developer UX, or at least the intent to make it better. So, PLs with more developers will always keep evolving.
JangaFX stuff is written in Odin and has some pretty big users.
Which is a pity, because I really liked the language, but discovering what works with what... oh dear.
When you break things regularly, you're forcing a choice on every individual package in the ecosystem: move forward, and leave the old users behind, or stay behind, and risk that the rest of the ecosystem moves forward without you. Now you've got a whole ecosystem in a prisoner's dilemma. For an individual, maybe you can make a choice and dig in and make your way along without too much trouble. But the ecosystem as a whole can't, the ecosystem fractures, and if it doesn't converge on the latest version, it slowly withers and dies.
Rust didn’t even have async/await at that time.
Citation needed. A lot of people wanted Rust to stabilize. That's why they hurried to Rust 1.0.
Also, I found that these interfaces only cause problems for performance and flexibility in Rust, so I didn’t even look at them in Zig.
Andrew’s design decisions in the language have always been impeccable. I’ve never seen him put a foot wrong and would have made the same change myself.
This is also not new to us, Andrew spoke about this at Systems Distributed ‘25.
Also, TigerBeetle has and owns its own IO stack in any event, and we’ve always been careful to use stable language features.
But regardless, it’s in our nature to “do the right thing”, even if that means a bit of change. We call this “Edge” and explicitly hire for people who have the same characteristic, the craftspeople who know how to spot great technical quality, regardless of how young (or old!) a project may be.
Finally, I’ve been in Zig since 2018. I wouldn’t exactly call it “shiny new”. Zig already has the highest quality toolchain and std lib of anything I would use.
I think you'll enjoy Andrew's talk on this too when it comes out in the next few weeks.
The velocity of Zig has been valuable for us. Being able to put things like io_uring or @prefetch in the std lib or language, and having them merged quickly. Zig has been so solid, even with all the fuzzing we do. It's really held up, and upgrades across versions have not been much work, only a pleasure.
Interesting, who designed the old Zig IO stack which alas Andrew needed to replace?
Every day, more and more people started using that bridge.
In 2025, I've rebuilt the bridge twice as big to accommodate the demand of a growing community.
It's great and the people love it!
Wait till the SD25 talk on this comes out, to first understand the rationale a bit better!
The point was that if he did the old design, which needed improving enough to justify breaking the language's backwards compatibility, then why say his decisions are impeccable? Pobody's nerfect.
Again, we use Zig, and this change is welcome for us.
We also like that Zig is able to break backwards compatibility, and are fully signed up for that.
The crucial thing for TigerBeetle is that Zig as language will make the right calls looking to the next few decades, rather than ossify for fear of people who don't use it.
My couple of days of experience with Zig was pretty lackluster where the std lib is concerned: not that it's bad, but it feels like it's lacking a lot of bare essentials. To be expected for a new pre-1.0 language, of course.
The fact that another breaking change has been introduced confirms my suspicion that Zig is not ready for primetime.
My conclusion is to just use C. For low-level programming it's very hard to improve on C. There is unlikely to be any killer feature in some other contender that will let you write the same code in a fifth of the lines or make the code any more understandable.
Yes, C may have its quirky behaviour that people gnash their teeth over. But ultimately, it's not that bad.
If you want to use a better C, use C++. C++ is perfectly fine for using with microcontrollers, for example. Now get back to work!
Huh, it was the 0.14 version number for me.
I also have to disagree with C++ for microcontroller / bare-metal programming. You don't get the standard library, so you're missing out on most features that make C++ worthwhile over C. Sure, you get namespaces, constexpr, and templates, but without any standard types you'll have to build a lot on your own just to get started.
I recently switched to Rust for a bare-metal project, and while it's not perfect, I get a lot more "high level" features than with C or C++.
This distinction makes it really comfortable to use.
Though one caveat about no_std is that you'll need some support library like https://docs.rs/cortex-m-rt/latest/cortex_m_rt/
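For reference, a minimal no_std program built on that support crate looks roughly like this. It's just a sketch, assuming a Cortex-M target with cortex-m-rt and a panic-handler crate as dependencies (panic-halt here is only one example choice):

#![no_std]
#![no_main]

use cortex_m_rt::entry; // the support crate supplies the reset handler and startup glue
use panic_halt as _;    // without std you must bring your own panic handler

#[entry]
fn main() -> ! {
    // Only `core` is available here: no OS, no allocator, no std.
    loop {}
}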
Why is that? Sure, allocating containers and other exception-throwing facilities are a no-go, but the stdlib still contains a lot of useful and usable stuff like <type_traits>, <utility>, <source_location>, <bit>, <optional>, <coroutine> [1] and so on.
[1] Yes, they allocate, but operator new can easily be overridden for the promise class and can have the coro function's arguments forwarded to it. For example, if the coro function takes a "Foo &foo", you can have operator new return foo.m_Buffer (and -fno-exceptions gets rid of the unwinding codegen).
Vendors at this point seem to provide their own implementations of some of the std library components, but the ones I've seen were lacking in terms of features.
In C, the freestanding environment doesn't provide any concrete features: you don't get any functions at all. You get a bunch of useful constants, such as the value of Pi or the maximum value that will fit in an unsigned integer, some typedefs, and that's about it. Concrete stuff from the "C standard library" is not available; for example, there is no in-place sort algorithm, nor a way to compare whether two things are the same (though if they fit in a primitive you can use the equality operator).
In C++, there are concrete functions provided by the language standard in freestanding mode. These, together with definitions for types etc., form the freestanding version of the "standard library" in C++. There was a long period where this was basically untended: it wasn't removed, but it also wasn't tracking new features or feedback. In the last few C++ versions that improved, but even if you have a new enough compiler and it's fully compliant (most are not), there's still not always a rhyme or reason to what is or is not available.
In Rust it's really easy. You always have core, if you've got a heap allocator of some sort you can have alloc, and if there's a whole operating system it provides std.
In most cases a whole type lives entirely in one of those modules; Duration, for example, lives in core. Maybe your $5 device has no idea which year this is, let alone the day, but it definitely does know that 60 seconds is a minute.
But in some cases those modules extend a type. Arrays exist in core, of course: an array of sixty Doodads, where Doodad claims to be totally ordered, can just be unstably sorted, and that works. But what if we want a stable sort, so that two equal Doodads arranged A, B are not reversed to B, A? Well, Rust's core doesn't provide a stable sort; the stable sort that is provided uses an allocation, and so the function you need simply doesn't exist unless you've got an allocator.
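A rough sketch of that split (Doodad is made up for illustration; the alloc-dependent part assumes the final program wires up a global allocator somewhere):

#![no_std]

extern crate alloc; // only possible when the program provides an allocator

use core::time::Duration; // lives in core: no OS or clock required

#[derive(PartialEq, Eq, PartialOrd, Ord)]
pub struct Doodad(pub u32);

pub const MINUTE: Duration = Duration::from_secs(60);

pub fn sort_without_alloc(doodads: &mut [Doodad]) {
    doodads.sort_unstable(); // provided by core: in-place, no allocation
}

pub fn sort_stably(doodads: &mut [Doodad]) {
    // The stable `sort` is defined in the `alloc` crate because it allocates
    // a temporary buffer; without an allocator the method simply isn't there.
    doodads.sort();
}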
Also embedded covers a very wide range of computers.
There is also devkitPPC, shipping with the same toolchain (and which additionally has some Obj-C support iirc).
Custom patches to newlib and friends (https://github.com/devkitPro/buildscripts/) introduce hooks and WEAK functions that allow implementing standard library functions on almost any platform, on a per-platform-library basis or even on a per-program basis (with some restrictions on lock sizes).
> This is roughly analogous to Rust's nostd.
"freestanding" is actually worse that this. It means that the compiler can't even assume things about memcpy and optimize it out (as on gcc it implies -fno-builtin), which pessimizes a lot of idiomatic code (eg. serialization).
The "-nostdlib" option is usually what one wants in many cases (don't link against libc but still provide standard C and C++ headers), such as when compiling privileged code with -mgeneral-regs only and such. This way you can benefit from <chrono>, etc.
If you are writing userland code you should be using a toolchain for this, instead of relying on freestanding/nostdlib, which are geared towards kernel code and towards working around defective toolchains.
Building our own types was a rite of passage for C++ programming back in the early 1990s, and for university C++ curricula as well.
And in the end, things do improve significantly.
In this case, I think the new IO stuff is incredible.
Also, "Zig the language" is currently better designed than "Zig the stdlib", so breaking changes will actually be needed in the future at least in the stdlib because getting it right the first time is very unlikely, and I don't like to be stuck with bad initial design decisions which then can't be fixed for decades (again, as a perfect example of how not to do it, see C++)
If your microcontroller project is, say, <5000 lines, maybe ... but an OS, or a Mellanox verbs or DPDK API, won't fall so easily to such surface-level thinking.
Maybe Zig could help itself by providing, through LLVM, what Google sometimes does for large API-breaking changes: a tool that searches out old API invocations and updates them to the new ones, so upgrading is faster and more operationally effective.
Google's tools do this and give the dev a source-code PR candidate. That's how they can change zillions of calls with confidence.
"Software is just like lasagna. It has many layers, and it tastes best after you let it sit for a while".
I still follow this principle years down the line and avoid introducing shiny new things on my projects.
let him cook
If you want stability, stick to stuff that has stability guarantees, but at the very least let them make breaking changes during development.
At some point people just want their code to work so they go back to something that just works and won't break in a few years.
I hope that the Zig team invests more into helping with migration than they have in the past. My experience for past breaking changes is that downstream developers got left in the cold without clear guidance about how to fix breaking changes.
In Zig 0.12.0 (released just a year ago), there were a lot of breaking changes to the build system that the release notes didn't explain at all. To see what I mean, look at the changes I had to make[0] in a Zig 0.11.0 project and then search the release notes[1] for guidance on those changes. Most of the breaking changes aren't even mentioned, much less explained how to migrate from 0.11.0 to 0.12.0.
>Some of you may die, but that is a sacrifice I am willing to make.
>-Lord Farquaad
[0] https://github.com/mtlynch/zenith/pull/90/files#diff-f87bb35...