But from what I remember, this still uses the LLVM backend, right? Sure, you can beat LLVM on compilation speed and number of platforms supported, but when it comes to emitting great assembly, it is almost unbeatable.
Time complexity may be O(lines), but a compiler can still be faster or slower depending on the constant factor. And for incremental updates, compilers can do significantly better than O(lines).
In debug mode, Zig uses LLVM with no optimization passes. On Linux x86_64, it uses its own native backend instead. This backend can compile significantly faster (2x or more) than LLVM.
Zig's own native backend is designed for incremental compilation. This means that after the initial build, very little work has to be done for the next emit: it needs to rebuild the affected function, potentially rebuild other functions which depend on it, and then directly update the one part of the output binary that changed. This is significantly better than O(n) for edits.
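(A toy sketch of the idea, not Zig's actual implementation: cache generated code per function, keyed by a hash of its source, so an edit only pays for the functions whose hashes changed. The names below are made up for illustration.)

    use std::collections::hash_map::DefaultHasher;
    use std::collections::HashMap;
    use std::hash::{Hash, Hasher};

    fn hash_src(src: &str) -> u64 {
        let mut h = DefaultHasher::new();
        src.hash(&mut h);
        h.finish()
    }

    // Stand-in for real code generation: the expensive step we want to skip.
    fn codegen(src: &str) -> Vec<u8> {
        src.bytes().collect()
    }

    // Recompile a function only if its source hash changed since the last build.
    fn rebuild(cache: &mut HashMap<String, (u64, Vec<u8>)>, name: &str, src: &str) {
        let h = hash_src(src);
        if cache.get(name).map(|(old, _)| *old) != Some(h) {
            cache.insert(name.to_string(), (h, codegen(src)));
        }
    }

    fn main() {
        let mut cache = HashMap::new();
        rebuild(&mut cache, "main", "fn main() {}"); // first build: codegen runs
        rebuild(&mut cache, "main", "fn main() {}"); // unchanged: no work done
    }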
Color me skeptical. I've only got 30 years of development under my belt, but even a 1 minute compile time is dwarfed by the time it takes to write and reason about code, run tests, work with version control, etc.
Further, using Rust as an example, even a project which takes 5 minutes to build cold only takes a second or two on a hot build thanks to caching of already-built artifacts.
Which leaves any compile time improvements to the very first time the project is cloned and built.
Consequently, faster compile times would not alter my development practices, nor allow me to iterate any faster.
I think the web frontend space is a really good case for fast compile times. It's gotten to the point that you can make a change, save a file, the code recompiles and is sent to the browser and hot-reloaded (no page refresh) and your changes just show up.
The difference between this experience and my last time working with Ember, where we had long compile times and full page reloads, was incredibly stark.
As you mentioned, the hot build with caching definitely does a lot of heavy lifting here, but in some environments, such as a CI server, having minutes long builds can get annoying as well.
> Consequently, faster compile times would not alter my development practices, nor allow me to iterate any faster.
Maybe, maybe not, but there's no denying that faster feels nicer.
Given finite developer time, spending it on improved optimization and code generation would have a much larger effect on my development. Even if builds took twice as long.
I'm much more productive when I can see the results within 1 or 2 seconds.
That's my experience today with all my Rust projects. Even though people decry the language for long compile times. As I said, hot builds, which is every build while I'm hacking, are exactly that fast already. Even on the large projects. Even on my 4 year old laptop.
On a hot build, build time is dominated by linking, not compilation. And even halving a 1s hot build will not result in any noticeable change for me.
Rust has excellent support for shared libraries. Historically they have involved downcasting to C types using the C ABI, but now there are more options.
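A minimal sketch of the classic C-ABI route (the function name is made up; the crate would be built as a cdylib so anything with C FFI can load it):

    // In Cargo.toml: [lib] crate-type = ["cdylib"]  (builds a C-compatible shared library)

    // Exported without name mangling and with the C calling convention,
    // so anything with C FFI (or dlopen) can call it.
    #[no_mangle]
    pub extern "C" fn mylib_add(a: i32, b: i32) -> i32 {
        a + b
    }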
With fast compile times, running the test suite (which implies recompiling it) is fast too.
Also, if the language itself is designed to make it easy to write a fast compiler, that makes your IDE fast too.
And just in case you're wondering: yes, Go is my dope.
You are far from the embedded world if you think 1 minute here or there is long. I have been involved with many projects that take hours to build, usually caused by hardware generation (FPGA HDL builds) or poor cross-compiling support (custom/complex toolchain requirements). These days I can keep most of the custom shenanigans in the 1 hr ballpark by throwing more compute at a very heavy emulator (to fully emulate the architecture), but that's still pretty painful. One day I'll find a way to use the Zig toolchain for cross compiles, but it gets thrown off by some of the C macro or custom resource-embedding nonsense.
Edit: missed some context on a lazy first read, so ignore the snark above.
Yeah, 1 minute was the OP's number, not mine.
> FPGA HDL builds
These are another thing entirely from software compilation. Placing and routing is a Hard Problem(TM) which evolutionary algorithms only find OK solutions for in reasonable time. Improvements to the algorithms for such carry broad benefits. Not just because they could be faster, but because being faster allows you to find better solutions.
So optimizing compile times isn’t worthwhile because we already do things to optimize compile times? Interesting take.
What about projects for which hot builds take significantly longer than a few seconds? That’s what I assumed everyone was already talking about. It’s certainly the kind of case that I most care about when it comes to iteration speed.
That seems strange to you? If build times constituted a significant portion of my development time I might think differently. They don't. Seems the compiler developers have done an excellent job. No complaints. The pareto principle and law of diminishing returns apply.
> What about projects for which hot builds take significantly longer than a few seconds?
A hot build of Servo, one of the larger Rust projects I can think of off the top of my head, takes just a couple seconds, mostly linking. You're thinking of something larger? Which can't be broken up into smaller compilation units? That'd be an unusual project. I can think of lots of things which are probably more important than optimizing for rare projects. Can't you?
Just for fun, I kicked off a cold build of Bevy, the largest Rust project in my working folder at the moment, which has 830 dependencies, and that took 1m 23s. A second hot build took 0.22s. Since I only have to do the cold build once, right after cloning the repository which takes just as long, that seems pretty great to me.
Are you telling me that you need faster build times than 0.22s on projects with more than 800 dependencies?
> > The reason to care about compile time is because it affects your iteration speed. You can iterate much faster on a program that takes 1 second to compile vs 1 minute.
> Color me skeptical. I've only got 30 years of development under the belt, but even a 1 minute compile time is dwarfed by the time it takes to write and reason about code, run tests, work with version control, etc.
If your counterexample to 1-minute builds being disruptive is a 1-second hot build, I think we’re just talking past each other. Iteration implies hot builds. A 1-minute hot build is disruptive. To answer your earlier question, I don’t experience those in my current Rust projects (where I’m usually iterating on `cargo check` anyway), but I did in C++ projects (even trivial ones that used certain pathological libraries) as well as some particularly badly-written Node ones, and build times are a serious consideration when I’m making tech decisions. (The original context seemed language-agnostic to me.)
I too have encountered slow builds in C++. I can't think of a language with a worse tooling story. Certainly good C++ tooling exists, but is not the default, and the ecosystem suffers from decades of that situation. Thankfully modern langs do not.
I found I work very differently in the two cases. In Delphi I use the compiler as a spell checker. With the C++ code I spent much more time looking over the code before compiling.
Sometimes though you're forced to iterate over small changes. Might be some bug hunting where you add some debug code that allows you to narrow things a bit more, add some more code and so on. Or it might be some UI thing where you need to check to see how it looks in practice. In those cases the fast iteration really helps. I found those cases painful in C++.
For important code, where the details matter, then yeah, you're not going to iterate as fast. And sometimes forcing a slower pace might be beneficial, I found.
Umm, this is completely wrong. Compiling involves a lot of stuff, and language design as well as compiler design can make or break compile times. Parsing is relatively easy to make fast and linear, but the other stuff (semantic analysis) is not. Hence the huge range of compile times across programming languages that are (mostly) the same.
For some work I tend to take pen and paper and think about a solution before I write code. For these problems compile time isn't an issue.
For UI work, on the other hand, it's invaluable to have fast iteration cycles to nail the design, because it's such an artistic and creative activity.
"Proebsting's Law: Compiler Advances Double Computing Power Every 18 Years"
The implication is that doing the easy, obvious and fast optimizations is Good Enough(tm).
Even if LLVM is God Tier(tm) at optimizing, the cost for those optimizations swings against them very quickly.
I think we'll see Cranelift take off in Rust quite soon, though I also think it wouldn't be the juggernaut of a language if they hadn't stuck with LLVM in those early years.
Go seems to have made the choice long ago to eschew outsourcing of codegen and linking, and has done well for it.
> The resulting code performs on average 14% better than LLVM -O0, 22% slower than LLVM -O1, 25% slower than LLVM -O2, and 24% slower than LLVM -O3
So it is more like 24% slower, not 14%. Perhaps a typo (24/14), or they got the direction mixed up (it is +14 vs -24), or I'm reading that wrong?
Regardless, those numbers are on a particular set of database benchmarks (TPC-H), and I wouldn't read too much into them.
I don’t think that means it’s not doable, though.
That's not to say Cranelift isn't a fantastic piece of tech, but I wouldn't take the "14%" or "24%" number at face value.
I can't see myself ever again working on a system with compile times that take over a minute or so (not counting prod builds).
I wish more projects would have their own "dev" compiler that doesn't do all the shit LLVM does, and only use LLVM for the final prod build.
Eiffel, Common Lisp, Java and .NET (C#, F#, VB) are other examples, where we can enjoy fast development loops.
By combining JIT and AOT workflows, you can get best of both worlds.
I think the main reason this isn't as common is the effort that it takes to keep everything going.
I am in the same camp regarding build times, as I keep missing my Delphi experience whenever I'm not using it.
They aren't C++ levels of bad, but they are slow enough to be distracting/flow-breaking. Something like Dart/Flutter, or even TS and frontend work with hot reload, is much leaner. Comparing to fully dynamic languages is kind of unfair in that regard.
I haven't tried Go yet, but from what I've read (and from the language's design philosophy) I suspect it's faster than C#/Java.
Many of the things Google uses to sell Java (the language) over Kotlin also stem from how badly they approach the whole infrastructure.
Try using Java on Eclipse with compilation on save.
This idea that it's all sunshine and lollipops for other languages is wrong.
Let's not mix build tools with compilers.
This, however, has nothing to do with Java: the Kotlin compiler is written in Kotlin, and Gradle is written in an unholy mix of Kotlin, Java, and Groovy (with the latter being especially notorious for being slow).
I wouldn't put them together. C compilation is not the fastest, but it's fast enough not to be a big problem. C++ is a completely different story: not only is it orders of magnitude slower (10x slower is probably not the limit), but on some codebases the compiler needs a few GB of RAM (so you have to set -j below the number of CPU cores to avoid OOM).
C++ builds can be very slow versus plain old C, yes, assuming people make all the mistakes that can be made.
Like overuse of templates, not using binary libraries across modules, not using binary caches for object files (ClearMake style already available back in 2000), not using incremental compilation and incremental linking.
To this day, my toy GTK+/Gtkmm application that I used for an article in The C/C++ Users Journal, and have since ported to Rust, compiles faster in C++ than in Rust on a clean build, exactly because I don't need to start from the genesis of the world for all dependencies.
Granted there are ways around it for similar capabilities, however they aren't the default, and defaults matter.
I do think that dynamic libraries are needed for better plugin support, though.
Unless a shared dependency gets updated, RUSTFLAGS changes, a different feature gets activated in a shared dependency, etc.
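(An illustrative sketch of why feature changes invalidate the cache: Cargo features are just cfg flags baked into the dependency at compile time, so the compiled artifact genuinely differs between feature sets. The crate and feature names here are hypothetical.)

    // lib.rs of a hypothetical shared dependency, with a hypothetical "approx" feature.

    #[cfg(feature = "approx")]
    pub fn inv_sqrt(x: f32) -> f32 {
        // Fast approximate path, only compiled in when the feature is enabled.
        let y = f32::from_bits(0x5f37_59df - (x.to_bits() >> 1));
        y * (1.5 - 0.5 * x * y * y)
    }

    #[cfg(not(feature = "approx"))]
    pub fn inv_sqrt(x: f32) -> f32 {
        // Precise path otherwise: a different compiled artifact, hence a rebuild
        // whenever any crate in the graph toggles the feature.
        1.0 / x.sqrt()
    }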
If Cargo had something like binary packages, they would be opaque to the rest of your project, making them less sensitive to change. It's also hard to share builds between projects because of that sensitivity to differences.
A lot of Rust packages that people use are set up more like header-only libraries. We're starting to see more large libraries that better fit the model of binary libraries, like Bevy and Gitoxide. I'm laying down a vague direction for something more binary-library-like (calling them opaque dependencies) as part of the `build-std` effort (allowing custom builds of the standard library), as that is special-cased as a binary library today.
Plenty of the code was Tcl scripting, and when re-compiling C code, only the affected set of files would be re-compiled; everything else was kept around in object files and binary libraries and, if not affected, only required re-linking.
> Fast! Compiles 34 000 lines of code per minute
This was measured on an IBM PS/2 Model 60.
So let's put this into perspective: Turbo Pascal 5.5 was released in 1989.
The IBM PS/2 Model 60 is from 1987, with an 80286 running at 10 MHz, limited to 640 KB; with luck one would expand it up to 1 MB and use the HMA, at least as far as MS-DOS was concerned.
Now, projecting this to 2025, there is no reason that compiled languages, when using a limited set of optimizations (like TP 5.5 did) at -O0, can't be flying in their compilation times, as seen in good examples like D and Delphi, to name two expressive languages with rich type systems.
A Personal History of Compilation Speed (2 parts): https://prog21.dadgum.com/45.html
"Full rebuilds were about as fast as saying the name of each file in the project aloud. And zero link time. Again, this was on an 8MHz 8088."
Things That Turbo Pascal is Smaller Than: https://prog21.dadgum.com/116.html
Old versions of Turbo Pascal running in FreeDOS on the bare metal of a 21st century PC is how fast and responsive I wish all software could be, but never is. Press a key and before you have time to release it the operation you started has already completed.
- Turbo Pascal was compiling at -O1, at best. For example, did it ever inline function calls?
- it's harder to generate halfway decent code for modern CPUs with deep pipelines, caches, and branch predictors than it was for the CPUs of the time.
Shouldn't be the case for an O0 build.
In my computer science class (which used Turbo C++), people would try to get there early in order to get one of the two 486 machines, as the compilation times were a huge headache (and this was without STL, which was new at the time).
That's with optimizations turned on, including automatic inlining, as well as a lot of generics and such jazz.
But I somewhat agree that for -O0 the current times are not satisfactory, at all.
Let's apply the same rules then.
The C# compiler is brutally slow and the language idioms encourage enormous amounts of boilerplate garbage, which slows builds even further.
People tend to forget that LLVM was pretty much that for the C/C++ world. Clang was worlds ahead of GCC when first released (both in speed and in quality of error messages), and Clang was explicitly built from the ground up to take advantage of LLVM.
One example from the 1980s (there are others to pick from):
https://en.wikipedia.org/wiki/Amsterdam_Compiler_Kit
So naturally it's a way to quickly reach the stage where a compiler is available for a brand-new language, without having to write all the compiler stages by hand.
A kind of middleware for writing compilers, if you wish.
MLIR is part of the LLVM tooling and is the evolution of LLVM IR.
For me, beyond the initial adoption hump, programming languages should bootstrap themselves; if nothing else, it reduces the urban myth that C or C++ always have to be part of the equation.
Not at all; in most cases it is a convenience. Writing usable compilers takes time, and it is an easy way to get everything else going, especially when it comes to porting across multiple platforms.
However, that doesn't make them magical tools without which it is impossible to write compilers.
That is one area where I am fully on board with Go team's decisions.
Is it? I think Rust is a great showcase for why it isn't. Of course it depends somewhat on your compiler implementation approach, but actual codegen-to-LLVM tends to only be a tiny part of the compiler, and it is not particularly hard to replace it with codegen-to-something-else if you so desire. Which is why there is now codegen_cranelift, codegen_gcc, etc.
The main "vendor lock-in" LLVM has is if you are exposing the tens of thousands of vendor SIMD intrinsics, but I think that's inherent to the problem space.
Of course, whether you're going to find another codegen backend (or are willing to write one yourself) that provides similar capabilities to LLVM is another question...
> You bootstrap extra fast, you get all sorts of optimization passes and platforms for free, but you lose out on the ability to tune the final optimization passes and performance of the linking stages.
You can tune the pass pipeline when using LLVM. If your language is C/C++ "like", the default pipeline is good enough that many such users of LLVM don't bother, but languages that differ more substantially will usually use fully custom pipelines.
> I think we'll see Cranelift take off in Rust quite soon, though I also think it wouldn't be the juggernaut of a language if they hadn't stuck with LLVM in those early years.
I'd expect that most (compiled) languages do well to start with an LLVM backend. Having a tightly integrated custom backend can certainly be worthwhile (and Go is a great success story in that space), but it's usually not the defining feature of the language, and there is a great opportunity cost to implementing one.
Nothing about LLVM is a trap for C++ as that is what it was designed for.
How much do you think the BSDs get back from Apple and Sony?
It is not even super optimized (single thread, no fancy tricks) but it is so far unbeaten by a large margin. Of course I use Clang for releases, but the codegen of tcc is not even awful.
Go’s main objectives were fast builds and a simple language.
TypeScript is tacked onto another language that doesn't really care about TS and has three decades of warts, cruft, and hacks thrown together.
On the one hand, the Go type system is a joke compared to TypeScript's, so the TypeScript compiler has a much harder job of type checking. On the other hand, once type checking is done, TypeScript just needs to strip the types and it's done, while Go needs to optimize and generate assembly.
vlang is really fast, recompiling itself entirely within a couple of seconds.
And Go's compiler is pretty fast for what it does too.
No one platform has a unique monopoly on build efficiency.
And there are also build caching tools and techniques that obviate the need for recompilation altogether.
Does V still just output C and use TCC under the hood?
As a warning, you need to make sure --lineDir=off is set in your nim.cfg or config.nims (to side-step the infinite loop mentioned in https://github.com/nim-lang/Nim/pull/23488). You may also need --tlsEmulation=on if you use threads, and you may want --passL=-lm.
tcc itself is quite fast. Faster for me than Perl 5 interpreter start-up (with both on trivial files). E.g., on an i7-1370P:
tim "tcc -run /t/true.c" "/bin/perl</n"
58.7 +- 1.3 μs (AlreadySubtracted)Overhead
437 +- 14 μs tcc -run /t/true.c
589 +- 10 μs /bin/perl</n
{ EDIT: /n -> /dev/null and true.c is just "int main(int ac,char*av){return 0;}". }

I never tried vlang but I would say this is pretty niche, while C is a standard.
AFAIK tcc is unbeaten, and if we want to be serious about compilation speed comparison, I’d say tcc is a good baseline.
In other words, one needs to have absolute trust in such tools to be able to rely on them.
I only wish it supported C23...
However, there are several C++23 goodies in the latest VC++, finally.
Also, let's not forget that Apple and Google are no longer that invested in clang itself, as opposed to LLVM.
It is up for others to bring clang up to date regarding ISO.
https://en.cppreference.com/w/c/compiler_support/23.html
(Not sure how much Apple/Google even cared about the C frontend before, though; but at least keeping the C frontend and stdlib up to date requires far less effort than C++.)
Most of the work going into the LLVM ecosystem goes directly into the LLVM tooling itself; clang was started by Apple, and Google picked up on it.
Nowadays they aren't as interested: given Swift, C++ on Apple platforms is mostly for MSL (C++14 baseline) and driver frameworks (also a C++ subset), while Google went their own way after the ABI drama and cares about what fits into their C++ style guide.
I know Intel is one of the companies that picked up some of the work, yet other compiler vendors that replaced their proprietary forks with clang don't seem that eager to contribute upstream, other than LLVM backends for their platforms.
Such are the wonders of the Apache 2.0 license.
https://github.com/aherrmann/rules_zig
Real-world projects like ZML use it:
[1]: https://andrewkelley.me/post/zig-cc-powerful-drop-in-replace...
That said, there are definitely still bugs in their self-hosted compiler. For example, for SQLite I have to use LLVM - https://github.com/vrischmann/zig-sqlite/issues/195 - which kinda sucks.
E.g., here it is kqueue-aware on FreeBSD: https://github.com/mitchellh/libxev/blob/34fa50878aec6e5fa8f...
Might not be that different to add OpenBSD. Someone would begin here: https://github.com/mitchellh/libxev/blob/main/src/backend/kq... It's about 1/3 tests and 2/3 mostly-designed-to-be-portable code. Some existing gaps for FreeBSD, but fixing those (and adding OpenBSD to some switch/enums) should get you most of the way there.
If you care about compilation speed because it's slowing down development, wouldn't it make sense to work on an interpreter? Maybe I'm naive, but it seems the simpler option.
Compiling for executable speed seems inherently orthogonal to compilation time.
With an interpreter you have the potential of lots of extra development tools as you can instrument the code easily and you control the runtime.
Sure, in some corner cases people need to only be debugging their full-optimization RELEASE binary, and for them working on an interpreter, or even a DEBUG build, just doesn't make sense. But that's a tiny minority. Even there, you're usually optimizing a hot loop that's going to compile instantly anyway.
That's true at the limit. As is often the case, there's a vast space in the middle, between the extremes of ultimate executable speed and ultimate compiler speed, where you can make optimizations that don't trade off against each other. Where you can make the compiler faster while producing the exact same bytecode.
Even with C++ and heavy stdlib usage it's possible to have debug builds that are only around 3 to 5 times slower than release builds. And you need that wiggle room anyway to account for lower-end CPUs, which your game had better be able to run on.
Debug builds are used sometimes, just not most of the time.
Naturally you can abstract them away, as middleware does; however, that is where tracking down performance bugs usually leads.
I've never done it, but I just find it hard to believe the slowdown would be that large. Most of the computation is on the GPU, and you can set your build up such that you link to libraries built at different optimization levels... and they're likely the ones doing most of the heavy lifting. You're not rebuilding all of the underlying libs because you're not typically debugging them.
EDIT:
If you're targeting a console, why would you not debug using higher-end hardware? If anything, it's an argument in favor of running on an interpreter on a very high-end computer for the majority of development.
This was building the entire game code with the same build options, though; we didn't bother to build parts of the code in release mode and parts in debug mode, since debug performance was fast enough for the game to still run in real time. We also didn't use a big integrated engine like UE, only some specialized middleware libraries that were integrated into the build as source code.
We did spend quite some effort to keep both build time and debug performance under control. The few cases where debug performance became unacceptable were usually caused by 'accidentally exponential' problems.
> Most of the computation is on GPU
Not in our case, the GPU was only used for rendering. All game logic was running on the CPU. IIRC the biggest chunk was pathfinding and visibility/collision/hit checks (e.g. all sorts of NxM situations in the lower levels of the mob AI).
I've heard Rust classified as Safety, Performance, Usability in that order and Zig as Performance, Usability, Safety. From that perspective the build speed-ups make sense while an interpreter would only fit a triple where usability comes first.
This is why the value proposition of Zig is a question for people, because if you got measurably better performance out of Zig in exchange for worse safety, that would be one thing. But Rust is just as performant, so with Zig you're exchanging safety for expressivity, which is a much less appealing tradeoff; the lesson from C is that expressivity without safety encourages programmers to create a minefield of exploitable bugs.
Zig is a lot safer than C by nature. It does not aim to be as safe as Rust by default. But the language is a lot simpler. It also has excellent cross-compiling, much faster compilation (which will improve drastically as incremental compilation is finished), and a better experience interfacing with/using C code.
So there are situations where one may prefer Zig, but if memory-safe performance is the most important thing, Rust is the better tool since that is what it was designed for.
Zig feels like a small yet modern language that lacks many of C's footguns. It has far fewer implicit type conversions. There is no preprocessor. Importing native zig functionality is done via modules with no assumed stable ABI. Comptime allows for type and reflection chicanery that even C++ would be jealous of.
Yet I can just include a C header file and link a C library and 24 times out of 25, it just works. Same thing if you run translate-c on a C file, and as a bonus, it reveals the hoops that the tooling has to go through to preserve the fiddly edge cases while still remaining somewhat readable.
Zig isn't C-like in its syntax or its developer experience (e.g. whether it has a package manager), but in its execution model: unsafe, imperative, deemphasizes objects, metaprogramming by token manipulation.
I am only aware of the memory model, specifically pointers and consistency/synchronization, which extends the C memory model with abstractions deemed "memory-safe" based on separation logic and strict aliasing. As I understand it, you criticize Zig for not offering safety abstractions, because one can also write purely unsafe Rust, which is likewise covered by the Rust "execution model".
Rust has no inheritance typical for OOP. Zig has no token manipulation, but uses "comptime".
> expressivity without safety encourages programmers to create a minefield of exploitable bugs.
Also, psychologically, the combination creates a powerful and dangerously false sensation of mastery in practitioners, especially less experienced ones. Part of Zig's allure, like C's bizarre continued relevance, is the extreme-sport nature of the user experience. It's exhilarating to do something dangerous and get away with it.
Zig is not an extreme-sports experience. It has much better memory guarantees than C. It's clearly not as good as Rust if that's your main concern, but Rust people also need to think long and hard about why Rust has trouble competing with languages like Go, Zig, and C++ nowadays.
What's the use case for Zig? You're in that 1% of projects in which you need something beyond what a garbage collector can deliver and, what, you're in the 1% of the 1% in which Rust's language design hurts the vibe or something?
You can also get that "safer, but not Safe" feeling from modern C++. So what? It doesn't really matter whether a language is "safer". Either a language lets you prove the absence of certain classes of bug or it doesn't. It's not like C, Zig, and Rust form a continuum of safety. Two of these are different from the third in a qualitative sense.
You want a modern language that has package management, bounds checks, and pointer checks out of the box. Zig is a language you can pick up quickly, whereas Rust takes years to master. It's a good replacement for C++ if you're building a game engine, for example.
> Either a language lets you prove the absence of certain classes of bug or it doesn't. It's not like C, Zig, and Rust form a continuum of safety. Two of these are different from the third in a qualitative sense.
Repeating my critique from the previous comment again: yes, Zig brings in additional safety compared to C. Dismissing all of that out of hand does not convince anyone to use Rust.
Its value proposition is aimed at the next generation of systems programmers who aren't interested in Rust-like languages but in C-like ones.
Current systems devs working in C won't find Zig's or Odin's value propositions to matter enough or, indeed, as you point out, to offer enough of a different solution the way Rust does.
But I'm 100% positive that Zig will be a huge competitor to Rust in the next decade, because it's very appealing to people willing to get into systems programming but not interested in Rust.
I like the Julia approach. Full on interactivity, the “interpreter” just compiles and runs your code.
SBCL, my favorite Common Lisp, works the same way, I think.
It's finally as snappy as recompiling the "Borland Pascal version of Turbo Vision for DOS" was on an Intel 486 in 1995, when I graduated from high school...
The C version of Turbo Vision was 5-10x slower to compile at that time too.
Turbo Vision is a TUI windowing framework, which was used for developing the Borland Pascal and C++ IDEs. Kinda like a character mode JetBrains IDE in 10 MB instead of 1000 MB...
1. Zig uses three(!) frontend IRs, Odin and C3 only one.
2. Zig relies heavily on comptime to provide most of its language features. C3, which has a similar set of features, doesn't encourage excessive use of compile time and offers more features built into the language (for example, format checking is a builtin, whereas in Zig it's userland, even though C3 has the capability to do it like Zig). Although note that Zig only checks code that is directly traced, whereas C3 will check all code, and so will check much more code than Zig does, especially for simple code.
3. Zig generates A LOT of code for even just a ”safe” Hello World.
On top of that, maybe there are just some other inefficiencies that haven't been addressed yet by the Zig team?
Even though I have stakes in the game I hope Zig gets faster. We need to move away from the illusion that Swift, C++ and Rust compilation speeds are anywhere near acceptable.