At some point you realize you need type information, so you just add it to your func params.
That bubbles all the way up and you are done. Or you realize that in certain situations it is not possible to provide the type, and you need to solve an arch/design issue.
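For illustration, a minimal sketch of that bubbling; all names here are hypothetical:

```zig
const std = @import("std");

// `sum` needs the element type, so `average` must take it too, all the
// way up the call chain until some caller supplies a concrete type.
fn sum(comptime T: type, xs: []const T) T {
    var total: T = 0;
    for (xs) |x| total += x;
    return total;
}

fn average(comptime T: type, xs: []const T) T {
    return sum(T, xs) / @as(T, @intCast(xs.len));
}

pub fn main() void {
    const data = [_]u32{ 2, 4, 6 };
    std.debug.print("{d}\n", .{average(u32, &data)}); // 4
}
```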
Perhaps safety is the tradeoff for the comparative ease of using the language relative to Rust, but I’d love the best of both worlds if it were possible
TypeScript is to JavaScript
as
Zig is to C
I am a huge TS fan.
[0] https://en.m.wikipedia.org/wiki/Embrace,_extend,_and_extingu...
To be fair, I don't believe there is a centralized and stated mission with Zig but it does feel like the story has moved beyond the "Incrementally improve your C/C++/Zig codebase" moniker.
Definitely not the case in Zig. From my experience, the relationship with C libraries amounts to "if it works, use it".
And the relationship with C libraries certainly feels like a placeholder, akin to the period before the compiler was self-hosted. While I have seen some novel projects in Zig, there are certainly more than a few "pure Zig" rewrites of C libraries. Ultimately, that is everyone's prerogative. I just wonder if the Zig community is teeing up a repeat of Rust's actix-web drama, but with the flashpoint being the use of C libraries instead of all-Zig counterparts (assuming some level of maturity in the latter) rather than the use of unsafe. While Zig's community appears healthier and more pragmatic, hype and ego have a way of ruining everything.
Yes
> Dynamic linking?
Yes
> Importing/inclusion?
Yes
> How does this translate (no pun intended) when the LLVM backend work is completed?
I'm not sure what you mean. It sounds like you think they're working on being able to use LLVM as a backend, but that has long been supported; the current work is on not depending on LLVM as a requirement.
> Does this extend to reproducible builds?
My hunch would be yes, but I'm not certain.
> Hermetic builds?
I have never heard of this, but I would guess the same as reproducible.
> While I have seen some novel projects in Zig, there are certainly more than a few "pure Zig" rewrites of C libraries.
It's a nice exercise, especially considering how close C and Zig are semantically. It's helpful for learning to see how C things are done in Zig, and rewriting things lets you isolate that experience without also being troubled with creating something novel.
For more than a few that are not rewrites, check out https://github.com/allyourcodebase, a group that repackages existing C libraries with the Zig package manager / build system.
andrewrk's tone toward C and its main ecosystem (POSIX) is very hostile, if that is something you'd like to go by.
C interop is very important, and very valuable. However, by removing undefined behaviours, replacing macros that do weird things with well thought-through comptime, and making sure that the zig compiler is also a c compiler, you get a nice balance across lots of factors.
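To make the macro point concrete, a minimal sketch with hypothetical names: a C `#define SQUARE(x) ((x) * (x))` evaluates its argument twice, so `SQUARE(i++)` is a classic footgun, while the Zig equivalent is an ordinary function.

```zig
const std = @import("std");

// The argument is evaluated exactly once; genericity comes from the
// type system (`anytype`), not from textual substitution.
fn square(x: anytype) @TypeOf(x) {
    return x * x;
}

pub fn main() void {
    std.debug.print("{d}\n", .{square(@as(i32, 7))}); // 49
}
```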
It's a great language, I encourage people to dig into it.
Whether that ends up happening is obviously yet to be seen; as it stands there are plenty of Zig codebases with C in the mix. The idea, though, is that there shouldn't be anything stopping a programmer from replacing that C with Zig, and the two languages only coexist for the purpose of allowing that replacement to be gradual.
It’s totally subjective, but I find the language boring to use. For side projects I like having fun, so I picked Zig.
To each his own of course.
Hard disagree about refactoring. Rust is one of the few languages where you can actually do refactoring rather safely without having tons of tests that just exist to catch issues if code changes.
I am just going to quote what pcwalton said the other day, which perhaps answers your question.
>> I’d be much more excited about that promise [memory safety in Rust] if the compiler provided that safety, rather than asking the programmer to do an extraordinary amount of extra work to conform to syntactically enforced safety rules. Put the complexity in the compiler, dudes.
> That exists; it's called garbage collection.
> If you don't want the performance characteristics of garbage collection, something has to give. Either you sacrifice memory safety or you accept a more restrictive paradigm than GC'd languages give you. For some reason, programming language enthusiasts think that if you think really hard, every issue has some solution out there without any drawbacks at all just waiting to be found. But in fact, creating a system that has zero runtime overhead and unlimited aliasing with a mutable heap is as impossible as finding two even numbers whose sum is odd.
Seeing how most people hate the lifetime annotations, yes. For the foreseeable future.
People want unlimited freedom. Unlimited freedom rhymes with unlimited footguns.
I ask because I am obviously blind to other cases - that's what I'm curious about! I generally find the &s to be a net help even without mem safety ... They make it easier to reason about structure, and about when things mutate.
The refusal to accept code that the developer knows is correct, simply because it does not fit how the borrow checker wants to see it implemented. That kind of heavy-handed and opinionated supervision is overhead to productivity. (In recent times, others have taken to saying that Rust is less "fun.")
When the purpose of writing code is to solve a problem and not engage in some pedantic or academic exercise, there are much better tools for the job. There are also times when memory safety is not a paramount concern. That makes the overhead of Rust not only unnecessary but also unwelcome.
How do you know it is correct? Did you prove it with pre-conditions, invariants, and post-conditions? Or did you assume it based on prior experience?
How do you know you haven't been writing unsafe code for years, when C's list of undefined behaviors has around 200 entries [1]?
[1] https://www.dii.uchile.cl/~daespino/files/Iso_C_1999_definit... (Annex J.2, page 490)
The aliasing rules in Rust are also pretty strict. There are plenty of single-threaded programs where I want to be able to occasionally read a piece of information through an immutable reference, but that information can be modified by a different piece of code. This usually indicates a design issue in your program but sometimes you just want to throw together some code to solve an immediate problem. The extra friction from the borrow checker makes it less attractive to use Rust for these kinds of programs.
You could do that using Cell or RefCell. I agree that it makes it more cumbersome.
I'm curious: what are these other languages that can do these things? I read HN regularly but don't recall them. Or maybe that includes things like Java's annotation processing, which is so clunky that I wouldn't classify it as equivalent.
It’s beautiful to implement an incredibly fast serde in like 10 lines without requiring other devs to annotate their packages.
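In that spirit, a toy sketch (not anyone's actual serde; the `@typeInfo` field spelling follows recent Zig, where older versions used `.Struct`, and `std.io` has also shifted between releases):

```zig
const std = @import("std");

// @typeInfo exposes struct fields at comptime, so no derive annotation
// is needed on the target type.
fn toJson(writer: anytype, value: anytype) !void {
    const T = @TypeOf(value);
    try writer.writeAll("{");
    inline for (@typeInfo(T).@"struct".fields, 0..) |f, i| {
        if (i != 0) try writer.writeAll(",");
        try writer.print("\"{s}\":{any}", .{ f.name, @field(value, f.name) });
    }
    try writer.writeAll("}");
}

pub fn main() !void {
    const Point = struct { x: i32, y: i32 };
    var buf: [64]u8 = undefined;
    var fbs = std.io.fixedBufferStream(&buf);
    try toJson(fbs.writer(), Point{ .x = 1, .y = 2 });
    std.debug.print("{s}\n", .{fbs.getWritten()}); // {"x":1,"y":2}
}
```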
I wouldn’t include Rust on that list if we’re speaking of compile time and compile-time type abilities.
Last time I tried it, Rust’s const expression system was pretty limited, and its macro system is likewise very weak.
Primarily, you can only get type info by directly passing the type definition to a macro, which is how derive and the like work.
Annotations themselves are pretty great, and AFAIK they are most widely used with reflection or bytecode rewriting instead. I get that the maintainers dislike macro-like capabilities, but the reality is that many of the nice libraries/facilities Java has (e.g., transparent spans) just aren't possible without AST-like modifications. So the maintainers don't provide 1st class support for rewriting, and they hold their noses as popular libraries do it.
Closely related, I'm pretty excited to muck with the new class file API that just went GA in 24 (https://openjdk.org/jeps/484). I don't have experience with it yet, but I have high hopes.
(Not having much Spanish, I at first thought "Odin's disco(teque)" and then "no, that doesn't make sense about sides", but then, surely primed by English "disco", thought "it must mean Odin's record/lp/album".)
"It's Odin's Disc. It has only one side. Nothing else on Earth has only one side."
Unrolling as a performance optimization is usually slightly different, typically working in batches rather than unrolling the entire thing, even when the length is known at compile time.
The docs suggest not using `inline` for performance without evidence that it helps in your specific usage, largely because the bloated binary is likely to be slower unless you have a good reason to believe your case is special, and also because `inline` _removes_ optimization potential from the compiler rather than adding it. Its inlining passes are very, very good; despite having an extremely good grasp on which things should be inlined, I rarely outperform the compiler. I'm never worse, but the ability to not even have to think about it unless/until I get to the microoptimization phase of a project is liberating.
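For reference, the explicit form Zig offers when you have measured a win; `dot4` is a made-up example:

```zig
// `inline for` forces full unrolling when the bounds are comptime-known.
fn dot4(a: [4]f32, b: [4]f32) f32 {
    var acc: f32 = 0;
    inline for (0..4) |i| {
        acc += a[i] * b[i];
    }
    return acc;
}
```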
To me, the uniqueness of Zig's comptime is a combination of two things:
1. comptime replaces many other features that would be specialised in other languages, with or without rich compile-time (or runtime) metaprogramming, and
2. comptime is referentially transparent [1], which makes it strictly "weaker" than AST macros but simpler to understand; what's surprising is just how capable a comptime mechanism with access to introspection can be, even without the referentially opaque power of macros.
These two give Zig a unique combination of simplicity and power. We're used to seeing things like that in Scheme and other Lisps, but the approach in Zig is very different. The outcome isn't as general as in Lisp, but it's powerful enough while keeping code easier to understand.
You can like it or not, but it is very interesting and very novel (the novelty isn't in the feature itself, but in the place it has in the language). Languages with a novel design and approach that you can learn in a couple of days are quite rare.
[1]: In short, this means that you get no access to names or expressions, only the values they yield.
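To make point 1 concrete, a tiny sketch: Zig has no separate generics feature, because a "generic type" is just a comptime function from type to type.

```zig
// A comptime function that returns a new struct type.
fn Pair(comptime T: type) type {
    return struct { first: T, second: T };
}

const origin = Pair(f32){ .first = 0.0, .second = 0.0 };
```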
The innovation in Zig is the restrictions that limit the power of macros.
If I understand correctly you're using the term in a different (perhaps more correct/original?) sense where it roughly means that two expressions with the same meaning/denotation can be substituted for each other without changing the meaning/denotation of the surrounding program. This property is broken by macros. A macro in Rust, for instance, can distinguish between `1 + 1` and `2`. The comptime system in Zig in contrast does not break this property as it only allows one to inspect values and not un-evaluated ASTs.
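A tiny Zig sketch of that contrast, assuming current syntax:

```zig
const std = @import("std");

// By the time `double` runs at comptime, `1 + 1` has already become the
// value 2; unlike a Rust macro, it cannot tell the two spellings apart.
fn double(comptime x: i32) i32 {
    return x * 2;
}

comptime {
    std.debug.assert(double(1 + 1) == double(2));
}
```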
Practically, it's "zig build"-time eval. As such, there's another 'comptime' stage with more freedom: unlimited run time (no @setEvalBranchQuota), and it can do IO (DB schemas, network lookups, etc.). But you lose the freedom to generate Zig types as values in the current compilation; instead, you reduce/project from the target's compiled semantics back to input syntax, down to a string, which then enters your future compilation context again.
Back in the day, when I had to glue Perl and Tcl together via C, passing strings for Perl that were generated through Tcl is what this whole thing reminds me of. Sure, it works. I'm not happy about it. And there's _another_ "macro" stage that you can't even see in your code (it's just @import).
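Roughly, the stage being described looks like this (a sketch only; std.Build APIs shift between releases, so treat the exact calls as approximate):

```zig
// build.zig sketch: the generator step may freely do IO (DB schema,
// network lookups, etc.); the compiler itself never does.
const std = @import("std");

pub fn build(b: *std.Build) void {
    const exe = b.addExecutable(.{
        .name = "app",
        .root_source_file = b.path("src/main.zig"),
        .target = b.standardTargetOptions(.{}),
        .optimize = b.standardOptimizeOption(.{}),
    });

    // Write generated Zig source into the build cache and expose it to
    // the compilation as an importable module.
    const wf = b.addWriteFiles();
    const generated = wf.add("schema.zig", "pub const version = 42;\n");
    exe.root_module.addAnonymousImport("schema", .{ .root_source_file = generated });

    b.installArtifact(exe);
}
```

src/main.zig can then `@import("schema")` and read `version` as an ordinary comptime-known constant.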
The zig community bewilders me at times with their love for lashing themselves. The sort of discussion about which new kind of self-harm they'd love to enforce on everybody is borderline disturbing.
Personally, I find the idea that a compiler might be able to reach outside itself completely terrifying (Access the network or a database? Are you nuts?).
That should be 100% the job of a build system.
Now, you can certainly argue that generating a text file may or may not be the best way to reify the result back into the compiler. However, what the compiler gets and generates should be completely deterministic.