Zig got me excited when I stumbled into it about a year ago, but life got busy and then the io changes came along and I thought about holding off until things settled down - it's still a very young language.
But reading the first couple of chapters has piqued my interest in the language and the people working with it in a way I've not run into since I encountered Ruby in ~2006 (before Rails hit v1.0). I just hope the quality stays this high all the way through.
It does not. It dives into compiler details in chapter 1, and smells heavily of AI (not in a good way), as others have pointed out.
HOWTO: The text can be found per-chapter in `./pages/{chapter}.adoc`, and each chapter includes code snippets from a respective `./chapters-data/code/{chapter}/` subdirectory. So, perhaps a hacky way to do it, but being too lazy to fully figure out the asciidoctor flags, I used a script to create a combined book.adoc that includes all the others with `include::{chapter}.adoc` directives, then ran `asciidoctor-pdf -a sourcedir=../chapters-data/code -r asciidoctor-diagram -o book.pdf ./pages/book.adoc`.
though maybe AI is getting to the point it can do stuff like this somewhat decently
I specify the accuracy and false positive rate because otherwise skeptics in comment sections might think it's one of the plethora of other AI detection tools that don't really work.
Because it’s written like a tagline instead of like a sentence people would say to each other.
I'll keep exploring this book though, it does look very impressive.
I was trying to solve a simple problem but Google, the official docs, and LLMs were all out of date. I eventually found what I needed in Zig's commit history, where they casually renamed something without updating the docs. It's been renamed once more apparently, still not reflected in the docs :shrugs:.
A case of a person relying on LLMs so much they cannot imagine doing something big by themselves.
> Learning Zig is not just about adding a language to your resume. It is about fundamentally changing how you think about software.
The "it's not X, it's Y" phrasing screams LLM these days
As someone who uses em-dashes a lot, I’m getting pretty tired of hearing something “screams AI” about extremely simple (and common) human constructs. Yeah, the author does use that convention a number of times. But that makes sense, if that’s a tool in your writing toolbox, you’ll pull it out pretty frequently. It’s not signal by itself, it’s noise. (does that make me an AI!?) We really need to be considering a lot more than that.
Reading through the first article, it appears to be compelling writing and a pretty high quality presentation. That’s all that matters, tbh. People get upset about AI slop because it’s utterly worthless and exceptionally low quality.
E.g., "Zig takes a different path. It reveals complexity—and then gives you the tools to master it."
If we had a reliable oracle, I would happily bet a $K on significant LLM authorship.
The repetitiveness of the shell commands (and using zig build-exe instead of zig run when the samples consist of short snippets), the filler bullet points, and a section organization that fails to convey any actual conceptual structure. And ultimately, throughout the book, the general style of thought lacks any of the Zig community's cultural anachronisms.
If you take a look at the repository you'll also notice baffling tech choices, not justified by the author, that run counter to the Zig ethos.
(Edit: the build-system chapter is an even worse offender in meaningless, cognitively-cluttering headings and flowcharts; it's almost certainly entirely hallucinated, and there is just an absurd degree of unziglikeness everywhere: https://www.zigbook.net/chapters/26__build-system-advanced-t... -- What's with the completely irrelevant flowchart of building the Zig compiler? What even is the point of module-graph.txt? And icing on the cake in the "Vendoring vs Registry Dependencies" section.)
For some of my projects I develop against my own private git server, then when I'm ready to go public, create a new git repo with a fully squashed history. My early commits are basically all `git commit -m "added stuff"`
One example is in chapter 1. It talks about symbol exporting based on platform type, without explaining ELF. This is before talking about while loops.
It's had some interesting nuggets so far, and I've followed along since I'm familiar with some of the broad strokes, but I can see it being confusing to someone new to systems programming.
> Learning Zig is not just about adding a language to your resume. It is about fundamentally changing how you think about software.
“Not just X - Y” constructions.
> By Chapter 61, you will not just know Zig; you will understand it deeply enough to teach others, contribute to the ecosystem, and build systems that reflect your complete mastery.
More not just X - Y constructions with parallelism.
Even the “not made with AI” banner seems AI generated! Note the three-item parallelism.
> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.
I don’t have anything against AI generated content. I’m just confused what’s going on here!
EDIT: after scanning the contents of the book itself I don’t believe it’s AI generated - perhaps it’s just the intro?
EDIT again: no, I’ve swung back to the camp of mostly AI generated. I would believe it if you told me the author wrote it by hand and then used AI to trim the style, but “no AI” seems hard to believe. The flow charts in particular stand out like a sore thumb - they just don’t have the kind of content a human would put in flow charts.
Humans were learning the same patterns off each other. Such style advice has been floating around on e.g. LinkedIn for a while now. Just a couple years later, humans are (predictably) still doing it, even if the LLMs are now too.
We should be giving each other a bit of a break. I'd personally be offended if someone thought I was a clanker.
I'm not sure what to make of that either.
I think it's time to have a badge for non LLM content, and avoid the rest.
Edit: So I wrote this before I read the rest of the thread, where everyone is pointing out this is indeed probably AI, so right off the bat the "AI-free" label is conning people.
https://cadence.moe/blog/2024-10-05-created-by-a-human-badge...
> Zig takes a different path. It reveals complexity—and then gives you the tools to master it.
> This book will take you from Hello, world! to building systems that cross-compile to any platform, manage memory with surgical precision, and generate code at compile time. You will learn not just how Zig works, but why it works the way it does. Every allocation will be explicit. Every control path will be visible. Every abstraction will be precise, not vague.
But sadly people like the prompter of this book will lie and pretend to have written things themselves that they did not. First three paragraphs by the way, and a bingo for every sign of AI.
Anyway, if someone says they didn't use AI, I would personally give them the benefit of the doubt for a while at least.
The formal version is "not only... but also" https://dictionary.cambridge.org/us/grammar/british-grammar/..., which I personally use regularly, but then I often write formally even in informal settings.
"not just... but" is just the less formal version.
Google ngrams shows the "not just ... but" construction has a sharp increase starting in 2000. https://books.google.com/ngrams/graph?content=not+just+*+but...
Same with "not only ... but also" https://books.google.com/ngrams/graph?content=not+only+*+but...
Like many scholarly linguistic constructions, this is one many of us saw in Latin class, with non solum ... sed etiam or non modo ... sed etiam: https://issuu.com/uteplib/docs/latin_grammar/234. I didn't take ancient Greek, but I wouldn't be surprised if there's also a version there.
More info
- https://www.phrasemix.com/phrases/not-just-something-but-som...
- https://www.merriam-webster.com/dictionary/not%20just
- https://www.grammarly.com/blog/writing-techniques/parallelis...
- https://www.crockford.com/style.html
- https://englishan.com/correlative-conjunctions-definition-ru...
I had a discussion on some other submission a couple of weeks back, where several people were arguing "it's obviously AI generated" (the style, by the way, was completely different to this, with quite a few expletives...). When I put the text into 5 random AI detectors, all except one (which said mixed, 10% AI or so) said 100% human. I was being downvoted, and the argument became "AI detection tools can't detect AI"; yet somehow the same people claim there are 100% clear telltale signs that say it's AI (why those detection tools can't detect such signs is baffling to me).
I have the feeling that the whole "it's AI" shtick has become a synonym for "I don't like this writing style".
It really does not add to the discussion. If people immediately posted "there are spelling mistakes, this is rubbish", they would rightfully get downvoted, but somehow saying "it's AI" is acceptable. Would the book be any more or less useful if somebody used AI for writing it? So what is your point?
> Would the book be any more or less useful if somebody used AI for writing it?
Personally, I don't want to read AI generated texts. I would appreciate if people were upfront about their LLM usage. At the very least they shouldn't lie about it.
I have no problem at all reading AI-generated content if it's good, but I don't appreciate dishonesty.
[1]: https://www.pangram.com/

[2]: https://arxiv.org/pdf/2402.14873
Would most LLMs have written that invalid fragment sentence "Because they are." ?
I don't think you have enough to go on to make this accusation.
This kind of “workflow” LLM use has the potential to deliver a lot of value, even in a scenario where the final product is human-composed.
> The book content itself is deliberately free of AI-generated prose. Drafts may start anywhere, but final text should be reviewed, edited, and owned by a human contributor.
There is more specificity around AI use in the project README. There may have been LLMs used during drafting, which has led to the "hallmarks" sticking around that some commenters are pointing out.

Yes. But it's not “free from AI-generated prose”, so why advertise it as such?
And since the first sentence is a lie, why should we believe the second sentence at all?
People have the illusion of reviewing and "owning" the final product, but that is not how it looks from the outside. The quality, the prose style, the errors that slip through due to inevitable AI-induced complacency ALWAYS EVENTUALLY show. If people got out of the AI bubbles they would see it too, alas.
We keep reading the same stories, for at least a couple of years now. There is no novelty anymore. The core issues and problems have stayed the same since GPT-3.5. And because they are so omnipresent on the internet, we have grown able to recognise them almost automatically. It is no longer just a matter of quality; it is an insult to the readers when an author pretends that content is not AI generated just because they "reviewed it". Reviewing something that somebody else wrote is not ownership, especially when that somebody is an LLM.
In any case, I do not care if people want to read or write AI generated books; just don't lie about whether it is AI generated.
I am just impressed by the quality and details and approach of it all.
Nicely done (PS: I know nothing about systems programming and I have been writing code for 25 years)
I too have this feeling sometimes. It's a coping mechanism. I don't know why we have this but I guess we have to see past it and adapt to reality.
Because AI gets things wrong, often, in ways that can be very difficult to catch. By their very nature LLMs write text that sounds plausible enough to bypass manual review (see https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...), so some find it best to avoid using it at all when writing documentation.
Quality prose usually only becomes that after many reviews.
But why would a serious person claim that they wrote this without AI when it's obvious they used it?!
Using any tool is fine, but someone bragging about not having used a tool they actually used should make you suspicious about the amount of care that went into their work.
But if you carefully review and iterate the contributions of your writers - human or otherwise - you get a quality outcome.
But why would you trust the author to have done that when they are lying in a very obvious way about not using AI?
Using AI is fine, it's a tool, it's not bad per se. But claiming very loud you didn't use that tool when it's obvious you did is very off-putting.
If it was so obviously written by AI then finding those mistakes should be easy?
Passing even correct information through an LLM may or may not taint it; it may create sentences which at first glance look similar but have a different, imprecise meaning, and specific wording may be crucial in some cases. So if the style is under question, the content is as well. And if you can write the technically correct text in the first place, why would you put it through another step?
A calculator exists solely for the realm of mathematics, where you can afford to more or less throw away the value of human input and overall craftsmanship.
That is not the case with something like this, which - while it leans in to engineering - is in effect viewed as a work of art by people who give a shit about the actual craft of writing software.
“The Party told you to reject the evidence of your eyes and ears. It was their final, most essential command.”

> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.
If the site would have said something like "We use AI to clean up our prose, but it was all audited thoroughly by a human after", I wouldn't have an issue. Even better if they shared their prompts.
No it isn't. My TI-83 is deterministic and will give me exactly what I ask for, and will always do so, and when someone uses it they need to understand the math first or otherwise the calculator is useless.
These AI models on the other hand don't care about correctness, by design don't give you deterministic answers, and the person asking the question might as well be a monkey as far as their own understanding of the subject matter goes. These models are if anything an anti-calculator.
As Dijkstra points out in his fantastic essay on the idiocy of natural language "computation", what you are doing is exactly not computation but a kind of medieval incantation. Computers were designed to render impossible precisely the nonsense that LLMs produce. The biggest idiot on earth will still get a correct result from the calculator because unlike the LLM it is based on boolean logic, not verbal or pictorial garbage.
https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...
I suppose the author may have deliberately added the "No AI assistance" notice - making sure all the hallucinated bugs are found via outraged developers raising tickets. Without that people may not even have bothered.
I am just a human supremacist.
Agree. What matters is quality, regardless of what/who made it.
On the other hand, it is funny to see tech people here, who work on implementing technology, taking an approach so... Luddite and "anti-tech".
I think 90% of the comments were about the AI part rather than the actual product, which seems very cool and definitely took a lot of effort to put together.
Zig is just C with a marketing push. Most developers already know C.
I really don't need this kind of self-enlightenment rubbish.
What if I read the whole book and felt no change?
I think I understand SoA just fine.
Zig also encourages you to "think like a computer" (also an explicit goal stated by Andrew) even more than C does on modern machines, given things like real vectors instead of relying on auto vectorization, the lack of a standard global allocator, and the lack of implicit buffering on standard io functions.
I would definitely put Zig on the list of languages that made me think about programming differently.
Zig _the language_ barely does any of the heavy lifting on this front. The allocator and io stories are both just stdlib interfaces. Really the language just exists to facilitate the great toolchain and stdlib. From my experience the stdlib seems to make all the right choices, and the only time it doesn't is when the API was quickly created to get things working, but hasn't been revisited since.
A great case study of the stdlib being almost perfect is SinglyLinkedList [1]. Many other languages implement it as a container, but Zig has opted to implement it as an intrusively embedded element. This might confuse a beginner who would expect SinglyLinkedList(T) instead, but it has implications surrounding allocation, and it turns out that embedding it gives you a more powerful API. And of course all operations are defined with performance in mind: prepend is given to you since it's cheap, but if you want to append you have to implement it yourself (it's a one-liner, but clearly more expensive to the reader).
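For anyone who hasn't seen the intrusive style, a minimal sketch (assuming a recent Zig where std.SinglyLinkedList is the non-generic version described above; the Task type is made up for illustration):

    const std = @import("std");

    const Task = struct {
        id: u32,
        // the list node lives inside your own struct; no container allocation
        node: std.SinglyLinkedList.Node = .{},
    };

    pub fn main() void {
        var list: std.SinglyLinkedList = .{};
        var a: Task = .{ .id = 1 };
        var b: Task = .{ .id = 2 };

        // prepend is O(1), which is why it's the operation the API hands you
        list.prepend(&a.node);
        list.prepend(&b.node);

        var it = list.first;
        while (it) |n| : (it = n.next) {
            // recover the enclosing Task from the embedded node
            const task: *Task = @fieldParentPtr("node", n);
            std.debug.print("task {d}\n", .{task.id});
        }
    }

Note that no allocator appears anywhere: the nodes live wherever the Tasks live, which is exactly the implication around allocation mentioned above.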
Little decisions add up to make the language feel great to use and genuinely impressive for learning new things.
[1] https://ziglang.org/documentation/master/std/#std.SinglyLink...
    const a = @Vector(4, i32){ 1, 2, 3, 4 };
    const b = @Vector(4, i32){ 5, 6, 7, 8 };
    const c = a + b;

This compiles to this x86-64 code:

    vmovdqa xmm0, xmmword ptr [rip + .LCPI5_0]
    vmovdqa xmmword ptr [rbp - 48], xmm0
    vmovdqa xmm0, xmmword ptr [rip + .LCPI5_1]
    vmovdqa xmmword ptr [rbp - 32], xmm0
    vmovdqa xmm0, xmmword ptr [rip + .LCPI5_2]
    vmovdqa xmmword ptr [rbp - 16], xmm0
C does not provide a vector primitive to expose the vector units in modern machines; C compilers rely on analyzing loops to see when auto-vectorization is applicable, and auto-vectorization is a higher level of abstraction than directly exposing vector primitives.

Regarding the lack of a standard global allocator and the lack of implicit buffering on standard IO functions: these are simply features of the Zig standard library that are true of computers (computers do not have a standard global allocator, nor do they implicitly buffer IO) but are not features of the C standard library, so in C you are not encouraged to use custom allocators or explicit buffering.
And agree with allocators; in C I always considered using custom allocators but never really needed to. Having them just available in the zig std means I actually use them. The testing allocator is particularly useful IMO.
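For reference, this is about all it takes (a minimal sketch; std.testing.allocator has been stable across recent versions):

    const std = @import("std");

    test "leak-checked allocation" {
        // std.testing.allocator fails the test if anything leaks
        // or is double-freed by the time the test ends
        const buf = try std.testing.allocator.alloc(u8, 32);
        defer std.testing.allocator.free(buf);
        try std.testing.expect(buf.len == 32);
    }

Comment out the defer and `zig test` reports the leak with a stack trace.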
Never used Go but if it's Zig-like I might give it a shot! Thanks!
- structs and functions are the main means of composition
- the pattern of: allocate a resource, immediately defer deallocating the resource (see the sketch below the footnotes)
- errors are values, handled very similarly (multiple return values vs error unions)
- built in json <-> struct support
- especially with the 0.16.0 Io changes in Zig, the concurrency story (std.Io.async[0] is equivalent to the go keyword[1], std.Io.Queue[2] is equivalent to channels[3], std.Io.select[4] is equivalent to the select keyword[5])
- batteries included but not sprawling stdlib
- git based dependencies
- built in testing
[0] https://ziglang.org/documentation/master/std/#std.Io.async

[1] https://go.dev/tour/concurrency/1
[2] https://ziglang.org/documentation/master/std/#std.Io.Queue
[3] https://go.dev/tour/concurrency/2
[4] https://ziglang.org/documentation/master/std/#std.Io.select
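To make the allocate-then-defer bullet concrete, a minimal sketch (using std.heap.page_allocator, since other allocator names have shifted between Zig versions):

    const std = @import("std");

    pub fn main() !void {
        const allocator = std.heap.page_allocator;

        // allocate the resource, and immediately pair it with its cleanup
        const buf = try allocator.alloc(u8, 64);
        defer allocator.free(buf);

        @memset(buf, 'z');
        std.debug.print("{s}\n", .{buf});
    }

The Go analogue would be a deferred Close() right after opening a file; one difference is that Zig's defer runs at scope exit rather than function exit.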
DHH does a great job of clarifying this during his podcast with Lex Fridman. The "why" is immediately clear and one can decide for themselves if it's what they're looking for. I have not yet seen a "why" for Zig.
Convincing enough?
Learning LISP, Fortran, APL, Perl, or really any language that is different from what you’re used to, will also do this for you.
/S
I'm not sure what they expect, but to me Zig looks very much like C with a modern standard lib and slightly different syntax. This isn't groundbreaking, nor a thought paradigm that would be that novel to most systems engineers, the way, for example, OCaml could be. Stuff like this alienates people who want a technical justification for the use of a language.
As a general point I'd like to state that I don't think it really matters what "people" do when you're learning for yourself. In the grand scheme of things approximately no one uses the BEAM, but this doesn't mean that learning how to use it is somehow pointless.
What are those aspects?
newb to this area.
- Leaning on a pre-emptive scheduler to maintain order even in the presence of ridiculous amounts of threads ("processes" on the BEAM) running
- Using supervision trees to specify how and when processes and their dependents should be restarted
- Using `gen_server` processes as a standard template for how a thread should be running
There's more to mine from using the BEAM, but I think the above are some of the most important aspects. The first two I've never found to be fully replicated anywhere other than in OTP/BEAM. You don't need them, but once you're bought into the BEAM they're incredibly nice to have.
I looked at array languages briefly, and my impression was that "ooh, this is just NumPy but weirder."
And 6502 assembly. ;)
And SNOBOL.
And Icon.
And ...
Rust is the small, beautiful language hiding inside of Modern C++. Ownership isn't new. It's the core tenet of RAII. Rust just pulls it out of the backwards-compatible kitchen sink and builds it into the type system. Rust is worth learning just so that you can fully experience that lens of software development.
Zig is Modern C development encapsulated in a new language. Most importantly, it dodges Rust and C++'s biggest mistake: not passing allocators into containers and functions. All realtime development has to rewrite its entire standard library as a result, as with the EASTL.
On top of the great standard library design, you get comptime, native build scripts, (err)defer, error sets, builtin simd, and tons of other small but important ideas. It's just a really good language that knows exactly what it is and who its audience is.
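Of those, comptime is the easiest to show in a few lines. A minimal sketch (assuming Zig 0.11+ for the multi-item for syntax):

    const std = @import("std");

    // the block runs during compilation, so the table is baked into the binary
    const squares: [16]u32 = blk: {
        var t: [16]u32 = undefined;
        for (&t, 0..) |*v, i| v.* = @intCast(i * i);
        break :blk t;
    };

    pub fn main() void {
        std.debug.print("{d}\n", .{squares[7]}); // prints 49
    }

The same machinery covers generics, macros, and much of what C++ templates are used for, with ordinary Zig code.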
No it's not. Rust has roots in functional languages. It is completely orthogonal to C++.
https://graydon2.dreamwidth.org/307291.html
And on slide #4, he mentions that "C++ is well past expiration date" :
https://venge.net/graydon/talks/intro-talk-2.pdf
It's possible that Graydon's earliest private versions of Rust the 4 years prior to that pdf were an OCaml-inspired language but it's clear that once the team of C++ programmers at Mozilla started adding their influences, they wanted it to be a cleaner version of C++. That's also how the rest of the industry views it.
Alternative yes, derivative no. Rust doesn't approach C++'s metaprogramming features, and it probably shouldn't given how it seems to be used. It's slightly self-serving for browser devs to claim Rust solves all relevant problems in their domain and therefore eclipses C++, but to me in the scientific and financial space it's a better C, making tradeoffs I don't see as particularly relevant.
I say this as a past contributor to the Rust std lib.
Zig, D, and C are also alternatives to C++. It’s a class of languages that have zero cost abstractions.
Rust is NOT a beautiful language hiding inside of C++. It is not an evolution of C++. I’m pointing out that what you said is objectively wrong.
Can rust replace C++ as a programming language that has a fast performance profile due to zero cost abstractions? Yes. In the same way that Haskell can replace Python, yes it can.
Funny. This was a great sell to me. I wonder why it isn’t the blurb. Maybe it isn’t a great sell to others.
The problem for me with so many of these languages is that they’re always eager to teach you how to write a loop when I couldn’t care less and would rather see the juice.
However, nowadays with comprehensive books like this, LLM tools can better produce good results for me as I try it out.
Thank you.
Zig is nice too, but it's not that.
Zig, on the other hand, specifically addresses syntax shortcomings in parts of C. And it does it well. The claim that Rust makes C safer because it's more readable applies to Zig more than it does to Rust.
I feel like the reason the Rust zealots lobby like crazy to embed Rust everywhere is twofold. One is that they genuinely believe in it, and the other is that they know that if other languages that address one of the main Rust claims without all the cruft gain popularity, they lose the chance of being permanently embedded in places like the kernel. Because once they're in, it's a decade-long job market.
First of all, I'm really opposed to saying "the kernel". I am sure you're talking about the Linux kernel, but there are other kernels (BSD, Windows etc.) that are certainly big enough to not call it "the" kernel, and that may also have their own completely separate "rust-stories".
Secondly, I think the logic behind this makes no sense, primarily because Rust at this point is 10 years old from stable and almost 20 years old from initial release; the adoption into the Linux kernel wasn't exactly rushed. Even if it was, why would Rust adoption in the Linux kernel exclude adoption of another language as well, or a switch to another, if it's better? The fact that Rust was accepted at all to begin with aside from C disproves the assumption, because clearly that kernel is open for "better" languages.
The _simplest_ explanation for why Rust has succeeded is that it solves actual problems, not that "zealots" are lobbying for it to ensure they "have a job".
Rust is not stable even today! There is no spec, no alternative implementations, no test suite... "Stable" means "whatever the current compiler compiles"! Existing code may stop compiling any day....

Maybe in 10 years it may become stable, like other "boring" languages (Golang and Java).
Rust stability is why Linus opposes its integration into kernel.
I'm waiting for gccrs to start using the language, actually.
But regardless of how much one likes Zig, it addresses none of the problems that Rust seeks to solve. It's not a replacement for Rust at all, and isn't suitable for any of the domains where Rust excels.
That's a pretty bold claim since Zig is specifically designed for systems programming, low level stuff, network services, databases (think Tigerbeetle). It's not memory safe like Rust is, but it comes with constructs that make it simple to build largely memory safe programs.
Right, this is the specific important thing that Rust does that Zig doesn't (with the caveat that Rust includes the `unsafe` mechanism - as a marked, non-default option - specifically to allow for necessary low-level memory manipulation that can't be checked for correctness by the compiler). Being able to guarantee that something can't happen is more valuable than making it simple to do something correctly most of the time.
So Zig would fail that, but then you could also consider C++ unsuitable for production software - and we know it clearly is still suitable.
I predict Zig will just become more and more popular (and with better, though not as complete, memory safety), and be applied to mission-critical infra.
Introducing a language with the same safety as Modula-2 or Object Pascal would have made sense in the 1990s; nowadays, with improved type systems making the transition from academia into the mainstream, we (the industry) know better.
It is not only Rust, it is Linear Haskell, OCaml effects, Swift 6 ownership model, Ada/SPARK, Chapel,....
Speak for yourself, I never want to write C++ ever again in my life.
I'm not a huge fan of the language of responsibility. I don't think there should be a law banning the use of C or C++ or any other programming language on account of it being unsafe, I don't think that anyone who writes in C/C++ is inherently acting immorally, etc.
What I do think is that Rust is a better-designed language than C or C++ and offers a bunch of affordances, including but not limited to the borrow checker, unsafe mode, the type system, cargo, etc. that make it easier and more fun for programmers to use to write correct and performant software, most of the time in most cases. I think projects that are currently using C/C++ should seriously consider switching off of them to something else, and Rust is an excellent candidate but not the only candidate.
I think Zig is also almost certainly a better language than C/C++ in every respect (I hedge more here because I'm less familiar with Zig, and because it's still being developed). Not having as strong memory safety guarantees as Rust is disappointing, and I expect that it will result in Zig-written software being somewhat buggier than Rust-written software over the long term. But I am not so confident that I am correct about this, or that Zig won't bring additional benefits Rust doesn't have, that I would argue that people shouldn't use Zig or work on languages like Zig.
And while I don't have enough experience with Rust to claim this first hand, my understanding is that writing correct unsafe Rust code is at least an order of magnitude harder than writing correct Zig code due to all of the properties/invariants that you have to preserve. So it comes with serious drawbacks, it's not just a quick "opt out of the safety for a bit" switch.
> Being able to guarantee that something can't happen is more valuable than making it simple to do something correctly most of the time.
Of course, all other things being equal, but they're not.
How do you make such boldly dismissive assertions if you don't have enough experience with Rust? You are talking as if these invariants are some sort of requirements/constraints that the language imposes on the programmer. They're not. It's a well-known guideline/paradigm meant to contain any memory safety bugs within the unsafe blocks. Most of the invariants are specific to the problem at hand, and not to the programming language. They are conditions that must be met in any language - C and Zig are no exceptions. Failure to adhere to them will land you in trouble, no matter what sort of safety your language guarantees. They are often talked about in the context of Rust because the ones related to memory-unsafe operations can be tackled and managed within the small unsafe blocks, instead of sprawling throughout the code base.
> So it comes with serious drawbacks, it's not just a quick "opt out of the safety for a bit" switch.
Rust is not the ultimate solution to every problem in the world. But this sort of exaggeration and hyperbole is misleading and doesn't help anyone choose any better.
As I said that's my understanding from talking and listening to people who have a lot of experience with Rust, Zig, and C.
So generally speaking, are you saying that writing correct unsafe Rust is only as difficult as writing correct Zig code and not, as I understand it to be, significantly more difficult?
Yes. That's correct. The point is, unsafe Rust is pretty unremarkable. Safe Rust doesn't just do borrow checking of references. It also forbids certain risky actions like raw pointer indirection or calling unsafe functions (across FFI, for example) [1]. Unsafe Rust just enables those features. That's it! Unsafe Rust doesn't disable anything or impose any additional restrictions. Contrary to a popular misconception, it doesn't even disable the borrow checker. Unsafe Rust actually gives you extra freedoms on top of what you already have (including the restrictions).
And now you have to be careful because Rust just gave you the footgun you asked for. In a manually memory-managed language, you'd get fatigued by the constant worry about this footgun. In Rust, that worry is limited to those unsafe blocks, giving you the luxury to work out strategies to avoid shooting yourself in the foot. The 'invariants' are that strategy. You describe the conditions under which the code is valid. Then you enforce them there, so that you can breathe freely in Safe Rust.
[1] https://doc.rust-lang.org/nomicon/what-unsafe-does.html#what...
If references are involved, Rust becomes harder, because the precise semantics are not decided or documented. The semantics aren't complicated; they're along the lines of "while a reference is live, you can't perform a conflicting access from a pointer or reference not derived from that reference". But there aren't good resources for learning this or clarifying the precise details. This area is an active work-in-progress; there is a subteam of the Rust project led by Ralf Jung (https://www.ralfj.de/blog/) working on fully and clearly defining the language's operational semantics, and they are doing an excellent job of it.
When it comes to Zig, the precise rules and semantics of the memory model are much less clear than C. There's essentially no documentation, and if you search GitHub issues a lot of it is undecided and not actively being worked on. This is completely understandable given Zig's stage in development, but for me "how easy it is to write UB-free code" boils down to "how easy is it to understand the rules and apply them correctly", and so to me Zig is very hard to write correctly if you can't even figure out what "correct" is.
Once Zig and Rust both have their memory models fleshed out, I hope Zig lands somewhere comparable to where Rust-without-references is today, and I hope that Rust-with-references ends up being only a little bit harder (and still easier than C).
In my opinion Zig does not move the needle on real safety when the codebase becomes sufficiently complex.
[1]: https://security.googleblog.com/2025/11/rust-in-android-move...
[2]: https://github.com/oven-sh/bun/issues?q=segfault%20OR%20segm...
This near-miss inevitably raises the question: "If Rust can have memory safety vulnerabilities, then what’s the point?"
The point is that the density is drastically lower. So much lower that it represents a major shift in security posture. Based on our near-miss, we can make a conservative estimate. With roughly 5 million lines of Rust in the Android platform and one potential memory safety vulnerability found (and fixed pre-release), our estimated vulnerability density for Rust is 0.2 vuln per 1 million lines (MLOC).
Our historical data for C and C++ shows a density of closer to 1,000 memory safety vulnerabilities per MLOC. Our Rust code is currently tracking at a density orders of magnitude lower: a more than 1000x reduction.
https://security.googleblog.com/2025/11/rust-in-android-move...

What we ultimately care about is how many preventable, serious defects sneak into production code, particularly those concerning data security, integrity, and physical safety. The only statistic we should all care about is how many serious CVEs end up in the final product; everything else is just personal preference.
Eliminating a segfault when `--help` is provided twice is nice, but it didn't fix a security vulnerability so using it to bolster the security argument is dishonest.
I think this is hard to generalize about. There are many instances where one might want to do unsafe memory operations in rust, with different correctness implications. I am suspicious that in Zig you do actually have to preserve all the same properties and invariants, and there's just nothing telling you if you did so or not or even what all of them are, so you don't know if your code is correct or not.
There is some overlap but that's still different. The Zig approach to memory safety is to make everything explicit, it is good in a constrained environment typical of embedded programming. The Rust approach is the opposite, you don't really see what is happening, but there are mechanisms to keep your safe. It is good for complex software with lots of moving parts in an unconstrained environment, like a browser.
For a footgun analogy, one will hand you a gun that will never go off unless you aim and pull the trigger, so you can shoot your foot, but no sane person will. It is a good sniper rifle. The Rust gun can go off at any time, even when you don't expect it, but it is designed in such a way that it will never happen when it is pointed at your foot, even if you aim it there. It is a good machine gun.
Pray tell, with Rust already being used in kernels, drivers, and embedded what makes Zig better suited for low-level systems?
More chance to explode a UB in your hand? For that, there is C.
You can use Rust in kernel/embedded code, you can also use C++ (I did) and even Java! but most prefer to use C, and I think that Zig is a better alternative to C for those in the field.
There is still one huge drawback with Zig, and that's maturity. Zig is still in beta, and the closer you get to the metal, the more that tends to matter. Hardware projects typically have way longer life cycles, and the general philosophy is "if it ain't broke, don't fix it". Rust is not as mature as C by far (there is a reason C is still king), but at least it is out of beta and is seeing significant production use.
I remember when I talked about Zig with the CTO of the embedded branch of my company. His reaction was telling: "I am happy to hear someone mention Zig, it is a very interesting language and it is definitely on my watch list, but it is not mature enough to invest in." He was happy that I mentioned Zig because in the company, the higher-ups are all about Rust because of the hype, even though we do very little of it, BTW; it is still mostly C and C++. And yeah, hype is important: customers have heard about Rust as some magical tech that will make the code bug-free, and they haven't heard about Zig, so Rust sells better. In the end, they go for C anyway.
C interop and arena allocators aren't hard requirements for a kernel language. In fact, why would a kernel in <INSERT LANG> need to talk to C? You need it to talk to Assembly/Machine code, not C.
It helps if it can talk to/from C but it's not a requirement.
> customers heard about Rust as some magical tech that will make the code bug-free
That's on customers not having a clear picture. What we can look at experimentally is that yes, Rust will remove a whole suite of bugs, and no, Zig won't help there. Is Zig better than C? Sure, but so is C++ and it still sucks at it.
Like, the few big things wrong with Rust are probably compilation speed and async needing more tweaks (pinned-places ergonomics, linear types to deal with async drop...) to make it way better.
https://github.com/oven-sh/bun/issues?q=segfault%20OR%20segm...
It should be noted that most of those issues are created by opening a link that Bun creates when it crashes; they are not yet reviewed/confirmed, and most likely a lot of them are duplicates of the same issue.
I don't know if it's my lack of practice, but I never felt the same about, say, Rust's syntax, or the syntax of any other language for that matter.
I'm no Rust fan, but beauty of a syntax is always in the eye of the beholder.
I personally find Go, C++ and Python's syntax beautiful. All can be written in very explicit or expressive forms. On the other hand, you can hide complexity to a point.
If you are going to do complex things in a compact space, you'll asymptotically approach Perl or PCRE. It's maths.
All code is maths, BTW.
I don’t see where the comment you’re replying to does that (was it edited?). Their comment says nothing about aesthetics.
> Rust is the small, beautiful language hiding inside of Modern C++
Needless flame bait follows.
That's true for people who don't read and think about the code they write. For people who think from the perspective of a computer, Rust is "the same checks, but forced by the compiler".
Make no mistake, to err is human, but Rust doesn't excite me that much.
What it has achieved is making affine types something mainstream developers would care about.
The same outcome can be achieved via affine types, linear types, effects, dependent types, regions, proofs, among many other CS research in type systems.
Which is why following Rust's success, plenty of managed languages are now going through the evolution step to combine automatic resource management with improved type systems.
Taking the one that best approaches their current design.
No? It's more akin to flow analysis with special generic types called lifetimes.
> The same outcome can be achieved via affine types, linear types, effects, dependent types, regions, proofs, among many other CS research in type systems.
Sure, and sounds, colors, and instruments are the same, but they are mixed to create an audio-video song. I'm not saying that what Rust did came about ex nihilo, without precedent.
But having it all unified uniquely the way Rust did it is frankly revolutionary. Until now, people assumed if you want memory safety, you have to add a GC (tracing or RC). Or alternatively write extensive proofs about types like Ada/Spark.
Rather, basing its entire personality around this philosophy is Zig's biggest mistake. If you want to pass around allocators in C++ or Rust, you can just go ahead and do that. But the reason people don't isn't because it's impossible in those languages, it's because the overwhelming majority of the time it's a lot of ceremony for no benefit.
Like, surely people see that in C itself there's nothing stopping anyone from passing around allocators, and yet almost nobody ever does. Ever wonder why that is?
> Most importantly, it dodges Rust and C++'s biggest mistake, not passing allocators into containers and functions
I think that is just a symptom of a broader mistake made by C++ and shared by Rust, which is a belief (that was, perhaps, reasonable in the eighties) that we could and should have a language that's good for both low-level and high-level programming, and that resulted in compromises that disappoint both goals.
To be clear, I really like Zig. But C is also a relatively simple language to both understand and implement because it doesn't have many features, and the features it does have aren't overly clever. Zig is a pretty easy language to learn, but the presence of comptime ratchets up the implementation difficulty significantly.
A true C successor might be something like Odin. I am admittedly not as tuned into the Odin language as I am Zig, but I get the impression that despite being started six months after Zig, the language is mostly fully implemented as envisioned, and most of the work is now spent polishing the compiler and building out the standard library, tooling and package ecosystem.
That being said, I suppose my ultimate wonder is how small a Zig implementation could possibly be, if code size and implementation simplicity was the priority. In other words, could a hypothetical version of the Zig language have existed in the 80's or 90's, or was such a language simply out of reach of the computers of the time.
I don't think there's any programming language today that couldn't have been implemented in the 90s, unless the language relies on LLMs.
To my understanding (and I still haven’t used Zig) the “comptime” inherently (for sufficiently complex cases) leads to library code that needs to be actively tested for potential client use since the instantiation might fail. Which is not the case for the strict subset of “compile time” functionality that Java generics and whatnot bring.
I don’t want that in any “the new X” language. Maybe for experimental languages. But not for Rust or Zig or any other that tries to improve on the mainstream (of whatever nice) status quo.
True, like templates in C++ or macros in C or Rust. Although the code is "tested" at compile time, so at worst your compilation will fail.
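A minimal sketch of what "instantiation might fail" looks like in Zig (illustrative function, not from any library):

    const std = @import("std");

    // the body is only type-checked per instantiation, like a C++ template
    fn sumAll(items: anytype) u64 {
        var total: u64 = 0;
        for (items) |item| total += item; // valid only if item coerces to u64
        return total;
    }

    pub fn main() void {
        std.debug.print("{d}\n", .{sumAll([_]u8{ 1, 2, 3 })});
        // sumAll([_][]const u8{"oops"}); // errors here, at the client's call site
    }

The library author never sees that second call, so unless they tested with a slice-of-strings type, the failure surfaces in client code.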
> I don’t want that in any “the new X” language
Okay, and I don't want any problem of any kind in my language, but unfortunately, there are tradeoffs in programming language design. So the question is what you're getting in exchange for this problem. The answer is that you're getting a language that's both small and easy to inspect and understand. So you can pick having other problems in exchange for not having this one, but you can't pick no problems at all. In fact, you'll often get some variant of this very problem.
In Java, you can get by with high-level abstractions because we have a JIT, but performance in languages that are compiled AOT is more complicated. So, in addition to generics, low-level languages have other features that are not needed in Java. C++ has templates, which are a little more general than generics, but they can fail to instantiate, too. It also has preprocessor macros that can fail to compile in a client program. Rust has ordinary generics, which are checked once, but since that's not enough for a low-level language, it also has macros, and those can also fail to expand correctly.
So in practice, you either have one feature that can fail to compile in the client, or you can have the functionality split among multiple features, resulting in a more complicated language, and still have some of those features exhibit the same problem.
Why? Because that leads to better ergonomics for me, in my experience. When library authors can polish the interface with the least powerful mechanism with the best guarantees, I can use it, misuse it, and get decent error messages.
What I want out of partial evaluation is just the boring 90’s technology of generalized “constant folding”.[1] I in principle don’t care if it is used to implement other things... as long as I don’t have surprising instantiation problems when using library code that perhaps the library author did not anticipate.
[1]: And Rust’s “const” approach is probably too limited at this stage. For my tastes. But the fallout of generalizing is not my problem so who am I to judge.
> Okay, and I don't want any problem of any kind in my language, but unfortunately, there are tradeoffs in programming language design.
I see.
> So in practice, you either have one feature that can fail to compile in the client, or you can have the functionality split among multiple features, resulting in a more complicated language,
In my experience Rust being complicated is more of a problem for rustc contributors than it is for me.
> and still have some of those features exhibit the same problem.
Which you only use when you need them.
(I of course indirectly use macros since the standard library is full of them. At least those are nice enough to use. But I might have gotten some weird expansions before, though?)
That will have to do until there comes along a language where you can write anything interesting as library code and still expose a nice to use interface.
It's not that that single mechanism can fail in all situations. It's very unlikely to fail to compile in situations where the complicated language always compiles, and more likely to fail to compile when used for more complicated things, where macros may fail to compile, too.
Its probability of compilation failure is about the same as that of C++ templates [1]. Yeah, I've seen compilation bugs in templates, but I don't think that's on any C++ programmer's top-ten problem list (and those bugs usually appear when you start doing stranger things). Given that there can be runtime failures, which are far more dangerous than compilation failures and cannot be prevented, the fact that the much less problematic compilation failures cannot always be prevented is a pretty small deal.
But okay, we all prefer different tradeoffs. That's why different languages choose design philosophies that appeal to different people.
[1]: It's basically a generalisation of the same idea, only with better error messages and much simpler code.
Rust and Zig aren't merely very good, they are better than the alternatives when you need a "zero cost abstraction" option.
But sure, go ahead and dismiss it as a cult if it makes you feel better. I bet you were one of the people who dismissed the iPhone as "just apple fanbois" back in the day. Won't amount to anything.
Original quote:
> [Learning Zig] is about fundamentally changing how you think about software.
This is not the same. Something like it could be said about Lisp, Forth, Prolog, Smalltalk, Fractran or APL, even Brainfuck, not Rust or Zig. No, thinking about object lifetimes or allocators is not "fundamental change" in how to think about software. It is bread and butter of thinking about software. Therefore I believe this is cultish behavior - you assign extraordinary properties to something rather dull and not that much different from other mainstream languages.
> I bet you were one of the people who dismissed the iPhone as "just apple fanbois" back in the day
Wrong. I still dismiss people praising Apple, swallowing some bullshit about "vision" etc. as fanboys.
PS: I'm stealing it, by the way.
I wouldn't say that about OCaml either really though. It's not wildly different in the way that e.g. Lean's type system, or Rust's borrow checker or Haskell's purity is.
I'm not a D programmer though so I could be wrong.
Look up static if - AST manipulation in native D code.
> I'm not sure what they expect, but to me Zig looks very much like C
Yes. I think people should sincerely stop with this kind of wording.
That makes Zig look like some kind of cult.
Technically speaking, Zig democratized the concept of imperative compile time meta-programming (which is an excellent thing).
For everything else, this is mainly reuse and cherry pick from other languages.
I'm not saying everyone should like Zig, but its design is revolutionary.
One comment: about the syntax highlighting, the dark blue for keywords against a black background is very difficult to read. And if you opt for the white background, the text becomes off-white/grey, which again is very difficult to read.
If I want to do system or network programming, my current stack already covers those needs — and adding Rust would probably make it even more future-proof. But Zig? This is a genuine question, because the "Zig book" doesn’t give me much insight into what are the real use cases for Zig.
Enough rant; now back to some reasons for choosing Zig:
- Cross platform tools with tiny binaries (Zig's built in cross compilation avoids the complex setup needed with C)
- System utilities or daemons (explicit error handling instead of silent patterns common in C)
- Embedded or bare metal work (predictable rules and fewer footguns than raw C)
- Interfacing with existing C libraries (direct header import without manual binding code; see the sketch after this list)
- Build and deployment tooling (single build system that replaces Make and extra scripts)
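For the C-interop bullet, a minimal sketch (assuming a current Zig release where @cImport is still in the language; link libc with `zig run demo.zig -lc`):

    const std = @import("std");

    // pull the C header straight in; translate-c generates the bindings
    const c = @cImport({
        @cInclude("math.h");
    });

    pub fn main() void {
        std.debug.print("{d}\n", .{c.cos(0.0)});
    }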
For my personal usage, I'm working on replacing Docker builds for some Go projects that rely heavily on CGO by using `zig cc`. I'm not using the Zig language itself, but this could be considered one of its use cases.

Hm, I can see a good use case when we want reproducible builds of Go packages, including their C extensions. Is that your use case, or are you aiming for multi-environment support of your compiled "CGO extensions"?
I need to bundle a lot of C libraries, some using dynamic linking and some using static linking, and I need to deploy them on different operating systems, including some that are difficult to work with, like RHEL. Right now the builds are slow because I use a separate Dockerfile for each platform and then copy the binary back to the host. With zig cc I could build binaries for different platforms and architectures without using Docker.
In my experience with Zig, you have the feeling of thinking about the systems engineering, using the language to help you implement it, without resorting to all sorts of language idioms and complexity. It feels more intuitive in a way, given it tries to stay simple and get out of your way. It's a more "unsurprising" programming language in terms of what you end up getting after you write the code; you understand exactly how it will run.

In terms of ecosystem, let's say you have the Java lunch, the C lunch, and the C++ lunch (established languages in their domains). Go is eating some of the Java (C#, etc.) lunch and, in smaller domains, some of the C++ lunch. Rust is in the same heavyweight category as Go, but it can eat more of the C++ lunch than Go ever could.

Now Zig will be able to compete in ways where it can really be an alternative to C's core values, which other programming languages have failed to achieve. So it will be aimed at the things C and C++ are doing now, where Go and Rust won't be good candidates.

If you've used Rust long enough, you can see that while it can cover almost all of that ground, it's not a good fit for the lower-level stuff, or at least not without some compromises in either performance or complexity (affecting productivity). So it's more in the same family as C++ in terms of what you pay for (again, nothing wrong with that; it's just that some complex codebases will need a good amount of man-hours, along the same lines as C++).

Don't get me wrong, Rust can be good at low-level stuff too; it's just that some of its choices make you, as a developer, pay a price for those niceties when you need to get your hands dirty in specific domains.

With Zig you feel more focused on the machine, with fewer abstractions, as in C, but with enough goodies to make even the most die-hard C developer think about using it (something C++ and Rust never managed to do).

So I think Zig will have its place in the sun, as Rust does. But I see Rust taking more of the place where Java used to be (together with Go), plus some things that were made in C++, while Zig will be more focused on systems and low-level stuff.

Modern C++ will still be around, but Rust and Zig will be used more and more where languages like C and C++ used to be the only real contenders, which is quite good in my POV.

What will happen is that Rust and Zig programmers might overlap and offer tools in the same areas (see Bun and Deno, for instance), but the tools will excel in their own ways, and with time it will become clearer which domains Rust and Zig are each better at.
Written by ChatGPT?
With only half serious intent, I think only the real wizard types, like Jarred Sumner (Bun) and Mitchell Hashimoto (Ghostty), who understand both low level systems and higher level languages, should be writing big tools in Zig. The tough part in the next few years will not be building things, it will be keeping them alive if the authors step away or the ecosystems move in a different direction.
Not an expert but Zig seems like a modern C - you manage memory yourself. I guess if you want more modern features than C offers, and actively don't want the type-system sort of features that Zig has (or are grumpy about compile times, etc) then it's there for you to try!
Neither of those things is control flow, and yet again I'm reading a pro-Zig text taking a dig at Go without any substance to the criticism.
Also funny having a dig at goroutines when Zig is all over the place with its async implementation.
I’m not sure how much value is to be had here, and it’s unfortunate the author wasn’t honest about how it was created.
I wish I hadn't submitted this so quickly, but I was excited about the new resource, and the chapters I dug into looked good and accurate.
I worry about whether this will be maintained, if there are hallucinations, and if it’s worth investing time into.
> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.
The author could of course be lying. But why would you use AI and then very explicitly call out that you’re not using AI?
There are too many things off about the origin and author not to be suspicious of it. I'm not sure what the motivation was, but LLM use seems likely. I do think they used the Zig source code heavily, and put together a pipeline of some sort feeding relevant context into the LLM, or maybe just Codex or whatever, instructed to read in the source.
It seems like it had to take quite a bit of effort to make, and is interesting on its own. And I would trust it more if I knew how it was made (LLMs or not).
As another suspicious data point see this issue by the author: https://github.com/microsoft/vscode/issues/272725
Edit: https://news.ycombinator.com/item?id=45952581 found some concrete issues
Plenty. I assumed that the code examples had been cleaned up manually, so instead I looked at a few random "Caveats, alternatives, edge cases" sections. These contain errors typically made by LLMs, such as suggesting features that don't exist (std.mem.terminated), are non-public (argvToScriptCommandLineWindows), or were removed (std.BoundedArray). These sections also surface irrelevant stdlib and compiler implementation details.
We're used to errata and fixing up stuff produced by humans, so if we can fix this resource, it might actually be valuable and more useful than anything that existed before it. Maybe.
One of my things with AI is that if we assume it is there to replace humans, we are always going to find it disappointing. If we use it as a tool to augment, we might find it very useful.
A colleague used to describe it (long before GenAI, when we were talking about technology automation more generally) as following: "we're not trying to build a super intelligent killer robot to replace Deidre in accounts. Deidre knows things. We just want to give her better tools".
So, it seems like this needs some editing, but it still has value if we want it to have value. I'd rather see it fixed than thrown away (I'm biased: I want to learn systems programming in Zig and want a good resource to do so). Yes, the author should have been more upfront about it and asked for reviewers, but we have it now. What to do?
They should not have lied about it. That's not someone I would want to trust and support. There's probably a good reason why they decided to stay anonymous.
Personally, I would want no involvement in a project where the maintainer is this manipulative and I would find it a tragedy if any people contributed to their project.
1. There is no evidence this is AI generated. The author claims it wasn't, and on the specific issue you cite, he explains why he's struggling with understanding it, even if the answer is "obvious" to most people here.
2. Even if it were AI generated, that does not automatically make it worthless. In fact, this looks pretty decent as a resource. Producing learning material is one of the few areas where we can be reasonably confident AI adds value, if the tools are used carefully. It's a lot better at that than at producing working software, because synthesising knowledge seen elsewhere and moving it into a new, relatable paradigm (which is what LLMs do, and excel at) is the job of teaching.
3. If it's maintained or not is neither here nor there - can it provide value to somebody right now, today? If yes, it's worth sharing today. It might not be in 6 months.
4. If there are hallucinations, we'll figure them out, prove the claim that it is AI generated one way or the other, and decide the overall value. If there is one hallucination per paragraph, it's a problem. If it's one every 5 chapters, it might be, but probably isn't. If it's one in 62 chapters, it's beating the error rate of human writers by quite some way.
Yes, the GitHub history looks "off", but maybe they didn't want to develop in public and just wanted to get a clean v1.0 out there. Maybe it was all AI generated and they're hiding. I'm not sure it matters, to be honest.
But I do find it grating that every time somebody even suspects an LLM was involved, there is a rush of upvotes for "calling it out". This isn't rational thinking. It's not using data to make decisions, it's not logical to assume all LLM-assisted writing is slop (even if some of it is), and it's actually not helpful in this case to somebody who is keen to learn Zig and wants to decide if this resource is useful or not: there are many programming tutorials written by human experts that are utterly useless, and this might be a lot better.
There is, actually. You may copy the introduction into Pangram and it will say 100% AI generated.
It does make it automatically worthless if the author claims it's hand made. How am I supposed to trust this author if they just lie about things upfront? What worth does learning material have if it's written by a liar? How can I be sure the author isn't just lying with lots of information throughout the book?
That didn't happen.
And if it did, it wasn't that bad.
And if it was, that's not a big deal.
And if it is, that's not my fault.
And if it was, I didn't mean it.
And if I did, you deserved it.

Even if it was totally legitimate, the "landing page" (its design) and the headline ("Learning Zig is not just about adding a language to your resume. It is about fundamentally changing how you think about software."?????) should discredit it immediately.
From the readme.
See the first screenshot in the exchange, with newlines breaking up the label and block; compare https://zig.guide/language-basics/labelled-blocks/ for the proper formatting.
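For reference, a correctly formatted labelled block is compact; a minimal example in the style of that zig.guide page:

    const std = @import("std");

    test "labelled block" {
        // The label sits directly before the brace, and `break :blk value`
        // is what gives the block its result.
        const count = blk: {
            var sum: u32 = 0;
            var i: u32 = 0;
            while (i < 10) : (i += 1) sum += i;
            break :blk sum;
        };
        try std.testing.expectEqual(@as(u32, 45), count);
    }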
Some text in the book itself is odd, but I'll be a guinea pig and try to learn zig from this book and see how far I get.
https://github.com/zigbook/zigbook/issues/25
"AI ALLEGATION" and "RE**ED COMPLAINT"
Futile as it is.
Such a bald-faced and utterly disgusting lie. The introduction itself ticks every single red flag of AI-generated slop. AI is trained well on corporate marketing brochures.
I opted to give it a try instead of reading the comments, and the book is arranged in a super strange way, discussing concepts that a majority of programmers would never be concerned with when starting out with a new language. Learning some of these concepts makes sense if you are reading a language doc in order to work on the language itself. But if you want to learn how to use the language, something like:
> Choose between std.debug.print, unbuffered writers, and buffered stdout depending on the output channel and performance needs.
is absolutely never going to be something you dump into chapter 1. I skimmed through a few chapters from there and it's blocks of stuff thrown in seemingly at random. The introduction to the if conditional throws in Zig Intermediate Representation with absolutely no explanation of what it is or why it's even being discussed.

Came here to comment that this has been written pretty poorly, or just targets a very niche audience, and now I discover it's slop. What a waste of time. The one thing AI was supposed to save.
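(For context, the distinction the book is gesturing at is real, just wildly misplaced for an opening chapter. Roughly, and assuming the pre-0.15 std.io API, since the new Io interface moves this around:)

    const std = @import("std");

    pub fn main() !void {
        // std.debug.print writes to stderr, unbuffered -- fine for debugging.
        std.debug.print("debug: {d}\n", .{42});

        // Buffered stdout: batch the writes, then flush once at the end.
        var bw = std.io.bufferedWriter(std.io.getStdOut().writer());
        const stdout = bw.writer();
        try stdout.print("hello, {s}\n", .{"world"});
        try bw.flush();
    }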
A garbage collector isn't "hidden complexity". We all know they exist and use memory and CPU.
How Zig is better than Ruby:
- it has a linear performance gain
- the IDE has more information
That's great if you absolutely need it, but in a lot of cases:
- you don't need linear performance gains or you need the gains to be more than linear
- the linear performance gains come at a huge cost: code readability (assuming you managed to write it and it compiles)
- relying too much on the IDE won't make better programs and won't make you a better programmer
This is something that I wouldn't judge beginners for, but someone claiming to be an expert writing a book on the topic should know how to configure a .gitignore for their particular language of expertise.
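For reference, the usual Zig ignores amount to a couple of lines (the local cache directory was renamed to .zig-cache in recent releases, so both spellings show up in the wild):

    # Zig build artifacts
    zig-cache/
    .zig-cache/
    zig-out/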
jasonjmcghee•2mo ago
> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.
I just don't buy it. I'm 99% sure this is written by an LLM.
Can the author... Convince me otherwise?
> This journey begins with simplicity—the kind you encounter on the first day. By the end, you will discover a different kind of simplicity: the kind you earn by climbing through complexity and emerging with complete understanding on the other side.
> Welcome to the Zigbook. Your transformation starts now.
...
> You will know where every byte lives in memory, when the compiler executes your code, and what machine instructions your abstractions compile to. No hidden allocations. No mystery overhead. No surprises.
...
> This is not about memorizing syntax. This is about earning mastery.
PaulRobinson•2mo ago
Can I also ask: so what if it is or it isn't?
While AI slop is infuriating and the bubble hype is maddening, the reflex to call out content as "must be AI" every time somebody dislikes its style, and then to debate whether it is or isn't, is at least as maddening. It feels like all published content now gets debated like this, and I'm definitely not enjoying it.
maxbond•2mo ago
As to why it matters, doesn't it matter when people lie? Aren't you worried about the veracity of the text if it's not only generated but was presented otherwise? That wouldn't erode your trust that the author reviewed the text and corrected any hallucinations even by an iota?
geysersam•2mo ago
Why? Didn't people use such constructions frequently before AI? Some authors probably overused them at the same frequency AI does.
maxbond•2mo ago
Don't we all remember 5 years ago? Did you regularly encounter people who wrote as if every follow-up question was absolutely brilliant and every document was life-changing?
I think about why's (poignant) Guide to Ruby [1], a book explicitly about how learning to program is a beautiful experience. And its language is still pedestrian compared to the language in this book. Because most people find writing like that saccharine, and so don't write that way, even when they're writing poetically.
Regardless, some people born in England can speak French with a French accent. If someone speaks French to you with a French accent, where are you going to guess they were born?
[1] https://poignant.guide/book/chapter-1.html
Rochus•2mo ago
Still better than just nagging.
Rochus•2mo ago
I'm sure there are more interesting things to say about this book.
maxbond•2mo ago
I intend to learn Zig when it reaches 1.0 so I was interested in this book. Now that I see it was probably generated by someone who claimed otherwise, I suspect this book would have as much of a chance of hurting my understanding as helping it. So I'll skip it. Does that really sound petty?
littlestymaar•2mo ago
I wouldn't mind a technical person transparently using AI for the writing, which isn't necessarily their strength, as long as the content itself comes from the author's expertise and the generated text is thoroughly vetted to make sure no hallucinated misunderstanding slips into the final book. At the end of the day this would just increase the amount of high-quality technical content available, because the set of people with both good writing skills and deep technical expertise is much narrower than the latter alone.
But claiming you didn't use AI when you did breaks all trust between you and your readership and makes the end result pretty much worthless, because why read a book if you don't trust the author not to waste your time?
Rochus•2mo ago
The hypocrisy and entitlement mentality that prevail in this discussion are disgusting. My recommendation to the fellow below that he write a book himself (instead of complaining) was even flagged, demonstrating once again how that feature is abused to suppress completely legitimate opinions.
maxbond•2mo ago
Additionally, please note that I neither complained nor expressed entitlement. The author owes me as much as I owe them (nothing beyond respect and courtesy). I'm just as entitled to express a criticism as they are to publish a book. I suppose you could characterize my criticism as complaints, but I don't see what purpose that serves other than to turn up the rhetorical temperature.
gamegoblin•2mo ago
[1] one of the only AI detectors that actually works, 99.9% accuracy, 0.1% false positive
ants_everywhere•2mo ago
> I just ran excerpts from two unpublished science fiction / speculative fiction short stories through it. Both came back as ai with 99.9% confidence. Both stories were written in 2013.
> I've been doing some extensive testing in the last 24 hours and I can confidently say that I believe the 1 in 10,000 rate is bullshit. I've been an author for over a decade and have dozens of books at hand that I can throw at this from years prior to AI even existing in anywhere close to its current capacity. Most of the time, that content is detected as AI-created, even when it's not.
> Pangram is saying EVERYTHING I have hand written for school is AI. I've had to rewrite my paper four times already and it still says 99.9% AI even though I didn't even use AI for the research.
> I've written an overview of a project plan based on a brief and, after reading an article on AI detection, I thought it would be interesting to run it through AI detection sites to see where my writing winds up. All of them, with the exception of Pangram, flagged the writing as 100% written by a human. Pangram has "99% confidence" of it being written by AI.
I generally don't give startups my contact info, but if folks don't mind doing so, I recommend running Pangram on some of their polished hand-written stuff.
https://www.reddit.com/r/teachingresources/comments/1icnren/...
gamegoblin•2mo ago
I've yet to see a single real Pangram false positive that was provably published when it says it was, yet plenty of comments claiming such cases exist.
simonklee•2mo ago
It doesn't take away from the fact that someone spent a bunch of time and effort on this project.
gre•2mo ago
Check out the sleek-looking terminal: there's no ls or cd, it's just an AI hallucination.
lukan•2mo ago
"The Zigbook intentionally contains no AI-generated content—it is hand-written"
tredre3•2mo ago
I agree that there is a difference between entirely LLM-generated, and LLM-reworded. But the statement is unequivocal to me:
> The Zigbook intentionally contains no AI-generated content—it is hand-written
If an LLM was used in any fashion, then this statement is simply a lie.
mrob•2mo ago
While I don't believe the article was created this way, it's possible to use an LLM purely as a classifier. E.g. prompt along the lines of "Does this paragraph contain any errors? Answer only yes or no." and generate only a single set of token probabilities, without any autoregression. Flag any paragraphs with sufficient probability of "yes" for human review.
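Mechanically that boils down to one forward pass and a two-way softmax over the "yes"/"no" logits at the first generated position; a sketch of just the thresholding step (the logits themselves would come from whatever inference API is in use):

    const std = @import("std");

    /// Given the model's logits for the tokens "yes" and "no" at the first
    /// generated position, flag the paragraph for human review when P("yes")
    /// clears the threshold. A two-way softmax reduces to a sigmoid.
    fn flagForReview(yes_logit: f64, no_logit: f64, threshold: f64) bool {
        const p_yes = 1.0 / (1.0 + @exp(no_logit - yes_logit));
        return p_yes >= threshold;
    }

    test "confident yes gets flagged, confident no does not" {
        try std.testing.expect(flagForReview(4.0, 0.0, 0.9));
        try std.testing.expect(!flagForReview(0.0, 4.0, 0.9));
    }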
ants_everywhere•2mo ago
Arguably it would be covered by some of the existing rules, but it's become such a common occurrence that it may need singling out.
ModernMech•2mo ago
One thing I've learned is that comment sections are a vital defense against AI content spreading, because while you might fool some people, it's hard to fool all the people. There have been times I've been fooled by AI only to see in the comments a consensus that it is AI. So now it's my standard practice to check the comments to see what others are saying.
If mods put a rule into place that muzzles this community when it comes to alerting others that a fraud is being perpetrated, that just makes this place a target for AI scams.
ants_everywhere•2mo ago
There are intentional communities devoted to stopping the spread of technology, but HN isn't currently one of them. And I've never seen an HN discussion where curiosity was promoted by accusations or insinuations of LLM use.
It seems consistent to me with the rules against low effort snark, sarcasm, insinuating shilling, and ideological battles. I don't personally have a problem with people waging ideological battles about AI, but it does seem contrary to the spirit of the site for so many technical discussions to be derailed so consistently in ways that specifically try to silence a form of expression.
NoboruWataya•2mo ago
Not disagreeing with you, but out of interest, how could you be convinced otherwise?
jasonjmcghee•2mo ago
This one hit a sore spot because many people are putting time and effort into writing things themselves, and claiming "no AI use" when it's untrue is not fair to them.
If the author had a good explanation... I don't know, say they're not a native English writer and used an LLM to translate, and the "no LLMs used" call-out got mistranslated along the way, etc.