Kani is a model checker for Rust, and CBMC is a model checker for C. I'm not aware of one (yet!) for Zig, but it would not be difficult to build a port. Both Kani and CBMC compile down to goto-c, which is then converted into formulas for a SAT/SMT solver.
https://smt.st/SAT_SMT_by_example.pdf
The algorithms behind SAT / SMT are actually pretty straightforward. One of these days, I'll get around to publishing an article to demystify them.
If zig locks down the AIR (intermediate representation at the function level) it would be ideal for running model checking of various sorts. Just by looking at AIR I found it possible to:
- identify stack pointer leakage
- perform basic borrow checking
- detect memory leaks
- assign units to variables and track when units are incompatible
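The last item — unit tracking — can be sketched at the source level with Rust newtypes (the `Meters`/`Seconds` names here are hypothetical, not from any real checker; an IR-level analysis would do the equivalent bookkeeping on AIR):

```rust
use std::ops::Div;

// Hypothetical unit wrappers: each unit gets its own type, so the
// compiler rejects accidentally mixing incompatible units.
#[derive(Debug, Clone, Copy)]
struct Meters(f64);
#[derive(Debug, Clone, Copy)]
struct Seconds(f64);
#[derive(Debug, Clone, Copy)]
struct MetersPerSecond(f64);

// Dividing a distance by a time yields a velocity.
impl Div<Seconds> for Meters {
    type Output = MetersPerSecond;
    fn div(self, rhs: Seconds) -> MetersPerSecond {
        MetersPerSecond(self.0 / rhs.0)
    }
}

fn main() {
    let d = Meters(100.0);
    let t = Seconds(9.58);
    let v = d / t; // ok: Meters / Seconds = MetersPerSecond
    println!("{:.2}", v.0); // prints 10.44
    // let bad = d.0 + t; // would not compile: incompatible units
}
```

The same idea — attach a unit tag to each SSA value and check compatibility at each arithmetic op — is what an AIR-level pass would automate without requiring wrapper types in user code.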
False. Fil-C secures C and C++. It’s more comprehensively safe than Rust (Fil-C has no escape hatches). And it’s compatible enough with C/C++ that you can think of it as an alternate clang target.
Fil-C is in the cards for my next project.
Hit me up if you have questions or issues. I’m easy to find
I think that in addition to possibly being the solution to safety for someone, Fil-C is helping to elucidate what memory-safe systems programming could look like, and that might lead to someone building something even better.
Yeah. By adding a runtime.
> Fil-C achieves this using a combination of concurrent garbage collection and invisible capabilities (each pointer in memory has a corresponding capability, not visible to the C address space)
https://github.com/pizlonator/llvm-project-deluge/tree/delug...
So? That doesn't make it any less safe or useful.
In almost all uses of C and C++, the language already has a runtime. In the Gnu universe, it's the combination of libgcc, the loader, the various crt entrypoints, and libc. In the Apple version, it's libcompiler_rt and libSystem.
Fil-C certainly adds more to the runtime, but it's not like there was no runtime before.
Fil-C is not a replacement for C++ generally, that oversells it. It might be a replacement for some C++ software without stringent performance requirements or a rigorously performance-engineered architecture. There is a lot of this software, often legacy.
Is this just you speculating? How much is "a lot"? Where's the data? Let's get some benchmarks!
Fil-C’s perf sucks on some workloads. And it doesn’t suck on others.
Extreme examples to give you an idea:
- xzutils had about 1.2x overhead. So slower but totally usable.
- no noticeable overhead in shells, systems utilities, ssh, curl, etc. But that’s because they’re IO bound.
- 4x or sometimes maybe even higher overheads for things like JS engines, CPython, Lua, Tcl, etc. Also OpenSSL perf tests are around 4x I think.
But you’re on thin ice if you say that this is a reason why Fil-C will fail. So much of Fil-C’s overhead is due to known issues that I will fix eventually, like the function call ABI (which is hella cheesy right now because I just wanted to have something that works and haven’t actually made it good yet).
You can’t possibly know that.
> C++ is often selected as a language instead of safer options for its unusual performance characteristics even among systems languages in practice.
Is that why sudo, bash, coreutils, and ssh are written in C?
Of course not.
C and C++ are often chosen because they make systems programming possible at all, thanks to their direct access to the syscall ABI.
> Fil-C is not a replacement for C++ generally, that oversells it.
I have made no such claim.
Fil-C means you cannot claim - as TFA claims - that it’s impossible to make C and C++ safe. You have to now hedge that claim with additional caveats about performance. And even then you’re on thin ice since the top perf problems in Fil-C are due to immaturity of its implementation (like the fact that linking is hella cheesy and the ABI is even cheesier).
> It might be a replacement for some C++ software without stringent performance requirements or a rigorously performance-engineered architecture. There is a lot of this software, often legacy.
It’s the opposite in my experience. For example, xzutils and simdutf have super low overhead in Fil-C. In the case of SIMD code it’s because using SIMD amortizes Fil-C’s overheads.
Rust attempts to enforce its guarantees statically which has the advantage of no runtime overhead but the disadvantage of no runtime knowledge.
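A small sketch of that tradeoff: Rust's static borrow rules have zero runtime cost, but when you opt into runtime-checked borrowing with `RefCell`, the program gains runtime knowledge of aliasing that the static checker deliberately lacks:

```rust
use std::cell::RefCell;

fn main() {
    let data = RefCell::new(vec![1, 2, 3]);

    // Shared borrows are tracked at runtime, not compile time.
    let first = data.borrow();
    let second = data.borrow(); // a second shared borrow is fine
    assert_eq!(first[0] + second[1], 3);
    drop(first);
    drop(second);

    // A mutable borrow while another borrow is live is a *runtime*
    // error; try_borrow_mut lets us observe that dynamically.
    let shared = data.borrow();
    assert!(data.try_borrow_mut().is_err()); // runtime aliasing knowledge
    drop(shared);
    assert!(data.try_borrow_mut().is_ok());
    println!("ok");
}
```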
Fil-C doesn't "add a runtime". C already has a runtime (loader, crt, compiler runtime, libc, etc)
Yes, in `unsafe` code typically dynamic checks or careful manual review is needed. However, most code is not `unsafe` and `unsafe` code is wrapped in safe APIs.
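A minimal sketch of that wrapping pattern (`get_middle` is a hypothetical helper, not from any real crate): the `unsafe` block lives behind a safe function whose checks uphold the invariant the unsafe code relies on.

```rust
// `unsafe` internals behind a safe API: callers cannot misuse this.
fn get_middle(slice: &[i32]) -> Option<i32> {
    if slice.is_empty() {
        return None;
    }
    let idx = slice.len() / 2;
    // SAFETY: idx < slice.len() is guaranteed by the emptiness check
    // above, so the unchecked access cannot read out of bounds.
    Some(unsafe { *slice.get_unchecked(idx) })
}

fn main() {
    assert_eq!(get_middle(&[1, 2, 3]), Some(2));
    assert_eq!(get_middle(&[]), None);
    println!("ok");
}
```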
I'm aware C already has a runtime, this adds to it.
Even if we accept this, it's still an improvement from the status quo. As is your project.
Does CPython fail at memory safety because it's written in C, or because it allows things via C extensions or ctypes? Does C# fail because of similar things? If Fil-C doesn't have a runtime, how does it ensure memory safety when calling into libc or the Win32 API? That would be neat to go into!
Anti-Rust zealotry is just as bad as Rust zealotry (if not worse, because there are other things Rust does well, that people may genuinely be excited about).
If your project achieves what it says it does, it'll be a no-brainer, because there are more C/C++ codebases and programmers than Rust ones.
The negativity/vague fear mongering doesn't add to a nuanced technical discussion; it detracts from it. I wish you all the best, the more tools we have for safer code, the better.
Whether the amount and quality of this kind of code is comparable between the two approaches depends on the specific programs you're writing. Static checking, which can also be applied in more fine-grained ways to parts of the runtime (or its moral equivalent) is an interesting approach, depending on your goals.
I'm not even joking. Any magic-number pattern used to fill freed memory (such as HeapFree filling memory with 0xFEEEEEEE on Windows) should have a corresponding no-access page, just to ensure the program fails instantly instead of finding a valid allocation mapped there. For 32-bit programs, everything past 0x80000000 used to be reserved as kernel memory and would raise an access violation on touch, so the magic numbers were all above 0x80000000. But with large-address-aware programs you don't get that anymore; only manually reserving the 4K pages containing the magic numbers gives you the same effect.
How safe is Zig? - https://news.ycombinator.com/item?id=31850347 - June 2022 (254 comments)
How Safe Is Zig? - https://news.ycombinator.com/item?id=26537693 - March 2021 (274 comments)
How Safe Is Zig? - https://news.ycombinator.com/item?id=26527848 - March 2021 (1 comment)
How Safe Is Zig? - https://news.ycombinator.com/item?id=26521539 - March 2021 (1 comment)
Zig gives you the control you need if that's what you want; safety isn't something Zig is chasing.
Safer than C, yeah, but not safe.
Rust = safe. Zig = control.
Pick your weapon for the foe in front of you.
90s_dev•6h ago
Unless I gravely misunderstood Zig when I learned it, the Zig approach to memory safety is to just write a ton of tests fully exercising your functions and let the test allocators find and log all your bugs for you. Not my favorite approach, but your article doesn't seem to take into account this entirely different mechanism.
nine_k•6h ago
Not a great approach for critical software, but may be much better than what C++ normally offers for e.g. game software, where the development speed definitely trumps correctness.
TimSchumann•5h ago
It’s just code you didn’t write and thus likely don’t understand as well.
This can potentially lead to performance and/or control flow issues that get incredibly difficult to debug.
josephg•1h ago
> This can potentially lead to performance and/or control flow issues that get incredibly difficult to debug.
Writing a linked list in rust isn't difficult because of control flow issues, or because rust makes code harder to debug. (If you've spent any time in rust, you quickly learn that the opposite is true.) Linked lists are simply a bad match for the constraints rust's borrow checker puts on your code.
In the same way, writing an OS kernel or a high performance b-tree is a hard problem for javascript. So what? Every language has things it's bad at. Design your program differently or use a different language.
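For what it's worth, the simple case is fine in safe rust: a singly linked list, where each node owns the next, compiles without complaint. It's doubly linked lists — where two nodes would each need to point at the other — that fight the borrow checker. A sketch of the easy case:

```rust
struct Node {
    value: i32,
    next: Option<Box<Node>>, // each node owns its successor
}

// Walk the list iteratively; ownership flows one way, so the
// borrow checker has no objection.
fn sum_list(head: &Node) -> i32 {
    let mut sum = 0;
    let mut cur = Some(head);
    while let Some(node) = cur {
        sum += node.value;
        cur = node.next.as_deref();
    }
    sum
}

fn main() {
    // Build 1 -> 2 -> 3.
    let list = Node {
        value: 1,
        next: Some(Box::new(Node {
            value: 2,
            next: Some(Box::new(Node { value: 3, next: None })),
        })),
    };
    assert_eq!(sum_list(&list), 6);
    println!("{}", sum_list(&list)); // prints 6
}
```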
josephg•2h ago
The borrow checker only runs at compile-time. It doesn't change the semantic meaning - or the resulting performance - of your code.
The borrow checker makes rust a much more difficult and frustrating language to learn. The compiler will refuse to compile your code entirely if you violate its rules. But there's nothing magical going on in the compiler that changes your program. A rust binary is almost identical to the equivalent C binary.
codr7•4h ago
I don't know what the solution is, but these days I'm a lot more likely to simply copy code over to a new project than to try to build general-purpose libraries.
I feel like that's part of the mess Rust/Swift are getting themselves tangled up in: everything depends on everything, which turns evolution into more and more of an uphill struggle.
josephg•1h ago
By all means, rewrite little libraries instead of pulling in big ones. But if you're literally copy+pasting code between projects, it doesn't take much work to pull that code out into a shared library.
0x6c6f6c•2h ago
Saying we should rely on reusable modules is great and all, but that reusable code is going to be maintained by who now?
There's no sustainable pattern for this yet. Most things survive on the good graces of businesses or free-time development, and many become unmaintained over time. People who can actually make a living developing and supporting reusable modules might be rarer than the unicorn devs.
josephg•2h ago
You can write tests to find bugs in any language. C + Valgrind will do most of the same thing for C that the debug allocator will do for zig. But that doesn't stop the avalanche of memory safety bugs in production C code.
I used to write a lot of javascript. At the time I swore by testing. "You need testing anyway - why not use it to find other kinds of bugs too?". Eventually I started writing more and more typescript, and now I can't go back. One day I ported a library I wrote for JSON based operational transform from javascript to typescript. That library has an insane 3:1 test:code ratio or something, and deterministic constraint testing. Despite all of that testing, the typescript type checker still found a previously unknown bug in my codebase.
As the saying goes, tests can only prove the presence of bugs. They cannot prove your code does not have bugs. For that, you need other approaches - like rust's borrow checker or a runtime GC.