Zig gives the programmer more control than Rust. I think this is one of the reasons why TigerBeetle is written in Zig.
From section "1.2 Compatibility". How easy is it to embed a library written in Zig in, say, a small embedded system where you may not be using Zig for the rest of the work?
Also, since you're the submitter, why did you change the title? It's just "Why is SQLite Coded in C", you added the "and not Rust" part.
From the site guidelines: https://news.ycombinator.com/newsguidelines.html
More control over what exactly? Allocations? There is nothing Zig can do that Rust can’t.
I read your response 3 times and I truly don't know what you mean. Mind explaining with a simple example?
Hash maps in zig's std are another great example: you can use an adapter to completely change how the data is stored and accessed while keeping the same API [1]. For example, to get a map with a bounded memory footprint that automatically truncates itself, in rust you either need to write a completely new data structure or rely on someone's crate again (indexmap).
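To make the rust side concrete, here's a minimal sketch of the kind of bounded map you'd end up hand-rolling (BoundedMap and all names here are hypothetical; a real version would track insertion order the way indexmap does):

    use std::collections::HashMap;
    use std::hash::Hash;

    // Hypothetical capacity-bounded map: evicts an arbitrary entry
    // once the size limit is reached.
    struct BoundedMap<K, V> {
        inner: HashMap<K, V>,
        max_entries: usize,
    }

    impl<K: Hash + Eq + Clone, V> BoundedMap<K, V> {
        fn new(max_entries: usize) -> Self {
            BoundedMap { inner: HashMap::new(), max_entries }
        }

        fn insert(&mut self, key: K, value: V) {
            if self.inner.len() >= self.max_entries && !self.inner.contains_key(&key) {
                // Evict an arbitrary entry; insertion-order eviction
                // (what indexmap enables) would need extra bookkeeping.
                let evict = self.inner.keys().next().cloned();
                if let Some(k) = evict {
                    self.inner.remove(&k);
                }
            }
            self.inner.insert(key, value);
        }
    }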
Errors in zig also compose better; in rust I find error handling really annoying. Anyhow makes it better for application development, but you shouldn't use it when writing libraries.
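For reference, this is the sort of boilerplate a library crate takes on when it avoids anyhow (a hedged sketch; StoreError and read_count are made-up names):

    // Library-style error handling without anyhow: an explicit error
    // enum plus From impls so `?` composes across error types.
    #[derive(Debug)]
    enum StoreError {
        Io(std::io::Error),
        Parse(std::num::ParseIntError),
    }

    impl From<std::io::Error> for StoreError {
        fn from(e: std::io::Error) -> Self { StoreError::Io(e) }
    }

    impl From<std::num::ParseIntError> for StoreError {
        fn from(e: std::num::ParseIntError) -> Self { StoreError::Parse(e) }
    }

    fn read_count(path: &str) -> Result<u64, StoreError> {
        let text = std::fs::read_to_string(path)?; // io::Error -> StoreError
        Ok(text.trim().parse()?)                   // ParseIntError -> StoreError
    }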
When writing zig I always feel like I can reuse pieces of existing code by combining the building blocks at hand (including on freestanding targets!). In rust, by contrast, I always feel like you have to go for a fully tailored solution with its own gotchas, which is ironic considering how many crates there are and how many crates projects depend on, versus typical zig projects, which often don't depend on much.
1: https://zig.news/andrewrk/how-to-use-hash-map-contexts-to-sa...
I mean yeah, allocations. Allocations are always explicit. Which is not true in C++ or Rust.
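For contrast, a minimal Rust sketch where the allocation hides behind an ordinary method call (collect_values is a made-up name):

    // The allocation is implicit in the API: any given push may or may
    // not allocate, depending on the Vec's current capacity.
    fn collect_values(xs: impl Iterator<Item = u32>) -> Vec<u32> {
        let mut v = Vec::new();
        for x in xs {
            v.push(x); // reallocation happens invisibly when capacity runs out
        }
        v
    }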
Personally I don't think it's that big of a deal, but it's a thing and maybe some people care enough.
Safe languages insert additional machine branches to do things like verify that array accesses are in-bounds. In correct code, those branches are never taken. That means that the machine code cannot be 100% branch tested, which is an important component of SQLite's quality strategy.
Rust needs to mature a little more, stop changing so fast, and move further toward being old and boring.
Rust needs to demonstrate that it can do the kinds of work that C does in SQLite without a significant speed penalty.
This is annoying in Rust. To me array accesses aren't the most annoying part; it's match {} branches that will never be invoked.
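A hedged sketch of the pattern being described:

    // A branch that program logic guarantees is dead, but that coverage
    // tooling still counts as a branch to hit.
    fn first_byte(s: &str) -> u8 {
        match s.as_bytes().first() {
            Some(&b) => b,
            None => 0, // never taken if callers always pass non-empty strings
        }
    }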
There is unreachable!() for such situations, and you would hope that:
    if array_access_out_of_bounds { unreachable!(); }
is recognised by the Rust tooling and just ignored. That's effectively the same as what SQLite is doing now by not doing the check. But it isn't ignored by the tooling: unreachable!() is reported as a missed line. Then there's the fact that the coverage output includes the standard library by default, and you have to use regexes on path names to remove it.

Your example does what [] does already; it's just a more verbose way of writing the same thing. It's not the same behavior as sqlite.
https://algora.io/challenges/turso "Turso is rewriting SQLite in Rust ; Find a bug to win $1,000"
------
- Dec 10, 2024 : "Introducing Limbo: A complete rewrite of SQLite in Rust"
https://turso.tech/blog/introducing-limbo-a-complete-rewrite...
- Jan 21, 2025 - "We will rewrite SQLite. And we are going all-in"
https://turso.tech/blog/we-will-rewrite-sqlite-and-we-are-go...
- Project: https://github.com/tursodatabase/turso
Status: "Turso Database is currently under heavy development and is not ready for production use."
turso has 341 rust source files spread across dozens of directories, and 514 (!) external dependencies that produce (in release mode) 16 libraries and 7 binaries, with tursodb at 48M and libturso_sqlite3.so at 36M.
looks roughly an order of magnitude larger to me. it would be interesting to understand the memory usage characteristics in real-world workloads. these numbers also sort of capture the character of the languages. for extreme portability and memory efficiency, probably hard to beat c and autotools though.
I suppose SQLite might use a C linter tool that can prove the bounds checks happen at a higher layer, and then elide redundant ones in lower layers, but... C compilers won't do that by default, they'll just write memory-unsafe machine code. Right?
https://news.ycombinator.com/item?id=28278859 - August 2021
https://news.ycombinator.com/item?id=16585120 - March 2018
The current doc no longer has any paragraphs about security; it doesn't even use the word "security" once.
The 2021 edition of the doc contained this text which no longer appears: 'Safe languages are often touted for helping to prevent security vulnerabilities. True enough, but SQLite is not a particularly security-sensitive library. If an application is running untrusted and unverified SQL, then it already has much bigger security issues (SQL injection) that no "safe" language will fix.
It is true that applications sometimes import complete binary SQLite database files from untrusted sources, and such imports could present a possible attack vector. However, those code paths in SQLite are limited and are extremely well tested. And pre-validation routines are available to applications that want to read untrusted databases that can help detect possible attacks prior to use.'
https://web.archive.org/web/20210825025834/https%3A//www.sql...
Huh it's not everyday that I hear a genuinely new argument. Thanks for sharing.
    // pseudocode
    if (i >= array_length) panic("index out of bounds")
that are never actually run if the code is correct? But (if I understand correctly) these are checks implicitly added by the compiler. So the objection amounts to questioning the correctness of this auto-generated code, and is predicated on mistrusting the correctness of the compiler? But presumably the Rust compiler itself has thorough tests that these kinds of checks work?

Someone please correct me if I'm misunderstanding the argument.
I wouldn't put it that way. Usually when we say the compiler is "incorrect", we mean that it's generating code that breaks the observable behavior of some program. In that sense, adding extra checks that can't actually fail isn't a correctness issue; it's just an efficiency issue. I'd usually say the compiler is being "conservative" or "defensive". However, the "100% branch testing" strategy that we're talking about makes this more complicated, because this branch-that's-never-taken actually is observable, not to the program itself but to its test suite.
Rust does not stop you from writing code that accesses out of bounds, at all. It just makes sure that there's an if that checks.
By the same logic one could also claim that tail recursion optimisation, or loop unrolling are also dangerous because they change the way code works, and your tests don't cover the final output.
I don’t think anyone would find the idea compelling that “you are only responsible for the code you write, not the code that actually runs” if the code that actually runs causes unexpected invalid behavior on millions of mobile devices.
This is not correct for every industry.
Hipp worked as a military contractor writing software for battleships, and years later SQLite was under contract with every proto-smartphone company in the USA. Under these constraints you maybe aren't required to test what the compiler spits out across platforms and different compilers, but doing so makes the project a lot more reliable, and sexier for embedded and weapons work.
If the check never fails, it is logically equivalent to not having the check. If the code isn't "correct" and the panic is reached, then the equivalent c code would have undefined behavior, which can be much worse than a panic.
Your second case implies that it is reachable.
The way I was thinking about it was: if you somehow magically knew that nothing added by the compiler could ever cause a problem, it would be redundant to test those branches. Then wondering why a really well tested compiler wouldn't be equivalent to that. It sounds like the answer is, for the level of soundness sqlite is aspiring to, you can't make those assumptions.
This is a dubious statement. In Rust, the array indexing operator arr[i] is syntactic sugar for calling the function arr.index(i), and the implementation of this function on the standard library's array types is documented to perform a bounds-check assertion and access the element.
So the checks aren't really implicitly added -- you explicitly called a function that performs a bounds check. If you want different behavior, you can call a different, slightly-less-ergonomic indexing function, such as `get` (which returns an Option, making your code responsible for handling the failure case) or `get_unchecked` (which requires an unsafe block and exhibits UB if the index is out of bounds, like C).
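To illustrate the menu of options (a small sketch; the function names are made up):

    fn checked(v: &[u8], i: usize) -> u8 {
        v[i] // sugar for Index::index: bounds-checked, panics on failure
    }

    fn optional(v: &[u8], i: usize) -> Option<u8> {
        v.get(i).copied() // no panic; the caller must handle None
    }

    fn unchecked(v: &[u8], i: usize) -> u8 {
        unsafe { *v.get_unchecked(i) } // UB if i is out of bounds, like C
    }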
Automatic array bounds checks can get hit by corrupted data. Thereby leading to a crash of exactly the kind that SQLite tries to avoid. With complete branch testing, they can guarantee that the test suite includes every kind of corruption that might hit an array bounds check, and guarantee that none of them panic. But if the compiler is inserting branches that are supposed to be inaccessible, you can't do complete branch testing. So now how do you know that you have tested every code branch that might be reached from corrupted data?
Furthermore those unused branches are there as footguns which are reachable with a cosmic ray bit flip, or a dodgy CPU. Which again undermines the principle of keeping running if at all possible.
Also you rarely need to actually access by index - you could just access using functional methods on .iter() which avoids the bounds check problem in the first place.
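For instance (a minimal sketch), summing without any index at all:

    // The iterator owns the traversal, so there is no per-element
    // bounds-check branch for a test suite to cover.
    fn sum_squares(v: &[u32]) -> u32 {
        v.iter().map(|&x| x * x).sum()
    }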
I'm checking to see how array access is implemented, whether through deref to slice, or otherwise.
sure, safety checks are added, but
this ignores that many such checks get reliably optimized away
worse, it's a bit like saying "in case of a broken invariant I prefer arbitrary, potentially highly problematic behavior over clean aborts (or errors), because my test tooling is inadequate"
instead of saying "we haven't found adequate test tooling for our use case"
Why inadequate? Because technically test setups can use
1. fault injection to test such branches even if you would normally never hit them
2. for many of these checks (especially array bounds checks), reliable identification, so they can be removed from your test coverage statistics
idk what the state of rust tooling is wrt this in 2025, but around the rust 1.0 days you mainly had C tooling applied to rust, so you had problems like that back then.
Certainly don't get me wrong, SQLite is one of the best and most thoroughly tested libraries out there. But this reads like stretching in order to have 4 arguments: 2 of them break down as "Those languages didn't exist when we first wrote SQLite and we aren't going to rewrite the whole library just because a new language came around."
Any language, including C, will emit or not emit instructions that are "invisible" to the author. For example, whenever the C compiler decides it can autovectorize a section of a function it'll be introducing a complicated set of SIMD instructions and new invisible branch tests. That can also happen if the C compiler decides to unroll a loop for whatever reason.
The entire point of compilers and their optimizations is to emit instructions which keep the semantic intent of higher level code. That includes excluding branches, adding new branches, or creating complex lookup tables if the compiler believes it'll make things faster.
Dr Hipp is completely correct in rejecting Rust for SQLite. Sqlite is already written and extremely well tested. Switching over to a new language now would almost certainly introduce new bugs that don't currently exist as it'd inevitably need to be changed to remain "safe".
no, you would still need to rewrite, re-optimize, etc. everything
it would make it much easier to be fully compatible, sure, but that doesn't make it trivial
furthermore, parts of its (mostly internal) design are strongly influenced by C-specific dev-UX aspects, so you wouldn't write them the same way, and tests for them (as opposed to integration tests) may not carry over
which in general also means you would most likely break some special-purpose/unusual users who have "brittle" (not guaranteed) assumptions about SQLite
if you have code that changes very little, if at all, and has no major issues, don't rewrite it
but most of the new "external" things written around SQLite, alternative VFS implementations etc., tend to be at most partially written in C
Presumably this is why they do 100% test coverage. All of those instructions would be tested and not invisible to the test suite.
0: https://doc.rust-lang.org/std/vec/struct.Vec.html#method.get...
This feels like chasing arbitrary 100% test coverage at the expense of safety. The code quality isn’t actually improved by omitting the checks even though it makes testing coverage go up.
Next they’ll have to tell me about how they had to turn off inlining because it creates copies of code which adds some dead branches. Bounds checks are just normal inlined code. Any bounds checked language worth its salt has that coverage for all that stuff already.
"Airbus confirms that SQLite is being used in the flight software for the A350 XWB family of aircraft."
I don't think I would (personally) ever be comfortable asserting that a code branch in the machine instructions emitted by a compiler can't ever be taken, no matter what, with 100% confidence, during a large fraction of situations in realistic application or library development, as to do so would require a type system powerful enough to express such an invariant, and in that case, surely the compiler would not emit the branch code in the first place.
One exception might be the presence of some external formal verification scheme which certifies that the branch code can't ever be executed, which is presumably what the article authors are gesturing towards in item D on their list of preconditions.
A simple array access in C:

    arr[i] = 123;

...can be thought of as being equivalent to:

    if (i >= array_length) UB();
    else arr[i] = 123;

where the "UB" function can do literally anything. From the perspective of exhaustively testing and formally verifying software, I'd rather have the safe-language equivalent:

    if (i >= array_length) panic();
    else arr[i] = 123;

...because at least I can reason about what happens if the supposedly-unreachable condition occurs.

Dr. Hipp mentions that "Recoding SQLite in Go is unlikely since Go hates assert()", implying that SQLite makes use of assert statements to guard against unreachable conditions. Surely his testing infrastructure must have some way of exempting unreachable assert branches -- so why can't bounds checks (that do nothing but assert undefined behavior does not occur) be treated in the same way?
you're basically saying that if deeply unexpected things happen, you prefer your program doing wildly arbitrary and thus potentially dangerous things over a clean abort or a proper error ... that doesn't seem right
worse, it's down to a shortcoming of the tooling used, not a fundamental problem: not only can you test these branches (using fault injection), you can also often (not always) separate them from relevant branches when collecting the branch statistics
so the whole argument misses the point (which is that the tooling is lacking, not that extra checks for array bounds and the like are bad)
lastly, array bounds checking is probably the worst example they could have given, as it
- often can be disabled/omitted in optimized builds
- is quite often optimized away (see the sketch below)
- often has quite low perf overhead
- produces branches that are often very easy to identify, i.e. excluding them from a 100% branch testing statistic is viable
- guards against out-of-bounds reads/writes, some of the most common cases of memory unsafety leading to security vulnerabilities (including full RCE)
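On the "optimized away" point, a hedged sketch of the common pattern where one explicit assert lets the compiler elide per-iteration checks (scale_into is a made-up name):

    // The up-front assert establishes dst.len() >= src.len(), which
    // typically lets LLVM drop both bounds checks inside the loop.
    fn scale_into(dst: &mut [f32], src: &[f32], k: f32) {
        assert!(dst.len() >= src.len());
        for i in 0..src.len() {
            dst[i] = src[i] * k;
        }
    }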
It's like seat belts.
E.g. what if we drive four blocks and then a situation occurs where the seatbelt is needed? Okay, we have an explicit test for that.
But we cannot test everything. We have not tested what happens if we drive four blocks, and then take a right turn, and hit something half a block later.
Screw it: just remove the seatbelts, rather than have this insane untested space where we are never sure whether the seat belt will work properly and prevent injury!
- Rust needs to mature a little more, stop changing so fast, and move further toward being old and boring.
- Rust needs to demonstrate that it can be used to create general-purpose libraries that are callable from all other programming languages.
- Rust needs to demonstrate that it can produce object code that works on obscure embedded devices, including devices that lack an operating system.
- Rust needs to pick up the necessary tooling that enables one to do 100% branch coverage testing of the compiled binaries.
- Rust needs a mechanism to recover gracefully from OOM errors.
- Rust needs to demonstrate that it can do the kinds of work that C does in SQLite without a significant speed penalty.
2. This has been demonstrated.
3. This one hinges on your definition of “obscure,” but the “without an operating system” bit is unambiguously demonstrated.
4. I am not an expert here, but given that you’re testing binaries, I’m not sure what is Rust specific. I know the Ferrocene folks have done some of this work, but I don’t know the current state of things.
5. Rust as a language does no allocation. This OOM behavior comes from the standard library, which you're not using in these embedded cases anyway. There, you're free to do whatever you'd like, as it's all just library code. (Even in std there are fallible APIs; see the try_reserve sketch below.)
6. This also hinges on a lot of definitions, so it could be argued either way.
Of course, two libraries that choose different no_std collection types can't communicate...but hey, we're comparing to C here.
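On point 5, stable std has also been growing fallible-allocation APIs. A minimal sketch using Vec::try_reserve (stable since Rust 1.57; append_zeros is a made-up name):

    use std::collections::TryReserveError;

    // Grow a buffer without aborting on allocation failure: try_reserve
    // reports OOM as an ordinary Result instead of killing the process.
    fn append_zeros(buf: &mut Vec<u8>, extra: usize) -> Result<(), TryReserveError> {
        buf.try_reserve(extra)?;
        buf.extend(std::iter::repeat(0).take(extra));
        Ok(())
    }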
like, there are some things you can do well in C
and these things you can do in rust too, though with a bit of pain and limitations on how you write rust
and then there is the rest, which looks "hard but doable" in C, but the more you learn about it, the more it's an "uh, wtf, nightmare" case where "let's kill+restart and have robustness even in the presence of the process/error kernel dying" is nearly always the right answer.
Rust insists on its own toolchain manager, rustup, and frowns on distro maintainers. When Rust is happy to just be packaged by the distro and rustup has gone away, then it will have matured to at least adolescence.
There are other worlds out there than Linux.
Basically, people use it because they prefer it.
the rust version packaged in distros is for compiling the rust code shipped as part of the distro. This means it
- is normally not the newest version (which, to be clear, is not bad per se, but not necessarily what you need)
- might not have all optional components (e.g. no clippy)
but if you, say, write a server deployed by your company
- you likely want all components
- you don't need to care what version the distro pinned
- you have little reason not to use the latest rust compiler
for other use cases you have other reasons: some need nightly rust, some want to test against beta releases, some want to be able to test against different rust versions, etc.
rustup exists (today) for the same reason a lot of dev projects use project-specific copies of all kinds of tooling and libraries that don't match whatever their distro ships: the distro use case and the generic dev use case have diverging requirements! (Other examples: nvm (node), flutter, java, etc.)
Also, some distros are notorious for shipping outdated software (Debian "stable").
And not everything is Linux; rustup works on macOS too.
I'd love to see Rust be so stable that MSRV is an anachronism. I want it to be unthinkable that you'd have any reason not to support Rust from forever ago, because the feature set is so stable.
What other languages satisfy this criteria?
- C23: https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3096.pdf
- Cobol 2023: https://www.incits.org/news-events/news-coverage/available-n... (random press release since a PDF of the standard didn't immediately show up in a search)
- Fortran 2023: https://wg5-fortran.org/N2201-N2250/N2212.pdf
C2Y has a fair number of already-accepted features as well and it's relatively early in the standard release cycle: https://thephd.dev/c2y-hitting-the-ground-running
The current version of the Rust compiler definitely doesn't -- there's known issues like https://github.com/rust-lang/rust/issues/57893 -- but maybe there's some historical version from before the features that caused those problems were introduced.
Like, in the C world, there's a difference between "the C specification has problems" and "GCC incorrectly implements the C specification". You can make statements about what "the C language" does or doesn't guarantee independently of any specific implementation.
But "the Rust language" is not a specification. It's just a vague ideal of things the Rust team is hoping their compiler will be able to achieve. And so "the Rust language" gets marketed as e.g. having a type system that guarantees memory safety, when in fact no such type system has been designed -- the best we have is a compiler with a bunch of soundness holes. And even if there's some fundamental issue with how traits work that hasn't been resolved for six years, that can get brushed off as merely a compiler bug.
This propagates down into things like Rust's claims about backwards compatibility. Rust is only backwards-compatible if your programs are written in the vague-ideal "Rust language". The Rust compiler, the thing that actually exists in the real world, has made a lot of backwards-incompatible changes. But these are by definition just bugfixes, because there is no such thing as a design issue in "the Rust language", and so "the Rust language" can maintain its unbroken record of backwards-compatibility.
ironically, if we look at how things play out in practice, rust is far more suited as a general-purpose language than C, to the point where I would argue C is only a general-purpose language on a technicality, not on a practical IRL basis
it's especially ridiculous when they argue C is the fastest general-purpose language, when that has proven not to hold up in larger IRL projects (i.e. not micro benchmarks)
C has terrible UX for generic code reuse and memory management, which often means that in IRL projects people don't write the fastest code. Wrt. memory management, it's not rare to see unnecessary copies, as avoiding them too easily leads to bugs. Wrt. data structures, you write the code that is maintainable, robust, and fast enough, and sometimes add the 10th maximally simple reimplementation (or C macro or similar) of some data structure instead of reusing a data structure people have spent years fine-tuning.
When people switched in large numbers from C to C++, most general-purpose projects got faster, not slower. And even in the C++-to-Rust case, it's not rare for companies to end up with faster projects after the switch.
Both C++ and Rust also allow more optimization in general.
So C is only fastest in micro benchmarks, after excluding things like Fortran for not being general-purpose, while itself not really being used much for general-purpose projects anymore...
I don't put too many things in Cargo.toml and it still pulls like a hundred things
Talk of C99 or C++11, juxtaposed with "oh, you need the nightly build of Rust", meant I never felt comfortable banging out "yum install rust" and giving it a go.
(There are some decent reasons to use the nightly toolchain in development even if you don’t rely on any unfinished features in your codebase, but that means they build on stable anyway just fine if you prefer.)
The Rust Project releases a new stable compiler every six weeks. Because it is backwards compatible, most people update fairly quickly, as it is virtually always painless. So this may mean, if you don't update your compiler, you may try out a new package version and it may use features or standard library calls that don't exist in the version you're using, because the authors updated regularly. There have been some developments in Cargo to try and mitigate some of this, but since it's not what the majority of users do, it's taken a while and those features landed relatively recently, so they're not widely adopted yet.
Nightly features are ones that aren’t properly accepted into the language yet, and so are allowed to break in backwards incompatible ways at any time.
Modern languages might do more than C to prevent programmers from writing buggy code, but if you already have bug-free code due to massive time, attention, and testing, and the rate of change is low (or zero), it doesn't really matter what the language is. SQLite could be assembly language for all it would matter.
This jibes with a point that the Google Security Blog made last year: "The [memory safety] problem is overwhelmingly with new code...Code matures and gets safer with time."
https://security.googleblog.com/2024/09/eliminating-memory-s...
So, the argument for keeping SQLite written in C is that it gives the user the choice to either:
- Build SQLite with Yolo-C, in which case you get excellent performance and lots of tooling. And it's boring in the way that SQLite devs like. But it's not "safe" in the sense of memory safe languages.
- Build SQLite with Fil-C, in which case you get worse (but still quite good) performance and memory safety that exceeds what you'd get with a Rust/Go/Java/whatever rewrite.
Recompiling with Fil-C is safer than a rewrite into other memory safe languages because Fil-C is safe through all dependencies, including the syscall layer. Like, making a syscall in Rust means writing some unsafe code where you could screw up buffer sizes or whatnot, while making a syscall in Fil-C means going through the Fil-C runtime.
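For concreteness (a minimal sketch; it assumes the third-party libc crate, and read_some is a made-up name):

    use std::os::raw::c_void;

    // Calling read(2) through the libc crate: the unsafe block is
    // exactly where a wrong buffer length would turn into a
    // memory-safety bug rather than an ordinary error.
    fn read_some(fd: i32, buf: &mut [u8]) -> isize {
        unsafe { libc::read(fd, buf.as_mut_ptr() as *mut c_void, buf.len()) }
    }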
SQLite is old, huge and known for its gigantic test coverage. There’s just so much to rewrite.
DuckDB is from 2019, so new enough to jump on the "rust is safe and fast" bandwagon.
I am not Dr Hipp, and therefore I like run-time checks.
It has async I/O support on Linux with io_uring, vector support, BEGIN CONCURRENT for improved write throughput using multi-version concurrency control (MVCC), encryption at rest, and incremental computation using DBSP for incremental view maintenance and query subscriptions.
Time will tell, but this may well be the future of SQLite.
Compatible with SQLite. So it's another database?
SQLite is NOT being rewritten in Rust!
>>Turso Database is an in-process SQL database written in Rust, compatible with SQLite.
turdso is VC funded so will probably be defunct in 2 years
Also, this is a VC backed project. Everyone has to eat, but I suspect that Turso will not go out of its way to offer a Public Domain offering or 50 year support in the way that SQLite has.
The aim is to be compatible with sqlite, and a drop-in replacement for it, so I think it's fair use.
> Also, this is a VC backed project. Everyone has to eat, but I suspect that Turso will not go out of its way to offer a Public Domain offering or 50 year support in the way that SQLite has.
It's MIT license open-source. And unlike sqlite, encourages outside contribution. For this reason, I think it can "win".
At this point I wish the creators of the language could talk about what rust is bad at.
We don't have to have one implementation of a lightweight SQL database. You can go out right now and start your own implementation in Rust or C++ or Go or Lisp or whatever you like! You can even make compatible APIs for it so that it can be a drop-in replacement for SQLite! No one can stop you! You don't need permission!
But why would we want to throw away the perfectly good C implementation, and why would we expect the C experts who have been carefully maintaining SQLite for a quarter century to be the ones to learn a new language and start over?
I agree with what I think you're saying, which is that "sqlite" has, to some degree, become so ubiquitous that it's evolved beyond a single implementation.
We, of course, have sqlite the C library, but there is also sqlite the database file format, and there is no reason we can't have an sqlite implementation in golang (we already do) and one in pure rust too.
I imagine that in the future that will happen (pure rust implementation) and that perhaps at some point much further in the future, that may even become the dominant implementation.
"Why is SQLite coded in C and not Rust?" is a question, which immediately makes me want to ask "Why do you need SQLite coded in Rust?".
they have a blog hinting at some answers as to "why": https://turso.tech/blog/introducing-limbo-a-complete-rewrite...
One reason I enjoy Go is its pragmatic stdlib. In most cases, I can get away without pulling in any 3p deps.
Now of course Go doesn’t work where you can’t tolerate GC pauses and need some sort of FFI. But because of the stdlib and faster compilation, Go somehow feels lighter than Rust.
Also, does it use doubly linked lists or graphs at all? Those can, in a way, be safer in C since Rust makes you roll your own virtual pointer arena.
You can write a linked list the same way you would in C if you wish.
You can implement a linked list in Rust the same as you would in C using raw pointers and some unsafe code. In fact there is one in the standard library.
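To illustrate (a hedged sketch of the raw-pointers-and-unsafe approach, not how std::collections::LinkedList is actually implemented):

    use std::ptr;

    // A C-style doubly linked list in Rust using raw pointers.
    struct Node<T> {
        value: T,
        prev: *mut Node<T>,
        next: *mut Node<T>,
    }

    struct List<T> {
        head: *mut Node<T>,
        tail: *mut Node<T>,
    }

    impl<T> List<T> {
        fn new() -> Self {
            List { head: ptr::null_mut(), tail: ptr::null_mut() }
        }

        fn push_back(&mut self, value: T) {
            // Box::into_raw hands ownership to us as a raw pointer,
            // much like malloc in C.
            let node = Box::into_raw(Box::new(Node {
                value,
                prev: self.tail,
                next: ptr::null_mut(),
            }));
            unsafe {
                if self.tail.is_null() {
                    self.head = node;
                } else {
                    (*self.tail).next = node;
                }
            }
            self.tail = node;
        }

        fn pop_front(&mut self) -> Option<T> {
            if self.head.is_null() {
                return None;
            }
            unsafe {
                // Box::from_raw reclaims ownership, so the node is
                // freed when the Box goes out of scope.
                let node = Box::from_raw(self.head);
                self.head = node.next;
                if self.head.is_null() {
                    self.tail = ptr::null_mut();
                } else {
                    (*self.head).prev = ptr::null_mut();
                }
                Some(node.value)
            }
        }
    }

    impl<T> Drop for List<T> {
        fn drop(&mut self) {
            while self.pop_front().is_some() {}
        }
    }

The unsafe blocks mark exactly the pointer manipulation a C version would do everywhere.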
sure, it's an old library, they've had pretty much everything (not because they don't know what they're doing, but because shit happens)
let's check CVEs from the last few years:
- CVE-2025-29088 type confusion
- CVE-2025-29087 out of bounds write
- CVE-2025-7458 integer overflow, possible in optimized rust but test builds check for it
- CVE-2025-6965 memory corruption, rust might not have helped
- CVE-2025-3277 integer overflow, rust might have helped
- CVE-2024-0232 use after free
- CVE-2023-36191 segmentation violation, unclear if rust would have helped
- CVE-2023-7104 buffer overflow
- CVE-2022-46908 validation logic error
- CVE-2022-35737 array bounds overflow
- CVE-2021-45346 memory leak
...
as you can see, the majority of sqlite's CVEs would be much less likely in rust (though a rust sqlite impl. would likely use unsafe, so not impossible)
as a side note, there being so many CVEs in 2025 seems to be related to some companies (e.g. Google) having done quite a bit of fuzz testing of SQLite
other takeaways:
- 100% branch coverage is nice, but doesn't guarantee memory soundness in C
- given how deeply people look for CVEs in SQLite, the number found is not nearly as bad as it might look
but also one final question:
SQLite has some of the best C programmers out there, only they can merge anything into the code, and it has a very limited degree of change compared to a typical company project. And we still get memory vulnerabilities. How is anyone still arguing for C for new projects?
So you might think, but there is a committee actively undermining this, not to mention compiler people keeping things exciting also.
There is a dogged adherence to backward compatibility, so that you can pretend C has not gone anywhere in thirty-five years, if you like --- provided you aren't invoking too much undefined behavior. (You can't as easily pretend that your compiler has not gone anywhere in 35 years with regard to things you are doing out of spec.)
Like, why defend C in 2025 when you only have to defend the C of 2000, and then argue you have an old, stable, deeply tested C code base which has no problem with anything like "commonly having memory safety issues" and is maintained by a small group of people very highly skilled in C.
That argument alone is all you need: a win, simple, straightforward, hard to contest.
But most of the other arguments they list can be picked apart and are only half true.
Any idea what this refers to? assert is a macro in C. Is the implication that OP wants the capability of testing conditions and then turning off the tests in a production release? If so, then I think the argument is more that Go hates the idea of a preprocessor. Or have I misunderstood the point being made?
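For what it's worth, Rust has an analogue of the assert()/NDEBUG pattern (a minimal sketch; set_checked is a made-up name):

    // debug_assert! is compiled out in release builds (like C's assert
    // under NDEBUG), so the check costs nothing in production.
    fn set_checked(arr: &mut [i32], i: usize, val: i32) {
        debug_assert!(i < arr.len(), "index {} out of range", i);
        arr[i] = val; // release builds still keep []'s own bounds check
    }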
The biggest gripe I have with a rewrite is... a lot of the time we rewrite for feature parity, not the exact same thing. So you are kind of ignoring/missing/forgetting all those edge cases and patches that were added along the way for so many niche (or otherwise) reasons.
This means broken software: something which used to work before doesn't anymore. They'll have to encounter all of those cases again in the wild and fix them again.
Obviously, if we are to rewrite an important piece of software like this, you'd put more emphasis on all of these. But it's hard for me to believe it will be 100%.
But other than sqlite, think SDL. If it were to be rewritten, it's really hard for me to believe the effect would be negligible. I'm guessing horrible releases before it gets better, and users complaining about things that used to work.
C is going to be there long after the next Rust -- that's where my money is. And even if Rust is still present, there will be a "new Rust" by then.
So why rewrite? Rewrites shouldn't be the default thinking no?
Occasionally when working in Lua I'd write something low-level in C++, wrap it in C, and then call the C wrapper from Lua. It's extra boilerplate but damn is it nice to have a REPL for your C++ code.
Edit: Because someone else will say it - Rust binary artifacts _are_ kinda big by default. You can compile libstd from scratch on nightly (it's a couple flags) or you can amortize the cost by packing more functions into the same binary, but it is gonna have more fixed overhead than C or C++.