Plain and simple: C is, by far, one of the current _LEAST WORST_ performant alternatives, usable for everything from low-level code to large applications.
C syntax is already waaaaay too rich and complex (and ISO is pushing feature creep a bit too hard over its 5-10 year cycles).
I love C. When I wrote my first programs in it as a teenager, I loved the simplicity, and I still love procedural code. Large languages like C++ or Rust are just too much for ME. Too much baggage. I like writing code in a simple way; if I can do that with a small set of tools, I will. You can learn C and be productive in it within a couple of weeks. C systems are just built better: faster, more stable, more portable, and more! This is my 2 cents.
>>"the spiral rule"
Not a problem in 99.99% of cases where it's just: type name = something;
with maybe * and [] somewhere.
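To illustrate (hypothetical declarations, not from the article): the common case reads left to right, and the spiral rule only earns its keep for rarities like function pointers returning array pointers.

    #include <stddef.h>

    /* The overwhelmingly common case: type, name, maybe a * or []. */
    int count = 0;
    char *name = NULL;
    double samples[64];

    /* The rare case the "spiral rule" targets: a pointer to a function
       taking an int and returning a pointer to an array of 8 ints. */
    int (*(*handler)(int))[8];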
We need a new C, fixed, leaner and less complex: primitives are all sized (u64, u8, f64, etc.), only one loop primitive (loop{}), no switch, no enum, no typedef, no typeof or other C11 _Generic/etc., no integer promotion, no implicit casts (except for void* and literal numbers?), real hard compiler constants, the current "const" must go away (I won't even talk about "restrict" and similar), no anonymous code blocks, explicit differentiation between reinterpret/runtime/etc. casts, synchronous numeric computation handling, anonymous struct/union for complex memory layouts, and everything else I am currently forgetting about.
(the expression evaluation of the C pre-processor would require serious modifications too)
We need clean, clear-cut definitions, with regard to ABIs, of how aggregates like structs/arrays are passed and returned by value; you know, mostly for those pesky complex numbers and div_t.
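Even the humble div_t shows why this matters: whether a small struct like that comes back in registers or through a hidden pointer is decided by the platform ABI, not by the language. A purely illustrative snippet:

    #include <complex.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* div() returns a struct { int quot; int rem; } by value; how it is
           returned (registers vs. hidden out-pointer) is ABI-specific. */
        div_t d = div(17, 5);
        printf("17 / 5 = %d remainder %d\n", d.quot, d.rem);

        /* _Complex values are another aggregate-like case with ABI quirks. */
        double complex z = 3.0 + 4.0 * I;
        printf("|z| = %f\n", cabs(z));
        return 0;
    }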
We need keywords and official very-likely-to-be-inline-or-hardware-accelerated-functions, something in between the standard library and the compiler keywords, for modern hardware architecture programming like memory barriers, atomics, byte swapping, some bitwise operations (like popcnt), memcpy, memcmp, memmove, etc.
In the end, I would prefer to have a standard machine ISA (RISC-V) and code everything directly in assembly, or in very high-level languages (like [ecma|java]script, lua, etc.) with an interpreter coded in that standard machine ISA assembly (all that with a reasonable SDK, ofc).
stdint.h already gives you that.
>> only one loop primitive (loop{}), no switch, no enum
I don't think you will find many fans of that.
>> atomics
check stdatomic.h
>>some bitwise operations (like popcnt)
Check stdbit.h in C23
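For the curious, a minimal sketch of both, assuming a C23 toolchain that actually ships <stdbit.h> (support is still new in compilers and libcs):

    #include <stdatomic.h>   /* C11 atomics */
    #include <stdbit.h>      /* C23 bit utilities, e.g. stdc_count_ones */
    #include <stdio.h>

    static atomic_int counter;   /* lock-free counter shared across threads */

    int main(void) {
        atomic_fetch_add(&counter, 1);           /* atomic increment */

        unsigned int v = 0xF0F0u;
        /* population count without compiler-specific __builtin_popcount */
        printf("popcount(0x%X) = %u\n", v, stdc_count_ones(v));
        printf("counter = %d\n", atomic_load(&counter));
        return 0;
    }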
The only language that would make sense for a partial/progressive migration is zig, in huge part due to its compatibility with C. It's not mentioned in the article though.
Almost certainly correct. It is however being rewritten in Rust by other people: https://github.com/tursodatabase/turso. This is probably best thought of as a separate, compatible project rather than a true rewrite.
For example, Rust has additional memory guarantees when compared to C.
Linus also brought this up: https://lkml.org/lkml/2021/4/14/1099
https://rust.docs.kernel.org/next/kernel/alloc/kvec/struct.V...
Vec::push_within_capacity is a nice API for confronting the reality of running out of memory. "Clever" ideas that don't actually work become obviously ineffective once you look at this API: we need to do something with this T; we can't just say "somebody else should ensure I have room to store it", because it's too late now. Here's your T back, there was no more space.
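A rough C analogue of the same idea (hypothetical names, not a real API): a push that refuses to grow and leaves the value with the caller instead of pretending someone else reserved room.

    #include <stdbool.h>
    #include <stddef.h>

    struct fixed_vec {
        int   *items;   /* storage provided up front by the caller */
        size_t len;
        size_t cap;
    };

    /* Returns true on success. On failure the caller still "has the T back"
       and must decide what to do: drop it, apply backpressure, etc. */
    static bool push_within_capacity(struct fixed_vec *v, int value) {
        if (v->len == v->cap)
            return false;               /* no room; nothing is stored */
        v->items[v->len++] = value;
        return true;
    }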
Trying to apply backpressure from memory allocation failures, which can appear anywhere, completely disconnected from their source, rather than capping the current in-memory set, seems like an incredibly hard path to make work reliably.
If you’re OOM your application is in a pretty unrecoverable state. Theoretically possible, practically not.
I suppose you can try to reliably target "seriously wild allocation fails" without leaving too much memory on the table.
0: Heuristic overcommit handling. Obvious overcommits of
address space are refused. Used for a typical system. It
ensures a seriously wild allocation fails while allowing
overcommit to reduce swap usage. root is allowed to
allocate slightly more memory in this mode. This is the
default.
https://www.kernel.org/doc/Documentation/vm/overcommit-accou...
Running in an environment without overcommit would allow you to handle it gracefully, though it brings its own zoo of nasty footguns.
See this recent discussion on what can happen when turning off overcommit:
What are you referring to specifically? Overcommit is only (presumably) useful if you are using Linux as a desktop OS.
I was suggesting (though in retrospect not clearly) that logging should use a ring buffer instead of allocation, in order to make logging on cleanup a guaranteed best-effort operation. You're right that you can recover from OOM, but logging OOM with an allocator is pretty self-defeating.
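Something like this minimal sketch (hypothetical, fixed-size, nothing allocated on the logging path):

    #include <stdio.h>
    #include <string.h>

    #define LOG_SLOTS 64
    #define LOG_LINE  128

    static char log_buf[LOG_SLOTS][LOG_LINE]; /* storage reserved up front */
    static unsigned log_head;                 /* next slot to overwrite    */

    /* Never allocates: old messages are overwritten when the buffer wraps. */
    static void log_line(const char *msg) {
        unsigned slot = log_head++ % LOG_SLOTS;
        strncpy(log_buf[slot], msg, LOG_LINE - 1);
        log_buf[slot][LOG_LINE - 1] = '\0';
    }

    /* On cleanup (e.g. after OOM) the buffer can be dumped without malloc.
       (Chronological wrap order is ignored here; it is only a sketch.) */
    static void log_dump(void) {
        for (unsigned i = 0; i < LOG_SLOTS; i++) {
            if (log_buf[i][0] != '\0')
                fprintf(stderr, "%s\n", log_buf[i]);
        }
    }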
And that is usually not too difficult in C (in my experience), where allocation is explicit.
In C++, on the other hand, this quickly gets hairy IMO.
After those experiences I agree with the sibling comment that calls your position "bullshit". I think people come to your conclusion when they haven't experienced a system that can handle it, so they're biased to think it's impossible to do. Since being able to handle it is not the default in so many languages and one very prominent OS, fewer people understand it is possible.
That's intentional; IOW, the "most code" that is unable to handle OOM conditions is written that way.
You can write code that handles OOM conditions gracefully, but that way of writing code is the default only in C. In every other language you need to go off the beaten path to gracefully handle OOM conditions.
It's possible. But very very few projects do.
What are you talking about? Every allocation must be checked at the point of allocation, which is "the default"
If you write non-idiomatically, then sure, in other languages you can jump through a couple of hoops and check every allocation, but that's not the default.
The default in C is to return an error when allocation fails.
The default in C++, Rust, etc is to throw an exception. The idiomatic way in C++, etc is to not handle that exception.
C doesn't force you to check the allocation at all. The default behavior is to simply invoke undefined behavior the first time you use the returned allocation if it failed.
In practice I've found most people write their own wrappers around malloc that at least crash - for example: https://docs.gtk.org/glib/memory.html
PS. The current default in Rust is to print something and then abort the program, not panic (i.e. not throw an exception). Though the standard library reserves the right to change that to a panic in the future.
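For reference, the crash-on-failure wrapper pattern looks roughly like this (a hypothetical xmalloc in the spirit of g_malloc, which aborts when allocation fails):

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical wrapper: never returns NULL; aborts instead, so callers
       don't have to check every single allocation site. */
    static void *xmalloc(size_t n) {
        void *p = malloc(n);
        if (p == NULL && n != 0) {
            fputs("fatal: out of memory\n", stderr);
            abort();
        }
        return p;
    }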
No one ever claimed it did; I said, and still do, that in C, at any rate, the default is to check the returned value from memory allocations.
And, that is true.
The default in other languages is not to recover.
> No one ever claimed it did;
You specifically said
> Every allocation must be checked at the point of allocation
...
> the default is to check the returned value from memory allocations.
Default has a meaning, and it's what happens if you don't explicitly choose to do something else.
In libc - this is to invoke undefined behavior if the user uses the allocation.
In glib - the library that underpins half the linux desktop - this is to crash. This is an approach I've seen elsewhere as well to the point where I'm comfortable calling it "default" in the sense that people change their default behavior to it.
Nowhere that I've ever seen, in C, is it to make the user handle the error. I assume there are projects with sanitizers that do do that; I haven't worked on them, and they certainly don't make up the majority.
It also has the meaning of doing the common thing: https://www.merriam-webster.com/dictionary/default
> : a selection made usually automatically or without active consideration
See that "without active consideration" there? The default usage of malloc includes, whether you want to acknowledge it or not, checking the returned value.
C doesn't have anything done automatically, so I am wondering why you would choose to think that by "default" one would mean that something automatically gets done.
"Ubiquitous" is a different word than default, checking the return code of malloc isn't even that. As an example - I've been having some issues with pipewire recently (unrelated) and happen to know it uses an unwrapped malloc. And it doesn't reliably check the return code. For example: https://github.com/PipeWire/pipewire/blob/6ed964546586e809f7...
And again, this isn't cherry picked, this is just "the last popular open source C code base I've looked at". This is the common case in C. Either you wrap malloc to crash, or you just accept undefined behavior if malloc fails. It is the rare project that doesn't do one of those two.
Right. But this is what you initially responded to:
> You can write code that handles OOM conditions gracefully, but that way of writing code is the default only in C.
How did you get from "That way" to thinking I claimed that C, by default, handles allocation failures?
> As an example - I've been having some issues with pipewire recently (unrelated) and happen to know it uses an unwrapped malloc.
Correct. That does not mean that the default way of writing allocation in C is anything other than what I said.
Do programmers make mistakes? Sure. But that's not what was asked - what was asked is how do you handle memory errors gracefully, and I pointed out that, in idiomatic C, handling memory errors gracefully is the default way of handling memory errors.
That is not the case for other languages.
I think you might want to reread the line you quoted directly above this,
That way of writing code, i.e. "write[ing] code that handles OOM conditions gracefully" "is the default [...] in C".
This is what I am saying is not the case. The default in C is undefined behavior (libc) or crashing (a significant fraction of projects allocator wrappers). Not "handling OOM gracefully" - i.e. handling OOM errors.
I am reading exactly what I said:
> You can write code that handles OOM conditions gracefully, but that way of writing code is the default only in C.
How is it possible to read that as anything other than "That Way Of Writing Code Is The Default Way In C"?
Are you saying that checking the result of malloc (and others) is not the default way of allocating memory?
In C - yes. I've said that repeatedly now...
> In C - yes. I've said that repeatedly now...
Well, that's just not true. The instances of unchecked allocations are both few and far between, *and* treated as bugs when reported :-/
Maybe you should program in a language for a little bit before forming an opinion on it :-/
For good reason. Most C software is not designed to run in a situation where malloc might fail.
I, unlike you, have provided evidence of this by pointing to major pieces of the linux desktop that do not do so.
because of OS-level overcommit, which is nearly always a good thing
It doesn't matter about the language you are writing in, because your OS can tell you that the allocation succeeded, but when you come to use it, only then do you find out that the memory isn't there.
It's a place where windows legitimately is better than linux.
Sure, but ... what does that have to do with this thread? Using `mmap` is not the same as using `malloc` and friends.
If you turn off overcommit, malloc will return NULL on failure to allocate. If you specifically request mmap to ignore overcommit, and it does, why are you surprised?
You misunderstand: you specifically request mmap to ignore overcommit, and it doesn't honor that, not "does".
What it has to do with this thread is it makes turning off overcommit on linux an exceptionally unpalatable option because it makes a lot of correct software incorrect in an unfixable manner.
It may not be as simple as "that's our policy". I worked at one place (embedded C++ code, 2018) that simply reset the device every 24h because they never managed to track down all the leaks.
Finding memory leaks in C++ is a non-trivial and time-consuming task. It gets easier if your project doesn't use exceptions, but it's still very difficult.
Was not available for that specific device, but even with Valgrind and similar tools, you are still going to run into weird destructor issues with inheritance.
There are many possible combinations of virtual, non-virtual, base-class, derived-class, constructors and destructors; some of them will indeed cause a memory leak, and are allowed to by the standard.
No, this is wishful thinking. While plenty of programs out there are in the business of maintaining caches that could be optimistically evicted in order to proceed in low-memory situations, the vast majority of programs are not caching anything. If they're out of memory, they just can't proceed.
In Zig you must handle it. Even if handling means "don't care, panic", you have to spell that out.
First off allocation failure (typically indicated by bad_alloc exception in C++ code, or nullptr in C style code) does not mean that the system (or even the process) as a whole is out of memory.
It just means that this particular allocator could not satisfy the allocation request. The allocator could have a "ulimit" or some such limit that is completely independent of actual process/system limitations.
Secondarily what reason is there to make an allocation failure any different than any other resource allocation failure?
A normal structure for a program is to catch these exceptions at a higher level in the stack close to some logical entry point, such as thread entry point, UI action handler etc. where they can be understood and possibly shown to the user or logged or whatever. It shouldn't really matter if the failure is about failing to allocate socket or failing to allocate memory.
You could make the case that if the system is out of memory, the exception propagation itself is going to fail. Maybe... but IMHO, on the code path that is taken when the stack is unwound due to an exception, you should only release resources, not allocate more anyway.
Are we playing word games here? If a process has a set amount of memory and it's out of it, then that process is OOM; if a VM is out of memory, it's OOM. Yes, OOM is typically used for OS OOM, and Linus is talking about Rust in the kernel, so that's what OOM would mean.
>Secondarily what reason is there to make an allocation failure any different than any other resource allocation failure.
Of course there is: would you treat being out of bread the same as being out of oxygen? Again, this can be explained by the context being kernel development and not application development.
As I just explained an allocator can have its own limits.
A process can have multiple allocators. There's no direct logical step that says that because some allocator failed some allocation, the process itself cannot allocate more ever.
"Of course there is, would you treat being out of bread similar to being out of oxygen? Again this can be explained by the context being kernel development and not application development."
The parent comment is talking about overcommitment and OOM as if these are situations that are completely out of the program's control. They aren't.
In your C++ (or C) program you have one (or more) allocators. These are just pieces of code that juggle blocks of memory into smaller chunks for the program to use. Typically the allocators get their memory from the OS in pages using some OS system call such as sbrk or mmap.
For the sake of argument, let's say I write an allocator that has a limit of 2 MiB, while my system has 64 GiB of RAM. The allocator can then fail some request when its internal 2 MiB has been exhausted. In the C world it'd return a nullptr. In the C++ world it would normally throw bad_alloc.
If this happens does this mean the process is out of memory? Or the system is out of memory? No, it doesn't.
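For concreteness, a toy version of such a deliberately limited allocator (a bump allocator over a fixed 2 MiB pool; names and sizes are purely illustrative):

    #include <stddef.h>

    #define POOL_SIZE (2u * 1024u * 1024u)   /* 2 MiB, however much RAM exists */

    static unsigned char pool[POOL_SIZE];
    static size_t pool_used;

    /* Returns NULL when the pool is exhausted. The process and the system
       may still have plenty of memory; only this allocator is "out". */
    static void *pool_alloc(size_t n) {
        if (n > POOL_SIZE)                    /* also guards the rounding below */
            return NULL;
        n = (n + 15u) & ~(size_t)15u;         /* keep 16-byte alignment */
        if (n > POOL_SIZE - pool_used)
            return NULL;
        void *p = &pool[pool_used];
        pool_used += n;
        return p;
    }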
That being said, where things get murky is that there are allocators that, in the absence of limits, will just map more and more pages from the OS. The OS can "overcommit", which is to say it gives out more pages than can actually fit into the available physical memory (after taking into account what the OS itself uses, etc). And when the overall system memory demand grows too high, it will just kill some arbitrary process. On Linux this is the infamous OOM killer that uses the "niceness" score to determine what to kill.
And yes, for the OOM killer there's very little you can do.
But an allocation failure (nullptr or bad_alloc) does not mean OOM condition is happening in the system.
This is the far more meaningful part of the original comment:
> and furthermore most code is not in a position to do anything other than crash in an OOM scenario
Given that (unlike a language such as Zig) Rust doesn’t use a variety of different allocator types within a given system, choosing to reliably panic with a reasonable message and stack trace is a very reasonable mindset to have.
If some allocation fails, the error bubbles up until a safe place, where some pages can be dropped from the cache, and the operation that failed can be tried again.
All this requires is that bubbling up this specific error condition doesn't allocate. Which SQLite purportedly tests.
I'll note that this is not entirely dissimilar to a system where an allocation that can't be immediately satisfied triggers a full garbage collection cycle before an OOM is raised (and where some data might be held through soft/weak pointers and dropped under pressure), just implemented in library code.
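The shape of that pattern in C (stubs and names are made up here; this is not SQLite's actual code): the failure travels as a plain return code, so propagating it allocates nothing, and a safe point up the stack can shed cache and retry.

    #include <stddef.h>

    enum status { OK = 0, ERR_NOMEM = 1 };

    /* Stand-ins for the real operation and cache; purely illustrative. */
    static enum status do_operation(void)          { return ERR_NOMEM; }
    static size_t      drop_some_cache_pages(void) { return 16; }

    /* The error bubbles up as a return code (no allocation on the error
       path); at a safe point we evict cache pages and retry once. */
    static enum status run_with_retry(void) {
        enum status rc = do_operation();
        if (rc == ERR_NOMEM && drop_some_cache_pages() > 0)
            rc = do_operation();
        return rc;   /* still ERR_NOMEM? the caller decides what to do */
    }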
But that’s not the point: what can most applications do when SQLite tells them that it encountered a memory error and couldn’t complete the transaction?
Abort and report an error to the user. In a CLI this would be a panic/abort, and in a service that would usually be implemented as a panic handler (which also catches other errors) that attempts to return an error response.
In this context, who cares if it’s an OOM error or another fatal exception? The outcome is the same.
Of course that’s not universal, but it covers 99% of use cases.
If SQLite fails to allocate memory for a string or blob, it bubbles up the error, frees some data, and maybe tries again.
Your app may be "hopeless" if the error bubbles up all the way to it (that's your choice), but SQLite may have already handled the error internally, retried, and given you your answer without you noticing.
Or it may at least have rolled back your transaction cleanly, instead of immediately crashing at the point of the failed allocation. And although crashing should not corrupt your database, a clean rollback is much faster to recover from, even if your app then decides to crash.
Your app, e.g. an HTTP server, might decide to drop the request, maybe close that SQLite connection, and stay alive to handle other ongoing and new requests.
SQLite wants to be programmed in a language where a failed allocation doesn't crash, and unlike most other code, SQLite is actually tested for how it behaves when malloc fails.
Historically, a lot of C code fails to handle memory allocation failure properly because checking malloc etc. for a null result is too much work, and C code tends to call malloc a lot.
Bjarne Stroustrup added exceptions to C++ in part so that you could write programs that easily recover when memory allocation fails - that was the original motivation for exceptions.
In this one way, rust is a step backwards towards C. I hope that rust comes up with a better story around this, because in some applications it does matter.
If it were any other way then processes could ignore signals and just make themselves permanent, like Maduro or Putin.
No. A single process can have several allocators, switch between them, or use temporary low limits to enforce some kind of safety. None of that has any relation to your system running out of memory.
You won't see any of that in a desktop or a server. In fact, I haven't seen people even discuss that in decades. But it exists, and there are real reasons to use it.
So I assume there are no real blockers, as people in this thread assume; this is just not conventional behavior yet, rather ad hoc, so we need to wait until well-defined, stable OOM handlers appear.
https://turso.tech/blog/introducing-limbo-a-complete-rewrite...
I also think it's important to have a really solid understanding (which can take a few decades, I imagine) of the bounds of what Rust is good at. For example, I personally think it's unclear how good Rust can be for GUI applications.
As a language, it's too basic. Almost every C project tries to mimic what C++ already has.
Then came the rise of FOSS adoption, with the GNU manifesto asserting that only C should be used as the compiled language.
"Using a language other than C is like using a non-standard feature: it will cause trouble for users. Even if GCC supports the other language, users may find it inconvenient to have to install the compiler for that other language in order to build your program. So please write in C."
The GNU Coding Standard in 1994, http://web.mit.edu/gnu/doc/html/standards_7.html#SEC12
https://www.youtube.com/live/EIKAqcLxtT0?si=J82Us4zBlXLPbZVq
It is a matter of skill.
Because I could not, by scrubbing that video, find anything where immense skill is used to deal with the enormous overhead that standard C++ forces on every program that uses it.
"Rich Code for Tiny Computers: A Simple Commodore 64 Game in C++17"
https://youtu.be/zBkNBP00wJE?si=uqUwVMMEpp4ZPWun
It is a matter of skill, understanding C++ and how to make it useful for embedded systems, and being able to understand standard library isn't a requirement for each executable.
By the way, the famous Arduino and ESP32 have no problem dealing with C++.
As we also didn't, back in MS-DOS, with 640 KB, Turbo Vision and Borland Collection Library (BIDS).
A matter of skill, as mentioned.
So you also agree that C++ libraries are a bad fit for embedded? Because in the video you linked, that person did not use any libraries.
It is one thing to compile a small standalone binary, using non-conforming compiler extensions to disable RTTI and exceptions. It is another to write a C++ library. By the standard, even freestanding C++ requires RTTI, exceptions and most of the standard library. If you need to implement your own STL subset to satisfy the library, then modify the library to work with your STL, the resulting API is not much of an API, is it?
It takes skill to make use of Arduino, ESP32, and other C++ embedded libraries, being able to write custom C++ libraries, master compiler switches and linker maps.
You cannot make it in C++, because any valid C++ library imposes massive requirements on the environment I already mentioned. C does no such thing!
Arduino, https://docs.arduino.cc/arduino-cloud/guides/arduino-c/
Or even ESP32, https://docs.espressif.com/projects/esp-idf/en/stable/esp32/...
Others do not.
As for "C does no such thing", strangely there are enough examples from TI, Microchip and Atmel that prove otherwise.
The ESP32 SDK (I lol'd at your "even", those Xtensa/RISC-V chips can even run the Linux kernel!) is extremely impressive - they support enabling/disabling RTTI and exceptions (obviously disabled by default, but the fact they implemented support for that is amazing). So "real C++" is possible on ESP32, which is good to know.
For comparison, here are the minimum SQLite dependencies from the submitted article: memcmp(), memcpy(), memmove(), memset(), strcmp(), strlen(), strncmp().
Of course you could run javascript and erlang on MCU too, but is that API better than C? Your claim of "skill issue" sounds like RedBull challenge. Please let us unskilled people simply call library functions.
First of all, ABI is a property of the OS calling conventions, which happen to overlap with C on UNIX/POSIX, given its symbiotic relationship.
Secondly, https://thephd.dev/to-save-c-we-must-save-abi-fixing-c-funct...
[1] https://web.archive.org/web/20170701061906/https://sqlite.or...
Let’s be serious now.
I believe this article is from a few years ago
Why they? You should do it.
Overall, it makes sense. C is a systems language, and a DB is a system abstraction. You shouldn't need to build a deep hierarchy of abstractions on top of C, just stick with the lego blocks you have.
If the project had started in 2016, maybe they would have gone for c++, which is a different beast from what it was pre-2011.
Similarly, you might write SQLite in Rust if you started today.
https://web.archive.org/web/20250130053844/https://250bpm.co...
* Error handling via exceptions. Rust uses `Result` instead. (It has panics, but they are meant to be strictly for serious logic errors for which calling `abort` would be fine. There's a `Cargo.toml` option to do exactly that on panic, rather than unwinding.) (btw, C++ has two camps here for better or worse; many programs are written in a dialect that doesn't use exceptions.)
* Constructors have to be infallible. Not a thing in Rust; you just make a method that returns `Result<Self, Error>`. (Even in C++ there are workarounds.)
* Destructors have to be infallible. This is about as true in Rust as in C++: `Drop::drop` doesn't return a `Result` and can't unwind-via-panic if you have unwinding disabled or are already panicking. But I reject the characterization of it as a problem compared to C anyway. The C version has to call a function to destroy the thing. Doing the same in Rust (or C++) is not really any different; having the other calls assert that it's not destroyed is perfectly fine. I've done this via a `self.inner.as_mut().expect("not terminated")`. They say the C only has two states: "Not initialised object/memory where all the bets are off and the structure can contain random data. And there is initialised state, where the object is fully functional". The existence of the "all bets are off" state is not as compelling as they make it out to be, even if throwing up your hands is less code.
* Inheritance. Rust doesn't have it.
That section was probably written 20 years ago when Java was all the rage.
If they are getting good results with C and without OOP, and people like the product, then those from outside the project shouldn't really have any say on it. It's their project.
Yeah, this is super common. Great comment.
If only programming languages (or GenAI) were tools like hammers and augers and drills.
Even then the cabinets you see that come out of shops that only use hand tools are some of the most sturdy, beautiful, and long lasting pieces that become the antiques. They use fewer cuts, less glue, avoid using nails and screws where a proper joint will do, etc.
Comparing it to AI makes no sense. Invoking it is supposed to bring to mind the fact that it's worse in well-known ways, but then the statement 'better in every way' no longer applies. Using Rust passively improves the engineering quality compared to using anything else, unlike AI which sacrifices engineering quality for iteration speed.
No disrespect intended, but your criticism of the analogy reveals that you are speaking from assumptions, but not knowledge, about furniture construction.
In fact, less glue, and fewer fasteners (i.e. design that leverages the strength of the materials), is exactly how quality furniture is made more sturdy.
The traditional joints held up very well and even beat the engineered connectors in some cases. Additionally one must be careful with screws and fasteners: if they’re not used according to spec, they may be significantly weaker than expected. The presented screws had to be driven in diagonally from multiple positions to reach the specified strength; driving them straight in, as the average DIYer would, would have resulted in a weak joint.
Glue is typically used in traditional joinery, so less glue would actually have a negative effect.
And a lot of traditional joinery is about keeping the carcase sufficiently together even after the hide glue completely breaks down so that it can be repaired.
Modern glues allow you to use a lot less complicated joinery.
If the alternative has drawbacks (they always do) or is not as well known by the team, it's perfecly fine to keep using the tool you know if it is working for you.
People who incessantly try to evangelise their tool/belief/preferences to others are often seen as unpleasant to say the least and they often achieve the opposite effect of what they seek.
but everyone with a brain knows the costs are worth the benefits.
And when it comes to programming languages, it's not as clear cut. As exemplified by the article.
So the power tools is a poor analogy.
In reality, this is not the case. Bad code is the result of bad developers. I'd rather have someone writing C code who understands how memory bugs happen than a Rust developer thinking that the compiler is going to take care of everything for them.
There is literally nothing strange or disproportionate. It's incredibly obvious that new languages, that were designed by people who found older languages lacking, are of interest to groups of people interested in new applications of technology and who want to use their new languages.
> then those from outside the project shouldn't really have any say on it. It's their project.
People outside the project are allowed to say whatever the hell they want, the project doesn't have to listen.
Otherwise they would have written something along the lines of "shouldn't say anything about it".
And? GP didn't say that they shouldn't.
What these people do is a disservice to the general open source community, by spreading misinformation and general FUD about critical software that uses C and C++.
Within reason - don't be a dick and all that. :)
To summarize, Rust provides a lot of compile-time discipline that C and C++ are lacking, and many people are tired of maintaining code that was developed without that discipline. Rust makes it harder to write low-effort software.
As a programmer that picks up a new language every 2-3 years (and one that is privileged to have an employer that tolerates this) Rust is really a breath of fresh air; not because it's easier, but because it codifies and enforces some good habits in ways that other languages don't.
Claiming Java has a niche is very funny. I guess the niche is programmable computers? Well done.
Surely that gap has been filled for at least a decade, even if only by Rust itself?
Moreover, I am not sure that serves as an explanation as it shows up in the strangest places. As you mention Go: Visit any discussion about Go and you'll find some arbitrary comment about Rust, even though, as you point out, they don't even exist in the same niche; being different tools for different jobs.
> Go was originally pitched as a C or C++ replacement
It was originally imagined that it would replace C++ for network servers at Google. The servers part was made abundantly clear. In fact, the team behind it expressed quite a lot of surprise that it eventually found a home doing other things as well.
> you don't see Go ported to microcontrollers for instance.
You don't? https://tinygo.org
I think this is the argument made by the "Rust Evangelism Task Force" -- that Rust provides the features that C and C++ are missing. What i meant by "gap" is "the distance between C or C++ and Rust is greater then the distance between C++ and Go (in Go's target use case) or between Java and Kotlin". For the record, I do think all of these languages are great; I'm just trying to reason out the "rewrite it in Rust" mantra that has taken hold in many online communities, including this one.
> You don't? https://tinygo.org
I wasn't aware of this, thank you.
What, exactly, does distance mean here?
The other explicitly told design consideration for Go was for it to "feel like a dynamically-typed language with statically-typed performance". In other words, the assumption was that Googlers were using C++ for network servers not because of C++, but because something like Python (of which Google was the employer of van Rossum at the time!) was too slow. Go was created to offer something more like Python but with performance more like C++. It was a "C++ replacement" only in the sense that C++ is what Google was using where it was considered the "wrong tool for the job". Keep in mind that Go was created before we knew how to make actually dynamically-typed languages fast.
Putting things into perspective, the distance between C++ and Go is approximately the same as the distance between C++ and Python. Which is a pretty big distance, I'd say. C, C++, and Rust are much closer. They are all trying to do essentially the same thing, with Rust only standing out from the other two thanks to its at-the-time unique memory model. So it is apparent that we still don't understand "gap" to mean the same thing.
Another argument offered for Rust is that it's high-level enough that you can also use it for the web (see how many web frameworks it has). So I think that Rust's proponents see it as this universal language that could be good for everything.
Ten years ago the memory model was a compelling benefit, sure, but nowadays we have Fil-C, that C++ static analyzer posted here yesterday, etc. There is some remaining marginal benefit that C and C++ still haven't quite caught up with yet, but is that significantly smaller and continually shrinking gap sufficient to explain things as they stand today?
You are right that the aforementioned assumption did not play out in the end. It turns out that C++ developers did, in fact, choose C++ because of C++ and would have never selected Python even if Python was the fastest language out there. Although, funnily enough, a "faster Python" ended up being appealing to Python developers so Go does ultimately have the same story, except around Python (and Ruby) instead of C++.
> Another argument offered for Rust is that it's high-level enough that you can also use it for the web
It was able to do that ten years ago just as well. That doesn't really explain things either.
Would you mind elaborating on this? The strongtalk heritage of VMs has been around for a while now, and certainly before go was started.
> As a programmer that picks up a new language every 2-3 years (and one that is privileged to have an employer that tolerates this)
does this mean they allow you to tinker around on your own for this, or do you actually then start to deploy things to production written in the new language.
After having quite a long career as a programmer, I realised that if I were ever CTO at a startup, unless there was an absolute proven need to switch languages, I'd mandate only a single language for our (back end) stack. The cost of supporting different languages is just too high.
This claim does not pass the smell test. Tech sprawl is a widely recognized problem, and dumping codebases every 2-3 years is outright unthinkable and pure madness. It doesn't even come across as resume-driven development, because 3 years is not nearly enough to get anyone to a proficient level.
This claim is so outlandish that I'm inclined to dismiss it entirely as completely made-up. There is no project manager in the world who would even entertain this thought.
First of all, Java isn't a platform. Kotlin and Java are both just languages, and Kotlin has explicit interoperability with Java exactly to make it easy for Java devs to "upgrade".
The JVM is a common target for both Java and Kotlin, where the two are intentionally interoperable - from the Kotlin-side, by virtue of explicit annotation. Both languages have other targets through other compilers, e.g., Kotlin's native backend and GraalVM.
The widening gap is not at all moving Kotlin further away from Java developers, but is just increasing the reasons to migrate. It is crucially not making interoperability with existing, legacy Java harder, just giving you more toys. Stuff like suspend functions vs. virtual threads only affects decision making in new application code, and you can for all intents and purposes use either depending on whether you're writing new Kotlin libs for a legacy Java app or a Kotlin app using legacy Java libs.
The C → Rust migrations that happen a lot these days underline how differences in features isn't at all a problem (quite the opposite when there's new features), but that interoperability allowing partial work is by far the most important thing.
Plus, considering that Android apps were responsible for a very significant portion of actively developed Java (I would assume quite a lot), and with Android having gone full Kotlin, a quite significant portion of Java developers will either already have migrated or soon be migrating to follow suit. This will over time affect a quite significant portion of the available skill pool for hiring, which will add additional pressure on enterprise.
There will always be Java, but I'd expect a significant portion of actively developed applications (as opposed to maintenance-mode only applications) to slowly migrate to either Kotlin or something else entirely.
That the JVM and IR have features to help the Java compiler generate better output is obvious but not really relevant. Modern CPUs also have instructions to help C compilers generate better code, but that doesn't make them C platforms. Those are just implementation details.
So no, Java is not a platform. It is a language that sometimes runs on the JVM together with many other large and quite influential languages.
You are being facetious. I mean, do you actually believe that the JVM exists in a context where Java does not exist? What does the J in JVM stand for?
> The JVM is a common target for both Java and Kotlin, where the two are intentionally interoperable (...)
Yes, in the sense that Java exists and Kotlin by design piggybacks on the JVM.
> The C → Rust migrations that happen a lot these days underline how differences in features isn't at all a problem (quite the opposite when there's new features), but that interoperability allowing partial work is by far the most important thing.
This analysis is very superficial and fails to identify any of the real-world arguments to switch from C. For example, Microsoft outright strangled C by wasting many years refusing to support any standard beyond C89, in spite of being directly involved in its drafts. This was a major obstacle to addressing any of the pain points and DX shortcomings. Compare the evolution of C and C++ during that time period, and we see C++ going through the same path in the C++0x days, to then recover spectacularly once C++11 got unstuck.
Java runs on several things that are not the JVM. Android does not use JVM to run Java, and even Oracle is pushing something that is not the JVM.
At the same time, JVM runs many things that are not Java.
If you are somehow implying along the lines of JVM only got initially authored because Java, then that is nothing but a historical fact of little relevance from the early days of the language. If not even Oracle considers Java and JVM one thing - and by virtue of Graal they don't - then it simply isn't as such.
> This analysis is very superficial and fails to identify any of the real world arguments to switch from C
You misread - what you quoted was not an analysis of why the migrations happen. It was a parallel, underlining that migrations do happen in spite of obvious feature differences (and sometimes, because of such differences).
At least Kotlin can theoretically retreat to Android.
This doesn't explain why so many rust activists are going to projects they have no involvement in and demanding they be rewritten in rust.
What's happening is that there are progressive minded people who have progressive minded tactics, where they have a cause and everywhere they go they push their cause regardless of whether the places they are going have anything to do with their cause.
> you don't see Go ported to microcontrollers for instance.
AVRGo disagrees: https://github.com/avrgo-org/avrgo
For the more bulky processors, there's also tamago.
The "long history of safety issues" is actually a combination of being extremely successful (the world runs on C and C++) and production software always featuring bugs.
The moment Rust started to gain some traction, we immediately started seeing CVEs originating from Rust code.
I decided to try it for a medium-sized (~10k LoC) performance sensitive component recently and it has been an absolute joy to use.
Low level memory-managed languages have been C and C++ mostly for a really long time, so I think Rust and Zig probably seem "more new" than the likes of Kotlin, Go, Elixir, Gleam, etc.
Some languages, like elixir, stick around with a low-volume, but consistently positive mention on HN. Which makes me want to use it more.
Kotlin doesn't have a strong case for replacing java because java is, well, just fine. At least it's safe. Sure it's, like, slightly inconvenient sometimes.
And other languages like Go which originally claimed to take on C and C++ just don't. Go is garbage collected; it's not real competition.
But Rust is different. It's actually safe, and it's actually a real competitor. There's basically zero reason to choose C other than "I know it" or "one of my library authors knows it". Which are both very good reasons, but incidentally have nothing to do with the language itself.
More than C++? More than Java? More than Python?
In my mind at least there's a decent risk Rust is going to end up like the next Haskell, its benefits other than safety are not that clear and many of those features can and have been replicated in other languages.
In my mind, the thing that makes rust and zig nice are that they put modern language features in a systems language. Non-nullable pointers and match expressions in a language that runs as fast as C? Yes please.
I love rust, but I personally doubt rust will ever be anywhere near as popular as Python, go, JavaScript and C#. It’s just so complex and difficult to learn. But its niche is pretty clear to me: I see it as a tool for writing beautiful, correct, memory safe, performant systems code. I used to love C. But between zig, rust and Odin, I can’t see myself ever using it again. C is just so much less productive and less pleasant to use than more modern languages.
This is a disingenuous opinion. The level of militancy involved in this propaganda push goes way beyond mere interest. Just look at people who actually enjoy Java, C++, Python, etc. These are the most popular languages that mankind ever developed, and countless people built very successful careers around them. Yet you don't see even a fraction of the fanboys you see constantly pushing these experimental languages.
My only complaint would be that there's many SQL features I want to use which aren't supported. Surely some part of that is deliberately restricted scope, but some might also be dev velocity issues.
DuckDB (C++) can be used like an SQLite with more SQL features, but it's also a lot buggier. Segfaults from your database are not what you want.
So still hoping for some alternative to come along (or maybe I'll need to write it myself)
And beyond standard SQL, stuff like hashmap/bloom filter/vector indices, nearest neighbor search, etc...
A good example of this are the object systems in Scheme and Common Lisp (which are less strictly Functional (note the capital F in that word) than something like Haskell).
Though on further thought, maybe this isn't FP vs OOP, because C has a similar approach of "standing above", and C is the hallmark imperative language.
The US government recently called on everyone to stop using them and move to memory-safe languages.
Regardless, there are practices and tools that significantly help produce safe C code and I feel like more effort should be spent teaching C programmers those.
Edit: Typos, and to make the point that I'm not necessarily defending C, just acknowledging its place. I haven't written a significant amount of C in over 2 decades, probably, aside from microcontroller C.
C cannot be made safe (at scale). It's like asbestos. In fact, C is a hazardous material in exactly the same way as asbestos. Naturally occurring, but over industrialized and deployed far too widely before its dangers were known. Still has its uses but it will fuck you up if you do not use industrial-grade PPE.
Stop using C if you can. Stop arguing other people should use it. There have always been alternatives and the opportunity cost of ecosystems continuing to invest in C has massive externalized costs for the entire industry and society as a whole.
And yet, it never does. It's been powering those types of machines likely longer than you have been alive, and the one exception I can think of where lives were lost, the experts found that the development process was at fault, not the language.
If it was as bad as you make out, we'd have many many many occurrences of this starting in the 80s. We don't.
C is not known to the state of California to cause cancer.
The US government also _really_ (no sarcasm) cares about safety-critical code that can be formally verified, depending on program requirements. DO-178, LOR1, et al.
Developing those toolchains costs tens of millions, getting them certified costs tens of millions, and then buying those products to use costs 500k-1.5m a pop.
Those toolchains do not exist for rust. I am only aware of these toolchains existing for C, and old C at that.
Hell, rust doesn't even have a formal spec, which is a bit of a roadblock.
I think your link agrees with me, actually.
DO-178C isn’t there yet, but I believe I heard that it’s coming. In general, Ferrous Systems works with customer demand, which has been more automotive to start.
Actually having it happen, someone is going to be out 10-30 million bucks. And again for each new compiler version.
It is certainly non-trivial.
Clearly C “can meet” and “has met” DO-178. So, I posit that more languages than C “can meet” this standard.
Proving it is the very hard, very expensive part.
Oh, and whatever version of the rust compiler that gets certified will be locked down as the only certified toolchain. No more compiler updates every 6 weeks. Unless you go though the whole process again.
It's not every six weeks, but it's far faster than once every three years.
I'm sure they also made a few bad decisions too :-P
https://en.wikipedia.org/wiki/United_States_Department_of_Wa...
This congress is not likely to approve it. And the next congress, even less so.
That said, "ever" is probably too strong. There's a window wherein the chaos which is currently being actively created by the US will develop to an extent that compels the US (or is sold to US voters as a necessary step) to adopt a foreign policy where it would be the more appropriate title. And if the adults can't manage that with charismatic leadership in the next election cycle or two, we could be right back here again, with quasi-legitimate geopolitical justification for the sort of big-stick wagging we see today.
I honestly think this is the goal, and I'm not sure the American people are up to the challenge of preventing it.
How would the three letter agencies then spy on people and deliver their payloads on various target devices?
The governments around the world really need the security holes to exist despite what they say.
For that matter, there are a number of compiled memory-safe and safer languages: Dlang, Vlang, Golang, etc., which could be discussed and are equally viable choices. And if we are talking about something that needs to be outright safety-critical, Ada and SPARK should definitely be in the debate.
However, all of that doesn't override the right of projects to decide on what language they believe is best for them or what strategies concerning safety that they wish to pursue.
Golang is not a 100% zero-cost, close-to-the-metal abstraction. You could add Java and .NET too, but they are not replacements for C, obviously.
Golang is not playing in the same niche as C/C++/Rust/Zig, but we have had countless memory safe languages that are indeed a good fit for many uses where C was previously used.
https://reversec.com/usb-armory
> In addition to native support for standard operating environments, such as Linux distributions, the USB armory is directly supported by TamaGo, an Reversec Foundry developed framework that provides execution of unencumbered Go applications on bare metal ARM® System-on-Chip (SoC) processors.
1. With Rust, you may lower the exposure, but the same classes of bugs still remain. And of course, all the other non-memory-related bugs.
2. With C you may, if you wish, develop a strong sensitivity to race conditions, and stay alert. In general it is possible that C programmers have their "bugs antenna" a bit more developed than other folks.
3. With Rust, to decrease the amount of "unsafe" sections, you often need to build abstractions that may be a bit unnatural.
4. Rust may create a false sense of security, and in the unsafe sections the programmer, when reviewing the code, is sometimes falsely reassured by the mandatory SAFETY comment. Like in the Linux kernel bug, where such a comment was hallucinated by a human who sometimes (not sure in this specific case, it's just an example) may be less used to doing the "race spotting" process that C teaches you to do.
5. With Rust, in case of a bug, the fix may no longer be the one-liner you usually see in C fixes, and that can make the exposure time window larger. Sometimes fixing things in Rust means refactoring in non-trivial ways.
6. With C, if there was the same amount of effort in creating wrappers to make kernel programming safer at the cost of other things, the surface of attack could also be lowered in a significant way (see for instance Redis use of sds.c: how many direct strings / pointers manipulation we avoid? The same for other stuff of course). Basically things like sds.c let you put a big part of the unsafe business in a self contained library.
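To make point 6 concrete, here is the flavour of that idea as a toy length-prefixed string (not the actual sds.c API, just the same containment principle): the one place that touches raw allocation and lengths is this tiny wrapper, and the rest of the code never does pointer arithmetic on bare buffers.

    #include <stdlib.h>
    #include <string.h>

    /* Toy length-prefixed string: the length lives next to the bytes, so
       callers never track sizes by hand or overrun the buffer. */
    struct str {
        size_t len;
        char   data[];     /* flexible array member, NUL-terminated */
    };

    static struct str *str_new(const char *init) {
        size_t len = strlen(init);
        struct str *s = malloc(sizeof *s + len + 1);
        if (s == NULL)
            return NULL;   /* allocation failure stays visible right here */
        s->len = len;
        memcpy(s->data, init, len + 1);
        return s;
    }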
So, is Rust an interesting language for certain features it has? Yes. Is Rust a silver bullet? No. So should Rust be "pushed" onto others? Hell no, and I suggest you reply in the firmest way to people pressuring you to adopt Rust at all costs.
I think the idea of developers developing a "bugs antenna" is good in theory, though in practice the kernel, Redis, and many other projects suffer from these classes of bugs consistently. Additionally, that's why people use linters and code formatters even though developers can develop a sensitivity to coding conventions (in fact, these tools used to be unpopular in C-land). Trusting humans develop sensibility is just not enough.
Specifically, about the concurrency: Redis is (mostly) single-threaded, and I guess that's at least in part because of the difficulty of building safe, fast and highly-concurrent C applications (please correct me if I'm wrong).
Can people write safer C (e.g. by using sds.c and the likes)? For sure! Though we've been writing C for 50+ years at this point, at some point "people can just do X" is no longer a valid argument. As while we could, in fact we don't.
Now, if what you're saying is that with super highly optimized sections of a codebase, or extremely specific circumstances (some kernel drivers) you'd need a bit of unsafe rust: then sure. Though all of a sudden you flipped the script, and the unsafe becomes the exception, not the rule; and you can keep those pieces of code contained. Similarly to how C programmers use inline assembly in some scenarios.
Funny enough, this is similar to something that Rust did the opposite of C, and is much better for it: immutable by default (let mut vs. const in C) and non-nullable by default (and even being able to define something as non-null). Flipping the script so that GOOD is default and BAD is rare was a huge win.
I definitely don't think Rust is a silver bullet, though I'd definitely say it's at least a silver alloy bullet. At least when it comes to the above topics.
The biggest `unsafe` sections are probably for SIMD accelerated search. There's no "unnatural abstractions" there. Just a memmem-like interface.
There's some `unsafe` for eliding bounds checks in the main DFA search loops. No unnatural abstractions there either.
There's also some `unsafe` for some synchronization primitives for managing mutable scratch space to use during a search. A C library (e.g., PCRE2) makes the caller handle this. The `regex` crate does it for you. But not for unnatural reasons. To make using regexes simpler. There are lower level APIs that provide the control of C if you need it.
That's pretty much it. All told, this is a teeny tiny fraction of the code in the `regex` crate (and all of its dependencies).
Finally, a demonstration of C-like speed: https://github.com/BurntSushi/rebar?tab=readme-ov-file#summa...
> It is a tradeoff, not a silver bullet.
Uncontroversial.
- C interop
- Low level machine code (eg inline assembly)
Most programs don't need to do either of those things. I think you could directly port redis to entirely safe rust, and it would be just as fast. (Though there will need to be unsafe code somewhere to wrap epoll.)
And even when you need a bit of unsafe, it’s usually a tiny minority of any given program.
I used to think you needed unsafe for custom container types, but now I write custom container types in purely safe rust on top of Vec. The code is simpler, and easier to debug. And I’m shocked to find performance has mostly improved as a result.
First from antirez:
> You don't need much unsafe if you use Rust to replace a Python project, for instance. If there is lower level code, high performances needs, things change.
Use of the term `unsafe` here is referring to the keyword / "blocks" of code. Note that this statement would be nonsensical if talking about `unsafe` as a property of code; certainly it would be inconsistent with the later "unsafe", since later it's claimed that C code is not inherently "unsafe" (therefore Rust would not be inherently "unsafe").
Kibwen staying on that definition here:
> For replacing a Python project with Rust, unsafe blocks will comprise 0% of your code. For replacing a C project with Rust, unsafe blocks will comprise about 5% of your code.
Here is the switch:
> A big amount of C code does not do anything unsafe as well
Complete shift to "unsafe" as being a property of code, no longer talking about the keyword or about blocks of code. You can spot it by just rewriting the sentences to use Rust instead of C.
You can say:
"A big amount of 'unsafe' Rust code does not do anything unsafe as well" "It is also wrong to believe 100% of the unsafe Rust code is basically unsafe."
I think that makes this conflation of terms clear, because we're now talking about the properties of the code within an "unsafe" block or globally in C. Note how clear it is in these sentences that the term `unsafe` is being swapped, we can see this by referring to "rust in unsafe blocks" explicitly.
This is just a change of definitions partway through the conversation.
p.s. @Dang can you remove my rate limit? It's been years, I'm a good boy now :)
Of course, it's not actually this trivial because what you're saying is incorrect. C is not equipped to enforce memory safety; even mundane C code is thoroughly suffused with operations that threaten to spiral off the rails into undefined behavior.
But even with explicit bounds checks, C has an ace up its sleeve.
    int cost_of_nth_item(int n) {
        if (n < 0 || n >= num_items)
            return -1; // error handling
        …
    }

Safe, right? Not so fast, because if the caller has a code path that forgets to initialize the argument, it's UB. Rust achieves a sizable but not complete victory on that front.
I can't find the extreme claims that you seem to argue against.
Of course if one writes unsafe Rust and it leads to a CVE then that's on them. Who's denying that?
On the other hand, having to interact with the part of the landscape that's written in C mandates the use of the `unsafe` keyword and not everyone is ideally equipped to be careful.
I view the existence of `unsafe` as pragmatism; Rust never would have taken off without it. And if 5% of all Rust code is potentially unsafe, well, that's still much better than C where you can trivially introduce undefined behavior with many built-in constructs.
Obviously we can't fix everything in one fell swoop.
>> The recent bug in the Linux kernel Rust code, based on my understanding, was in unsafe code, and related to interop with C. So I wouldn't really classify it as a Rust bug.
Sometimes it's good to read the whole thread.
Helps to read and ingest context.
Though I do agree that in the strictest of technical senses it's indeed a "Rust" bug, as in: bug in code written in Rust.
Of course it's a bug in Rust code. It's just not a bug that you would have to protect against often in most workplaces. I probably would have allowed that bug easily because it's not something I stumble upon more than once a year, if even that.
To that effect, I don't believe it's fair to gauge the ecosystem by such statistical outliers. I make no excuses for the people who allowed the bug. This thread is a very good demonstration as to why: everything Rust-related is super closely scrutinized and immediately blown out of proportion.
As for the rest of your emotionally-loaded language -- get civil, please.
But you and I seem to be much closer in opinion and stance than I thought. Thanks for clarifying that.
1) "interop with C" is part of the fundamental requirements specification for any code running in the Linux kernel. If Rust can't handle that safely (not Rust "safe", but safely), it isn't appropriate for the job.
2) I believe the problem was related to the fact that Rust can't implement a doubly-linked list in safe code. This is a fundamental limitation, and again is an issue when the fundamental requirement for the task is to interface to data structures implemented as doubly-linked lists.
No matter how good a language is, if it doesn't have support for floating point types, it's not a good language for implementing math libraries. For most applications, the inability to safely express doubly-linked lists and difficulty in interfacing with C aren't fundamental problems - just don't use doubly-linked lists or interface with C code. (well, you still have to call system libraries, but these are slow-moving APIs that can be wrapped by Rust experts) For this particular example, however, C interop and doubly-linked lists are fundamental parts of the problem to be solved by the code.
Rust is no less safe at C interop than using C directly.
If Rust is no less safe than C in such a regard, then what benefit is Rust providing that C could not? I am genuinely curious because OS development is not my forte. I assume the justification to implement Rust must be contingent on more than Rust just being 'newer = better', right?
Not really. Yeah, you need to reach into unsafe to make a doubly linked list that passes the borrow checker.
Guess what: you need an unsafe implementation to print to the console. That doesn't mean printing is unsafe in Rust.
That's the whole point of safe abstraction.
And safe abstractions mean this stuff usually only matters if you’re implementing new, complex collection types. Like an ECS, b-tree, or Fenwick tree. Most code can just use the standard collection types. (Vec, HashMap, etc). And then you don’t have to think about any of this.
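To illustrate the safe-abstraction point with a toy example (the function is mine, not from any project discussed here): the unsafe block is sealed behind an API whose contract is safe, so callers never see it.

// Returns the middle element of a non-empty slice, None otherwise.
fn middle<T>(items: &[T]) -> Option<&T> {
    if items.is_empty() {
        return None;
    }
    // SAFETY: len() >= 1, so len() / 2 is strictly less than len() and therefore in bounds.
    Some(unsafe { items.get_unchecked(items.len() / 2) })
}

fn main() {
    assert_eq!(middle(&[1, 2, 3]), Some(&2));
    assert_eq!(middle::<i32>(&[]), None);
}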
This could have happened with no linked lists whatsoever. Kernel locks are notoriously difficult, even for Linus and other extremely experienced kernel devs.
You wrote that question in a browser mostly written in C++ language, running on an OS most likely written in C language.
OS and browser development are seriously hard and took countless expert man hours.
Writing a toy one? Sure.
Writing a real one? Who's gonna write all the drivers and the myriad other things?
And the claim was not that it's "so much easier", but that it is so much easier to write it in a secure way. Which claim is true. But it's still a complex and hard program.
(And don't even get started on browsers, it's no accident that even Microsoft dropped maintaining their own browser).
The point is if it were much easier, then they would overtake existing ones easily, just by adding features and iterating so much faster and that is clearly not the case.
>>difficulty of building safe, fast and highly-concurrent C
This was the original claim. The answer is, there is a tonne of C code out there that is safe, fast and concurrent. Isn't it logical? We have been using C for the last 50 years to build stuff with it and there is a lot of it. There doesn't seem to be a big jump in productivity with the newer generation of low level languages, even though they have many improvements over C.
This is anecdotal, I used to do a lot of low level C and C++ development. And C++ is a much bigger language than C. And honestly I don't think I was ever more productive with it. Maybe the code looked more organized and extendable, but it took the same or larger amount of time to write it. On the other hand when I develop with Javascript or C#, I'm easily 10 times more productive than I would be with either C or C++. This is a bit of an apples and oranges comparison, but what I'm trying to say is that new low level languages don't bring huge gains in productivity.
Reminds me of this classic: "Beware Isolated Demands For Rigor" (https://slatestarcodex.com/2014/08/14/beware-isolated-demand...)
(4) Again, this doesn't seem to be borne out empirically.
(5) I've seen plenty of patches to C code that are way more than a single line for the Linux kernel, but sure, maybe we grant that a bug fix in Rust requires more LOC changed? It'd be nice to see evidence. Is the concern here that this will delay patching? That seems unlikely.
It's not uncommon at all for patches to the C code in the kernel for "make this generally safe" to be thousands of lines of code, threading things like a "length" value through the code, and to take years to complete. I don't think it's fair to compare these sorts of "make the abstraction safe" fixes with "add a single line check" fixes.
(6) Also not borne out. Literally billions spent on this.
> So, is Rust an interesting language for certain features it has? Yes. Is Rust a silver bullet? No.
Agreed. I'm tempted to say that virtually no one contests the latter lol
> So should Rust be "pushed" to others, hell no, and I suggest you to reply in the most firm way to people stressing you out to adopt Rust at all the costs.
I guess? You can write whatever you want however you want, but users who are convinced that Rust code will provide a better product will ask for it, and you can provide your reasoning (as SQLite does here, very well imo) as firm as you'd please I think.
edit: And to this other comment (I'm rate limited): https://news.ycombinator.com/item?id=46513428
> made the current developers so able to reason about race conditions (also when they write Rust).
Aha. What? Where'd you get this from? Definitely not from Linus, who has repeatedly stated that lock issues are extremely hard to detect ahead of time.
> we’ve tweaked all the in-kernel locking over decades [..] and even people who know what they are doing tend to get it wrong several times
https://lwn.net/Articles/808498/
Definitely one of MANY quotes and Linus is not alone.
By now their claims keep popping up in Rust discussion threads without any critical evaluation, so this whole communication is better understood as a marketing effort and not a technical analysis.
Don't expect proofs from empirical data. What we have is evidence. Google has published far better evidence, in my view, than "we have this one CVE, here are a bunch of extrapolations".
> By now their claims keep popping up in Rust discussion threads without any critical evaluation,
Irrelevant to me unless you're claiming that I haven't critically evaluated the information for some reason.
I suppose it's possible. I wonder if I'll become a better driver if I take off my seatbelt. Or even better, if I take my son out of his car seat and just let him roam free in the back seat. I'm sure my wife will buy this.
In all seriousness, your comment reminds me of this funny video: https://www.youtube.com/watch?v=glmcMeTVIIQ
It's nowhere near a perfect analogy, but there are some striking similarities.
But in this specific case, if the respawn feature is not available or dying isn't a desirable event, FAFO might not be the best way to learn how to drive.
Yes, just sucks for the person who you hit with your car, or the person whose laptop gets owned because of your code.
"FAFO" is not a great method of learning when the cost is externalized.
Another good example of this is how civil engineers add safety factors into the design of roads - generous lane widths, straighter curves, and so on - which leads drivers to speed more and decreases road safety overall.
If that were truly the case, we wouldn’t need Rust now, would we!
I think there are effects in both directions here. In C you get burned, and the pain is memorable. In Rust you get forced into safe patterns immediately. I could believe that someone who has done only Rust might be missing that "healthy paranoia". But for teaching in general, it's hard to beat frequent and immediate feedback. Anecdotally it's common for experienced C programmers to learn about some of the rules only late in their careers, maybe because they didn't happen to get burned by a particular rule earlier.
> Rust may create a false sense of security, and in the unsafe sections the programmer sometimes, when reviewing the code, is falsely convinced by the mandatory SAFETY comment.
This is an interesting contrast to the previous case. If you write a lot of unsafe Rust, you will eventually get burned. If you're lucky, it'll be a Miri failure. I think this makes folks who work with unsafe Rust extremely paranoid. It's also easier to sustain that level of paranoia with Rust, because you hopefully only have to consider small bits of unsafe code in isolation, and not thousands of lines of application logic manipulating raw pointers or whatever.
SQLite is an example of an extremely high quality software project. That encompasses not only the quality of the software itself, but the project management around it. That includes explaining various design choices, and this document explaining the choice of language is just one of many such explanations.
Maybe writing about it was taken as an opportunity to clarify their own thinking about the topic?
That’s not to say OOP advocacy has disappeared from HN. It still exists, but it no longer feels dominant or ascendant. If anything, it feels like a legacy viewpoint maintained by a sizable but aging contingent rather than a direction the community is moving toward.
Part of OOP’s staying power comes from what I’d call a cathartic trap. Procedural programming is intuitive and straightforward. OOP, by contrast, offers a new conceptual lens: objects, inheritance, polymorphism, and eventually design patterns. When someone internalizes this framework, there’s often a strong moment of “clicking” where complex software suddenly feels explainable and structured. That feeling of insight can be intoxicating. Design patterns, in particular, amplify this effect by making complexity feel principled and universally applicable.
But this catharsis is easy to confuse with effectiveness. The emotional satisfaction of understanding a system is orthogonal to whether that system actually produces better outcomes. I’ve seen a similar dynamic in religion, where the Bible’s dense symbolism and internal coherence produce a powerful sense of revelation once everything seems to “fit” together. The feeling is real, but it doesn’t validate the underlying model.
In practice, OOP often increases complexity and reduces modularity. This isn’t always obvious from inside the paradigm. It tends to become clear only after working seriously in other paradigms, where composition, data-oriented design, or functional approaches make the tradeoffs impossible to ignore.
Where I disagree is on encapsulation being the “good” part of OOP.
Encapsulation, as a general idea, is positive. Controlling boundaries, hiding representation, and enforcing invariants are all valuable. But encapsulation as realized through objects is where the deeper problem lies. Objects themselves are not modular, and the act of encapsulating a concept into an object breaks modularity at the moment the boundary is drawn.
When you encapsulate something in OOP, you permanently bind state and the methods that mutate that state into a single unit. That decision fixes the system’s decomposition early and hardens it. Behavior can no longer move independently of data. Any method that mutates state is forever tied to that object, its invariants, and its lifecycle. Reuse and recomposition now operate at the object level rather than the behavior level, which is a much coarser and more rigid unit of change.
This is the core issue. Encapsulation in OOP doesn’t just hide implementation details; it collapses multiple axes of change into one. Data representation, behavior, and control flow are fused together. As requirements evolve, those axes almost never evolve in lockstep, but the object boundary forces them to.
What makes this especially insidious is that the failure mode is slow and subtle. OOP systems don’t usually fail immediately. They degrade over time. As new requirements arrive, developers feel increasing resistance when trying to adapt the existing design. Changes start cutting across object boundaries. Workarounds appear. Indirection layers accumulate. Eventually the system is labeled as having “too much tech debt” or being the result of “poor early design decisions.”
But this framing misses the point. The design mistakes were not merely human error; they were largely inevitable given the abstraction. The original object model could not have anticipated future requirements, and because it was not modular enough to allow the design itself to evolve, every change compounded rigidity. The problem wasn’t that the design was wrong. It’s that it was forced to be fixed.
Polymorphism doesn’t fundamentally resolve this, and often reinforces it. While polymorphism itself is not inherently object-oriented, in OOP it is typically expressed through stateful objects and virtual dispatch. That keeps behavior anchored to object identity and mutation rather than allowing it to be recomposed freely as requirements shift.
The deeper requirement is that a system must be modular enough not just to extend behavior, but to change its own design as understanding improves. Object-based encapsulation works directly against this. It locks in assumptions early and makes architectural change progressively more expensive. By the time the limitations are obvious, the system is already entangled.
So while I agree that inheritance deserves much of the criticism, I think encapsulation via objects is the more fundamental problem. It’s not that encapsulation is bad in principle. It’s that object-based encapsulation produces systems that appear well-structured early on, but inevitably accumulate rigidity and hidden coupling over time. What people often call “tech debt” in OOP systems is frequently just the unavoidable artifact of an abstraction that was never modular enough to begin with. OOP was the tech debt.
The way forward is actually simple. Avoid mutation as much as possible. Use static methods (aka functions) as much as possible. Segregate IO and mutation into their own module, separate from all other logic. Those rules are less of a cathartic paradigm shift than OOP, and it takes another leap to see why doing these actually resolves most of the issues with OOP.
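As a toy sketch of that structure (module and function names are mine, purely illustrative): pure, mutation-free logic lives in one module, and IO is quarantined at the boundary.

mod pricing {
    // Pure: no IO, no shared mutable state; trivial to test and to move around.
    pub fn total_with_tax(subtotals: &[f64], tax_rate: f64) -> f64 {
        let subtotal: f64 = subtotals.iter().sum();
        subtotal * (1.0 + tax_rate)
    }
}

fn main() {
    // All IO lives here, at the edge of the program.
    let line_items = vec![9.99, 24.50, 3.25];
    let total = pricing::total_with_tax(&line_items, 0.08);
    println!("total: {total:.2}");
}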
I’m personally with you here. Just in my circle they see it positively. But I agree with you: as long as it helps modularity, great, but also have many downsides that you describe very well.
Your last paragraph also perfectly aligned with my views. I think you are coming from a functional PoV, which luckily seems to have gained some more traction in the last decade or two. Sadly, and before you say it, what often gets emphasized are the parts of functional programming that are not the most useful ones you address here… but maybe, some day…
I don't know about Zig, but my experience with Rust's trait system is that it isn't explicitly against OOP. Traits and generics feel like an extension and generalization of OOP principles. With OOP, you have classes (types) and/or objects (instances) and a bunch of methods specific to the class/object. In Rust, you extend that concept to almost all types, including structs and enums.
> OOP, by contrast, offers a new conceptual lens: objects, inheritance, polymorphism, and eventually design patterns.
Rust doesn't have inheritance in the traditional sense, but most OOP languages prefer composition to data inheritance. Meanwhile, polymorphism, dynamic dispatch and design patterns all exist in Rust.
That’s not oop. Traits and generics are orthogonal to oop. It’s because oop is likely where you learned these concepts so you think the inception of these things is derived from oop.
What’s unique to oop is inheritance and encapsulation.
Design patterns isn’t unique to OOP either but there’s a strong cultural association with it. The term often involves strictly using encapsulated objects as the fundamental building block for each “pattern”.
The origin of the term “design patterns” was in fact established in context of OOP through the famous book and is often used exclusively to refer to OOP but the definition of the term itself is more broad.
That’s an understatement.
You won't find easily Zig programmers that want you to use Zig at all costs, or that believe it's a moral imperative that you do. It's just antithetical to the whole concept of Zig.
The worst that can happen is that Zig programmers want C projects to have a build.zig so they can cross-compile the project trivially, since that's usually not a thing C/C++ build scripts tend to offer. And even then, we have https://github.com/allyourcodebase/ so that Zig users can get their build.zig scripts without annoying project maintainers.
One example is null-- a billion dollar mistake as Tony Hoare called it. A Maybe type with exhaustive pattern matching is so dramatically better, it can be worth switching just for that feature alone.
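A minimal illustration of that point (the function and names are made up for the example): the compiler forces both cases to be handled, so there is no forgotten-null code path.

fn find_user(id: u32) -> Option<&'static str> {
    match id {
        1 => Some("alice"),
        _ => None,
    }
}

fn main() {
    // Omitting the None arm below would be a compile error, not a runtime crash.
    match find_user(2) {
        Some(name) => println!("found {name}"),
        None => println!("no such user"),
    }
}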
ML is not some new development, it just took this long to get some of its ideas mainstream.
Value semantics is the hot thing now I'd say.
It's disingenuous to lump them together. It is the former that does the whole toxic, pushy advocacy routine.
The language itself features interesting ideas, many of them borrowed (pun intended) from Haskell, so not that new after all. But the community behavior proved consistently abysmal. A real put off.
One of the very strange things about C is that it is designed by a committee that is inherently conservative and prefers not to add new features, especially if they have any chance of breaking compatibility. That kind of conservatism seems necessary before Rust can ever become an old, boring language.
But I don't see Rust ever going in such a direction. It seems fundamentally opposed to Rust's philosophy, which is to find the best solution to the problems it's trying to solve, at any cost, including breaking compatibility, at least to some degree.
Consider python2 and python3, you don't need to update your python2 code really, you can just use the python2 interpreter.
There are also automated migration tools to convert 2021 code to 2024. They might fail on some translations, but generally it’s pretty automatic.
So huge difference both in the migration mechanism and the bc story.
So e.g. 2018 Edition said r# at the start of an identifier now marks a "raw" identifier. Keywords promise never to start this way, so r#foo is the same as foo but r#foo even works if some lunatic makes foo a keyword whereas just foo would become a keyword if that happened. As a result if you write
let async = 5;
... in Rust 1.0 that translator treats it exactly as though you'd written let r#async = 5;
... in a modern Rust edition because these days the keyword async exists.

Rust editions only (and rarely!) break your code when you decide to upgrade your project's edition. Your public API stays the same as well (IIRC), so upgrading edition doesn't break your dependents either - unless they don't have a new enough version of the compiler to support said newer edition.
What would be the difference with a binary that has both a py2 and a py3 interpreter, where a flag --edition=2 or =3 dispatches to the right one?
If I have Rust code from 2021, can I add a feature from 2024 and run it with --edition=2021 or 2024? Wouldn't adding a 2024 feature then possibly break the 2021 code?
I think the fact that rust is compiled has a big impact in terms of bc for dependencies. py2 must use py2 dependencies, but rust24 could use rust21 binaries as long as there were no API bc breaks, the code itself is already compiled away.
The difference is that it's not the entire compiler. Rust's editions are only allowed to change the frontend. At the middle part of the compiler, it's all the same thing, and the differences are completely gone. This is the core of how the interop works, and it also means that you can't just change anything in an edition, only smaller things. But completely changing the language every few years wouldn't be good either, so it's fine in practice.
But the Rust team found a great way to avoid breaking backward compatibility: old code gets automatically compiled by the old version of the compiler, whereas more recent code is treated with the latest version of the compiler.
That is much better IMHO than carrying that old baggage along in the language that e.g. the C++ folks struggle with, where every half-decade a pile of additive changes get piled on the previous version of the language, and nobody can clean up the old mess.
That's not what happens. You always use the same version of the compiler. It's just that the newer compiler version also knows several older dialects (known as editions) of the language.
It is also usual for the C++ compilers to support all seven standard library versions too. Rust doesn't have this problem, the editions can define their own stdlib "prelude" (the reason why you can just say Vec or println! in Rust rather than their full names, a prelude is a set of use statements offered by default) but they all share the same standard library.
core::mem::uninitialized() is a bad idea and we've known that for many years, but that doesn't mean you can't use it in brand new 2024 Edition Rust code, it just means doing so is still a bad idea. In contrast C++ removes things entirely from its standard library sometimes because they're now frowned on.
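As a concrete, hedged illustration: the old function still compiles in 2024-edition code (with a deprecation warning), and the replacement it points you toward makes the uninitialized state explicit in the type system.

use std::mem::MaybeUninit;

fn main() {
    // The old, frowned-upon way still exists (deprecated, still a bad idea):
    // let x: i32 = unsafe { std::mem::uninitialized() };

    // The modern equivalent models "not yet initialized" explicitly:
    let mut slot = MaybeUninit::<i32>::uninit();
    slot.write(42);
    // SAFETY: a value was just written, so reading it back is sound.
    let x = unsafe { slot.assume_init() };
    println!("{x}");
}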
The fact that C was effectively "born old" means you can take a C89 program and compile it as C23 and it should simply work, with extremely minimal changes, if any.
That's a killer feature when you're thinking in decades. Which SQLite is.
Claiming that editions are an example of rust failing back compat shows a complete ignorance of the ecosystem or what coding in the language is actually like. People in this thread think you can’t mix n match editions or that migrating is some huge unavoidable obstacle. Indeed I have less fear upgrading editions than I do bumping the version of the c or c++ language or even using a newer compiler that exploits accidental UB that had been lurking.
I don't know where you got this impression, but our switches from the 2018 to 2021 and now 2024 editions went very smoothly. Rust hasn't broken backwards compatibility in any bigger way since 1.0.
There has always been undefined behavior in C, but back in the day, compilers were nowhere near as aggressive in taking advantage of it to make your code faster. Most C programmers tended to treat C as portable assembly and not a language with rules that needed to be respected for it to not blow up your code.
I remember this being endlessly groused over by grognard, traditionalist C programmers, and more than a few of them switched to C++ as a result. After all, if the language was going to be a stickler about correctness, they might as well use a language with the features that meant they didn't have to reach into that bag of tricks.
How much wasted work has been created by compiler authors deciding that they know better than the original software authors and silently break working code, but only in release mode? Even worse, -O0 performance is so bad that developers feel obligated to compile with -O2 or more. I will bet dollars to donuts that the vast majority of the material wins of -O2 in most real world use cases is primarily due to better register allocation and good selective inlining, not all the crazy transformations and eliminations that subtly break your code and rely on UB. Yeah, I'm sure they have some microbenchmarks that justify those code breaking "optimizations" but in practice I'll bet those optimizations rarely account for more than 5% of the total runtime of the code. But everyone pays the cost of horrifically slow build times as well as nearly unbounded developer time loss debugging the code the compiler broke.
Of course, part of the problem is developers hating being told they're wrong and complaining about nanny compilers. In this sense, compiler authors have historically been somewhat similar to sycophantic llms. Rather than tell the programmer that their code is wrong, they will do everything they can to coddle the programmer while behind the scenes executing their own agenda and likely getting things wrong all because they were afraid to honestly tell the programmer there was a problem with their instructions.
That much is true. If you put the dereference of a pointer and the null check in the wrong order, back then both of those statements would still have had code emitted for them.
Now, it is almost certain that one of those statements would not be emitted.
OTOH, compiling with -O0 still emits code for most statements and applies far less dead-code elimination.
int main() {
printf("Sans headers, this is valid C89.");
return 0;
}
Without an explicit declaration, C will consider this function to have the following signature based on the call site: int printf();
By the way, in C () doesn't mean "no parameters", it means "any parameters, which the compiler will infer from the call site and pass to the function."

But in all the cases I can think of, when you look at that Rust today, what it meant in say 2018 Edition seems silly: "Oh, that's a daft thing for that to mean, I'm glad it doesn't mean that now"
We can't magically step into a time machine and fix how it worked at the time, any more than we could go back and cancel those minstrel shows which now seem awful. We can only fix it now and Rust's editions enable this, without the cost of making old code not work in old projects.
for dog in array_of_dogs.into_iter() {
/* In Rust 1.0 we get an immutable reference to each dog from the array */
/* But today (since 2021 Edition) we get the actual dogs, not references - thus consuming the array */
}
One of the changes I'm looking forward to from a future Edition is what happens when I write native ranges like 1..=10 (the integers 1, 2, 3, 4, 5, 6, 7, 8, 9 and 10). Today this means core::ops::RangeInclusive but that type isn't Copy, because it is Iterator. Instead hopefully one day it'll become core::range::RangeInclusive which is Copy, and so can't be Iterator but instead IntoIterator.

So in that future Rust edition (when/if it happens) I can treat 1..=10 the same way as a pair of numbers (1,10) or an array of two numbers [1, 10] which are both Copy, that is, by just copying the bit pattern you are guaranteed to get an object which means the same thing, making life easier for programmers and the compiler. Today, because it isn't Copy, I must do extra busy work, which is slightly annoying and discourages use of this otherwise great type.
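Here is a small sketch of the busy work in question (nothing beyond current stable Rust is assumed): because today's RangeInclusive is itself the Iterator, reusing a range means cloning it, whereas a Copy pair can be handed around freely.

fn main() {
    let r = 1..=10; // core::ops::RangeInclusive<i32>
    let sum: i32 = r.clone().sum(); // must clone: iterating consumes the range
    let count = r.count();          // the original is moved here
    println!("sum = {sum}, count = {count}");

    // A plain pair is Copy, so it can be reused with no clone at all.
    let pair = (1, 10);
    let lo = pair.0;
    let hi = pair.1;
    println!("{lo}..={hi}");
}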
Some of its actual weirdnesses seem no less odd than that to people who aren’t C experts, I assure you.
To illustrate the difference, look at C++: it was designed by a person with strong opinions, but was then left to be controlled by a committee.
If you look at the structure of C++ in Backus Naur form it is absolutely nuts. The compile times have gone through the roof and people leave to create new languages like jai, Zig or whatever.
Common Lisp was designed by committee. It is as chaotic as C++.
Rust is also gigantic. I am very careful not to use it in critical systems, because it is so big, unless we could restrict it with the compiler or something.
You can always use no_std if you so choose, where the language is about the size of C (but still has better utility). Although if you're complaining about the size of the language, C++ is drastically worse in my opinion. Rust starting to work its way into the Linux kernel should be a signal that the language is better suited for paring down than C++.
This comparison confuses me because C is... also controlled by a committee? The evolution of the C standard is under the control of ISO WG14 [0], much like how the C++ standard is under the control of ISO WG21 [1]. This was true for even the first versions of each language that was standardized.
Just imagine if all of Chromium was written in C and used simpler tools like GNU Make and Git for its project.
Once data leaves the fs and is brought (in whole or in part) into a process's memory, access is just limited by the memory bandwidth.
Of course, if you start touching uncached data things slow down, but just for a short while…
Also, the article appears to be quite old, so the submission should have a year appended.
See also previous discussions, the last of which was only a few months ago: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
I would strongly urge you to read and respect the HN guidelines and FAQ. https://news.ycombinator.com/newsguidelines.html https://news.ycombinator.com/newsfaq.html
All that said, some have found reasons to rewrite in Rust, and are currently working on that: https://turso.tech/blog/introducing-limbo-a-complete-rewrite...
https://github.com/tursodatabase/turso
I could see it being useful for pure Rust projects once it's completed. I mean, in Java / Kotlin land, I prefer to use H2 in some cases over SQLite since it's native to the platform and H2 is kind of nice altogether. I could see myself only using this in Rust in place of SQLite if it's easy to integrate and use on the fly.
The issue is more of the object-relational impedance mismatch that happens when using any SQL database: ORMs can be slow / bloated, and hand-written SQL is time consuming.
I shipped a product on SQLite, and SQLite certainly lived up to its promise. What would have been more helpful was if it could index structured objects instead of rows, and serialize / deserialize the whole object. People are doing this now by putting JSON into SQLite. (Our competitors did it when I looked into their SQLite database.)
If you don’t need that, then great—Turso/Limbo might be for you! But there are a ton of use cases out there that rely on SQLite for simultaneous multiprocess access. And I’m not even talking about things that use it from forking servers or for coordination (though those are surprisingly common as well)—instead, lots of processes that use SQLite 99.9% of the time from one process still need it to be multiprocess-authoritative for e.g. data exports, “can I open two copies of an app far enough to get a ‘one is already running’ error?”-type use cases, extensions/plugins, maintenance scripts, etc. not having to worry about cross-process lock files thanks to SQLite is a significant benefit for those.
that said, just because c is best for sqlite today, doesn't mean c is best for you or what you're trying to do.
This is no longer true. Rust, Zig and likely others satisfy this requirements.
> Safe languages usually want to abort if they encounter an out-of-memory (OOM) situation. SQLite is designed to recover gracefully from an OOM. It is unclear how this could be accomplished in the current crop of safe languages.
This is a major annoyance in the rust stdlib. Too many interfaces can panic (and not just in case of an OOM), and some of them don’t even document this.
Rust and Zig satisfy this by being C ABI-compatible when explicitly requested. I'm pretty sure that that solution is not actually what the author meant. When you don't explicitly use `extern "C"` in Rust or `export` in Zig, the ABI is completely undefined and undocumented. I would go as far as arguing that the ABI problem, with Rust at least, is even worse than the ABI problem C++ has. Especially since Distros are (if memory serves) starting to ship Rust crates...
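For concreteness, this is roughly what "C ABI-compatible when explicitly requested" looks like on the Rust side; a minimal sketch with illustrative names. Without these annotations, the struct layout and calling convention are deliberately unspecified.

// A C-compatible layout and a function with the C calling convention.
// (Exporting it under an unmangled symbol name would additionally need the
// no_mangle attribute when building a cdylib/staticlib.)
#[repr(C)]
pub struct Point {
    pub x: f64,
    pub y: f64,
}

pub extern "C" fn point_norm(p: Point) -> f64 {
    (p.x * p.x + p.y * p.y).sqrt()
}

fn main() {
    // Callable from Rust too, of course.
    let p = Point { x: 3.0, y: 4.0 };
    println!("{}", point_norm(p)); // 5
}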
Not saying they should have picked C++, but that's a bit untrue. It's quite easy, given some thought into the API, to invoke C++ code in basically any language which can invoke C code, since you can wrap a C++ implementation in a C interface. I've done it multiple times throughout my career (that ended up being called from Python, Swift, Objective-C/C++, Java, and Kotlin).
And as a side note, you don't have to do object-oriented programming in C++ if you don't want to.
Is it easy to write a nice C interface for C++ that makes heavy use of templates, smart pointers, and move semantics?
I've only seen it done for C++ that is more C than C++, or for libraries that were designed to "bail out" to a C interface.
I've actually run into a similar problem with a C library before, because it heavily used macros as part of its API. Glad most developers don't do that.
If the interface itself has or leaks those features, no that's not easy indeed. But if those do not leak, then they can be used internally yes.
My point was not that it's easy to wrap a currently existing C++ library that has modern features in its interface in a C interface, especially post-C++11.
But that if you design something from the ground up, then it's rather easy (with a certain set of constraints). My bad for not conveying that better.
Where is that statement? The statement I reacted to (and with some caveats) was the following: "Libraries written in C++ or Java can generally only be used by applications written in the same language. It is difficult to get an application written in Haskell or Java to invoke a library written in C++."
Which in my opinion is not true for the reason I mentioned.
> Nothing from c++ ever gets exposed
Depends on your definition of "getting exposed". If you mean "no C++ feature from the language gets exposed" then it's mostly true (you can still wrap certain things like allocators, though painful, but there's certain C++ features that have no real equivalent in some target languages indeed). But you can definitely expose the functionality of C++ code through a C interface.
And it's not like that's a point in favor of C, since C doesn't have vector or unique_ptr either
Classes can be wrapped with a bit of effort. You do need to write the constructors and the destructors manually and invoke a pair of new/delete on the C side, but it's just as you would handle a type allocated by a C library itself. You'd use it the same way. You just have the liberty to have the implementation use (mostly) C++.
In fact, some popular C++ libraries don't even have a stable C++ API, only a C API. LLVM comes to mind. DuckDB is another example
Can you expand a little?
But it ultimately depends on your situation. People elsewhere on this post are listing weaknesses of Rust that I consider to be strengths and vice versa.
Not really.
> it is an improvement in some aspects and a significant step back in others
Of course. Very few improvements are better in every way. There's always something you can find to like about the old solution. Horses are friendlier than cars. Records allow bigger artwork than CDs. Unlike DVDs (at first anyway) you can write to VHS. Unlike Typescript, Javascript doesn't need a compile step. FORTRAN77 fits nicely on punch cards.
That doesn't mean there's really any serious debate about them being improvements.
I will freely admit that Rust has only average compile time (though it's better than C++ at this point), high complexity, and async Rust is full of footguns. But I could come up with a much longer list of complaints for C.
> we must stamp out memory safety issues at all cost and Rust is the solution
Yeah, I think memory safety is actually probably not the most important thing about Rust. Having a modern, strong type system is arguably more significant. Memory safety is important too, of course - even if you don't care about security, memory safety issues are often just plain hard-to-debug bugs.
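As a toy example of the type-system point (the types and names here are mine): encoding states as an enum lets the compiler reject code that forgets a case, independently of any memory-safety concern.

enum Connection {
    Disconnected,
    Connected { session_id: u64 },
}

fn send(conn: &Connection, msg: &str) -> Result<(), &'static str> {
    // Exhaustive match: dropping the Disconnected arm is a compile error.
    match conn {
        Connection::Connected { session_id } => {
            println!("[{session_id}] sending: {msg}");
            Ok(())
        }
        Connection::Disconnected => Err("not connected"),
    }
}

fn main() {
    let conn = Connection::Connected { session_id: 7 };
    send(&conn, "hello").unwrap();
    assert!(send(&Connection::Disconnected, "hi").is_err());
}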
Dynamic languages of course support dynamic linking.
ABI breaks are everywhere in C++... consider a class
class foo {
public:
foo();
void do_something();
private:
int x;
};

and an impl in the dynamic library:

#include "foo.h"
foo::foo() {
this->x = 0;
}
void foo::do_something() { std::cout << this->x << std::endl; }
You build a libfoo.so and clients use it, calling `foo f; f.do_something();`, it calls your library and it's great.

But as soon as you ship a new version that adds a new field to foo (still source-code compatible):
class foo {
public:
foo();
void do_something();
private:
int x;
int y;
};
With a new function body in your shared object:

void foo::do_something() { std::cout << this->x << ", " << this->y << std::endl; }
You're hosed. Clients have to recompile, or they get:

$ ./main
0, 0
*** stack smashing detected ***: terminated
[1] 2625391 IOT instruction (core dumped) ./main
Because the size information for an instance of foo is only known at compile time, the clients aren't allocating enough space on the stack for it (ditto the heap if you're using `new foo();`).

The way around this is awkward and involves the pimpl pattern and moving all your constructors out-of-line... but you also need to freeze all virtual methods (even just adding a new one breaks ABI), and avoid using any template-heavy std:: types (not even std::string), since those are often fragile.
Most people just give up and offer an extern "C" API, because that has the added benefit that it's compatible across compilers.
"true" C++ shared libraries are crazy difficult to maintain. It's the reason microsoft invented COM.
Swift goes through crazy lengths to make this work, and it's impressive: https://faultlore.com/blah/swift-abi/#resilient-type-layout
Rust would have to do something like Swift is doing, and that's probably never going to happen.
I sometimes wonder if what happens is like this:
1. Have problem. Need higher level computer language.
2. Think about problem.
3. Solve problem - 'C'
4. Think about problems with 'C'
5. Attempt to fix problems with 'C' - get C++
6. Think about problems with C & C++
7. Get: Go, F#, Rust, Java, JavaScript, Python, PHP, ...other etc.
I tend to do this. The problem is obvious: I do not repeat step #2. So then I move to the next step.
8. Thinking about how to fix C, C++, Go, F#, Rust, Java, JavaScript, Python, PHP, ...other is too hard.
9. Stop thinking.
C is a Worse is Better language (https://en.wikipedia.org/wiki/Worse_is_better). The bet is that if it's simpler to implement then the fact that what you're implementing isn't great will be dwarfed by that ease of implementation. And it worked for decades which is definitely a success.
You can make a similar argument for Go, but not really for Rust. The other side of the Worse is Better coin is that maybe you could make the Right Thing™ instead.
Because implementing C is so easy and implementing the Right Thing™ is very difficult, the only way this would compete is if almost nobody needs to implement the Right Thing™ themselves. In 1970 that's crazy, each Computer is fairly custom. But by 1995 it feels a lot less "out there", the Intel x86 ISA is everywhere, Tim's crap hypermedia experiment is really taking off. And when Rust 1.0 shipped in 2015 most people were able to use it without doing any implementation work, and the Right Thing™ is just better so why not?
Now, for an existing successful project the calculation is very different, though note that Fish is an example of this working in terms of full throated RIIR. But in terms of whether you should use C for new work it comes out pretty strongly against C in my opinion as a fairly expert C programmer who hasn't written any C for years because Rust is better.
Someone who might not think so may think "Why?" and click it.
I think this makes a bit more sense.
https://sqlite.org/testing.html
>As of version 3.42.0 (2023-05-16), the SQLite library consists of approximately 155.8 KSLOC of C code. (KSLOC means thousands of "Source Lines Of Code" or, in other words, lines of code excluding blank lines and comments.) By comparison, the project has 590 times as much test code and test scripts - 92053.1 KSLOC.
Whoa. TIL there's a minimal build of SQLite that doesn't even need malloc() and free() !
Presumably for embedded devices. I'd love to see examples of that being used in the wild.
Of course Go programmers will say "Just don't do that" but that's exactly what C programmers told us about bounds misses, use after free and so on...
As someone who runs into this problem a lot, this is pretty cool! Does anyone know how they can recover from this in SQLite?
So you have to ensure that in such a situation you can execute code that does not require more memory: ensure that the rest only frees stuff, or preallocate structures for that purpose.
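In Rust, a hedged sketch of that preallocation idea could use the standard library's fallible reservation (the buffer size and the recovery strategy below are made up):

fn main() {
    let mut buf: Vec<u8> = Vec::new();

    // try_reserve reports allocation failure instead of aborting the process.
    match buf.try_reserve(64 * 1024) {
        Ok(()) => {
            // Within the reserved capacity these pushes will not allocate again.
            for b in 0..=255u8 {
                buf.push(b);
            }
            println!("wrote {} bytes", buf.len());
        }
        Err(e) => {
            // Degrade gracefully: free caches, shrink the working set, retry smaller...
            eprintln!("allocation failed: {e}");
        }
    }
}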
How are you running into it?
If you're writing in C, idiomatic code works (check the return values of functions that allocate!)
If you're in C++, Rust or similar, you have to write pretty non-idiomatically to recover from an allocation failure.
`new` throws, `malloc` returns. That's a pretty big difference!
Idiomatic C++ code never puts a `try` around `new`, while idiomatic C code always checks the return from an allocation.
I thought you were talking about the use of malloc in both languages - you never mentioned new in your first post. and i think we have different views on what is "idiomatic" in the languages.
That's fair, but malloc is certainly non-idiomatic, isn't it?
Disclaimer: Credit goes to whoever mentioned this in this discussion before me.
It's just the simplistic first feeling of a language, in the same way I feel like Go is the language for Kubernetes/cloud/net servers, Ruby is for Rails, JS is the browser/bootcamp language, Python is Data Science and AI...
Sure, you get that safety, but you are also throwing away the stability that an old (and maintained) codebase has, just for the sake of being patched to handle the pressure from the real world.
Maybe it's time Rust positions itself as something more than "the language of rewrites".
Edit: Dafny: A verification language capable of enforcing invariants
Many try to replace C. All failed.
I'd love to see a language that could satisfy both high-end needs and low-end needs. I would not know how such a language would look, though.
What do you mean by all failed? Rust supports CPython modules using PyO3 [1]. Even the popular polars dataframe library [2] uses it.
A Rust implementation exists. It's called turso
https://github.com/tursodatabase/turso
There's a whole company around it.
it seems rather stupid to take a sub-heading as the title.
But programming languages are like Math. It is like saying "multiplying is outdated" or "the square root is outdated".
Do you still write FORTRAN and Perl?
The "faster than C for general-purpose programming" is a pretty woolly claim so I'm not entirely surprised nobody claims that, but equally I think to be sure that "because none are" would need some serious caveats to make it true.
In particular you're going to have to say you think it doesn't matter how much developer time is needed. A project where C programmers spend five hours to make it run in 4 seconds is in this caveat "20% faster" than a project where Alt Lang programmers spend five minutes to make it run in 5 seconds - because only those seconds count. This is not how the real world works so it's a pretty nasty caveat.
Your example should instead be:
- 5 hours of developer time to run in 4 seconds * n
- 5 minutes of developer time to run in 5 seconds * n
As long as n <= 17,700, the extra developer time is "not worthwhile": 5 hours minus 5 minutes is 17,700 seconds of developer time, and each run saves 1 second. This assumes that you value user time as much as developer time.
In the case of sqlite, the user time may as well be infinite for determining that side of the equation. It's just that widely used.
But OP is correct, companies don't care as long as it doesn't translate into higher sales (or lower sales because the competitor does better). That's why you see that sort of optimization mainly in FOSS projects, which are not PDD (profits-driven development).
For any project with non-trivial amount of users the C programmer did better. I mean I know companies like to save money and move the costs (time, electricity, hardware costs) to customers but if you care about users any non-trivial speed improvement is worth a lot in aggregate, surely more than your time implementing it.
Databases are a problem space which can absorb all available effort. So if you spend five hours making thing U as fast as it could be that's five fewer hours you can dedicate to thing V or thing W, not to mention X, Y, Z and so on.
No UB, strict types, contracts that can be formally proven, runtime checks, better (than pthreads) concurrency, certifiable.
It's a bitch sometimes, but you'll thank it after release.
> All that said, it is possible that SQLite might one day be recoded in Rust.
...followed by a list of reasons why they won't do it now. I think the first one ("Rust needs to mature a little more, stop changing so fast, and move further toward being old and boring.") is no longer valid (particularly for software that doesn't need async and has few dependencies), but the other ones probably still are.
I write Rust code and prefer to minimize non-Rust dependencies, but SQLite is the non-Rust dependency I mind the least for two reasons:
* It's so fast and easy to compile: just use the `rusqlite` crate with feature `bundled`. It builds the SQLite "amalgamation" (its entire code basically concatenated into a single .c file). No need to have bazel or cmake or whatever installed, no weird library dependency chains, etc. (A minimal usage sketch follows below.)
* It's so well-tested that the unsafety of the language doesn't bother me that much. 100% machine branch coverage is amazing.
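Roughly what that looks like in practice, as a hedged sketch (the crate version is assumed and the schema is made up). With rusqlite = { version = "0.31", features = ["bundled"] } in Cargo.toml, cargo compiles the amalgamation itself:

use rusqlite::{params, Connection, Result};

fn main() -> Result<()> {
    // No system SQLite needed: the bundled feature builds it from source.
    let conn = Connection::open_in_memory()?;
    conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)", [])?;
    conn.execute("INSERT INTO kv (k, v) VALUES (?1, ?2)", params!["greeting", "hello"])?;

    let v: String = conn.query_row("SELECT v FROM kv WHERE k = ?1", params!["greeting"], |row| row.get(0))?;
    println!("{v}");
    Ok(())
}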
1. Stability: Ada is an "old and boring" language (aka stable and reliable - the first standard for the language was issued in 1983 and created for the Defence industry) - https://learn.adacore.com/courses/intro-to-ada/chapters/intr...
2. Compatibility: Interoperability features is built-in, allowing you to create Ada libraries that can be used with other languages - https://piembsystech.com/interfacing-ada-programming-with-c-...
3. Embedded Programming: Ada has been designed for embedded programming and the language includes constructs for directly manipulating hardware, for example, and direct interaction with assembly language. - https://learn.adacore.com/courses/intro-to-embedded-sys-prog...
4. Ada supports processes / suites to comply with Branch coverage in ISO 26262 ( https://github.com/NVIDIA/spark-process ) and Decision coverage in DO-178C ( https://learn.adacore.com/booklets/adacore-technologies-for-... ).
5. Ada has features supporting memory safety to avoid many memory issues in the first place ( https://www.adacore.com/blog/memory-safety-in-ada-and-spark-... ) and using its exception handling and programming techniques (like pre-allocating "guard" memory) can allow Ada programs to tackle fringe OOM errors.
6. Performance: Ada is built for systems programming, is just as fast as C ( https://learn.adacore.com/courses/Ada_For_The_Embedded_C_Dev... ), with the added bonus of "modern" programming features that can generate more "reliable" code
I think the "lingua franca" argument for C and the points at the end about what they'd need from Rust to switch do go beyond merely justifying a decision that's already been made, though.
This is such an underappreciated feature.
I'm tired of languages that just keep adding and adding to a language.
Work on the standard library or ecosystem, but stop changing the language otherwise you'll never end up anywhere because there is always going to be something that would benefit from some love.
Hell, I'd say the same is true for many libraries.
This isn't entirely true. There are a bunch of UBs that you have to be very careful about. C has the right idea about simplicity. But deterministic behavior is also important because someone will find a way to make use of UBs in their code.
Designated initializers, compound literals, binary literals and the way to printf numbers in binary, constexpr, alignas, #embed, stdbit.h, memset_explicit, restrict to name the ones I really like.
This article makes the case for C as a low-level language; I then offer JavaScript as a functional-ish language for manipulating domain-model data structures on top.
Doesn't the language compiler write the code that checks if the array access is in-bounds? Why would you need to test the compiler's code?
...saying that for a statement `if( a>b && c!=25 ){ d++; }`, they use 100% machine-code branch coverage as a way of determining that they've evaluated it with `a<=b`, `a>b && c==25`, and `a>b && c!=25`. (C/C++) branch coverage tools I've used are less strict, only requiring that the code takes both the if and else paths.
One could imagine a better high-level branch coverage tool that achieves this intent without dropping to the machine code level, but I'm not sure it exists today in Rust (or any other language for that matter).
There might also be an element of "we don't even trust the compiler to be correct and/or ourselves to not have undefined behavior" here, although they also test explicitly for undefined behavior as mentioned later on the page.
let val = arr[i]
to assembly code like:

cmp rdx, rsi ; Compare i (rdx) with length (rsi)
jae .Lpanic_label ; Jump if i >= length
; later...
.Lpanic_label:
call core::panicking::panic_bounds_check
Are they saying with "correct code" the line of source code won't be covered? Because the assembly instruction to call panic isn't ever reached?

It might be reasonable to redefine their metric as "100% branch coverage except for panics"... if you can reliably determine that `jae .Lpanic_label` is a panic jump. It's obvious to us reading your example of course, but I don't know that the compiler guarantees panics always "look like that", and only panics look like that.
I've had many new cars and the car I drive today is a classic, I love it. Takes me everywhere, replace the parts and still goes. No rust. Can't say the same for a new car. I had a brand new truck a few years ago and I sold it. I had way too many problems. Doors being not aligned, electrical problems. Just shit all around. I had a Ford Focus ST as well for a for awhile too and I hated it. I couldn't even go in a car wash without it leaking through the roof, new car as well. Crazy!!!
This feels like they're responding to people asking for SQLite to be rewritten in Java. Who are these people?!
I don't think they thought about this too much. In the C code are they testing the "branch" into Undefined Behaviour?
I've heard it said you shouldn't use asserts in production code, but handle errors and exceptions instead? That said, can't you just 'if !condition() {panic...}' to simulate the behavior? Or are they talking about something specific about Golang?
2. Ada is about as old as C so should have been considered as a possible safe language. It's safer than Rust in my opinion. GNAT was released in 1995 while SQLite is from 2000, so it was a possibility. C++ was also around: still unsafe, though less so than C.
I also noticed many software companies have been switching to SQLite for storing data. Audacity has switched from using hundreds of tiny audio files in multiple folders for a single recording to a single file using SQLite. I suspect it is the reliability that comes from sticking to C