One bug is all it takes to compromise the entire system.
The monolithic UNIX kernel was a good design in the '60s; today, we should know better [0][1].
So is Mach, by the way, if you can afford the microkernel performance overhead.
The Linux kernel was and is a monstrosity.
I don't think they ever intended to keep all drivers strictly userland, though. Just the service side.
The Windows NT code was engineered to be portable across many different architectures--not just x86--so it has a hardware abstraction layer. The kernel only ever communicated with the device-driver implementations through this abstraction layer, so the kernel code itself was isolated.
That doesn't mean the device drivers were running in user-land privilege, but it does mean that the kernel code is quite stable and easy to reason about.
When Microsoft decided to compromise on this design, I remember senior engineers--when I first started my career--being abuzz about it for Windows NT 4.0 (or apparently earlier?).
And it is irrelevant anyway, given that this comment was written from 10.0.26100.
You’re saying they improved the design. I know they added user-privilege device driver support for USB (etc.); did they revert the display compromise/mess as well?
Someone will of course bring up XNU, but the microkernel aspect of it died when they smashed the FreeBSD kernel into the codebase. DriverKit has brought some userspace drivers back, but they use shared memory for all the heavy lifting.
As opposed to Mach, which is not used in any actual systems.
NTFS/Win32 gives you, among other things:
- mandatory byte-range locks enforced by the kernel
- explicit sharing modes
- guarantees around write ordering and durability
- safe delete-on-close
- first-class cache coherency contracts for networked access
POSIX aims for portability while NTFS/Win32 aims for explicit contracts and enforced behavior. For apps assuming POSIX semantics (e.g. git) NTFS feels rigid and weird. Coming the other way from NTFS, POSIX looks "optimistic" if not downright sloppy.
Of course ZFS et al. are theoretically more robust than ext4, but they are still limited by the lowest-common-denominator POSIX API. Maybe you can detect that you're dealing with a ZFS-backed volume and use extra ioctls to improve things, but it's still a messy business.
It's also worth noting that some of them are more database-like than classical filesystems.
Ah, and modern Windows also has Resilient File System (ReFS), which is used on Dev Drives.
`FILE_FLAG_DELETE_ON_CLOSE`’s equivalent on POSIX is just rm. Windows doesn’t let you open new handles to `FILE_FLAG_DELETE_ON_CLOSE`d files anyway, so it’s effectively the same. The inode will get deleted when the last open file description is closed.
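A minimal sketch of that POSIX behavior, using only the Rust standard library (the file name is made up, and the behavior shown is the Unix one): the name disappears as soon as it's unlinked, but the already-open handle keeps working until it's dropped.

    use std::fs::{self, File};
    use std::io::{Read, Seek, SeekFrom, Write};

    fn main() -> std::io::Result<()> {
        // Open a scratch file, then unlink it while the handle is still open.
        let mut f = File::options()
            .read(true)
            .write(true)
            .create(true)
            .truncate(true)
            .open("scratch.tmp")?;
        fs::remove_file("scratch.tmp")?; // the name is gone immediately on POSIX

        // The open file description still works; the inode lives on until the
        // last handle goes away.
        f.write_all(b"still readable after unlink")?;
        f.seek(SeekFrom::Start(0))?;
        let mut buf = String::new();
        f.read_to_string(&mut buf)?;
        println!("{buf}");
        Ok(())
    } // inode reclaimed here, when `f` is dropped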
NFS is a disaster though, I’ll give you that one. Though mandatory locks on SMB shares hanging is also very aggravating in its own right.
Say the USB system runs in its own isolated process. Great, but if someone pwns the USB process they can change disk contents, intercept and inject keystrokes, etc. You can usually leverage that into a whole system compromise.
Same with most subsystems: GPU, network, file system process compromises are all easily leveraged to pwn the whole system.
We call AI models "open source" if you can download the binary and not the source. Why not programs?
Who's "we"? There's been quite a lot of pushback on this naming scheme from the OSS community, with many preferring the term "open weights".
the weights of a model aren't equivalent to the binary output of source code, no matter how you try to stretch the metaphor.
>why not
because we aren't obliged to change all definitions and concepts just because some guy at some corp said so.
We want human readable, comprehensible, reproducible and maintainable sources at minimum when we say open source.
Impressive results on the model; I'm surprised they improved it with very simple heuristics. Hopefully this tool will be made available to the kernel developers and integrated into the workflow.
It is worth noting that the class of bugs described here (logic errors in highly concurrent state machines, incorrect hardware assumptions) wouldn't necessarily be caught by the borrow checker. Rust is fantastic for memory safety, but it will not stop you from misunderstanding the spec of a network card or writing a race condition in unsafe logic that interacts with DMA.
That said, if we eliminated the 70% of bugs that are memory safety issues, the signal-to-noise ratio for finding these deep logic bugs would improve dramatically. We spend so much time tracing segfaults that we miss the subtle corruption bugs.
Rust is not just about memory safety. It also has algebraic data types, RAII, and other features, which greatly help in catching these kinds of silly logic bugs.
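For instance, a minimal sketch with a made-up LinkState enum: modeling state as an algebraic data type rather than an int plus flags means an exhaustive match forces every case to be handled, so a forgotten case becomes a compile error rather than a silent logic bug.

    // Hypothetical link state modeled as an algebraic data type.
    enum LinkState {
        Down,
        Negotiating { attempts: u32 },
        Up { speed_mbps: u32 },
    }

    fn describe(state: &LinkState) -> String {
        // The match must cover every variant; adding a new variant later
        // turns every unhandled match into a compile error.
        match state {
            LinkState::Down => "link down".to_string(),
            LinkState::Negotiating { attempts } => format!("negotiating, attempt {attempts}"),
            LinkState::Up { speed_mbps } => format!("up at {speed_mbps} Mb/s"),
        }
    }

    fn main() {
        println!("{}", describe(&LinkState::Up { speed_mbps: 1000 }));
    }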
While the bugs you describe are indeed things that aren't directly addressed by Rust's borrow checker, I think the article covers more ground than your comment implies.
For example, a significant portion (most?) of the article is simply analyzing the gathered data, like grouping bugs by subsystem:
    Subsystem        Bug Count  Avg Lifetime
    drivers/can            446     4.2 years
    networking/sctp        279     4.0 years
    networking/ipv4      1,661     3.6 years
    usb                  2,505     3.5 years
    tty                  1,033     3.5 years
    netfilter            1,181     2.9 years
    networking           6,079     2.9 years
    memory               2,459     1.8 years
    gpu                  5,212     1.4 years
    bpf                    959     1.1 years
Or by type:

    Bug Type          Count  Avg Lifetime  Median
    race-condition    1,188     5.1 years  2.6 years
    integer-overflow    298     3.9 years  2.2 years
    use-after-free    2,963     3.2 years  1.4 years
    memory-leak       2,846     3.1 years  1.4 years
    buffer-overflow     399     3.1 years  1.5 years
    refcount          2,209     2.8 years  1.3 years
    null-deref        4,931     2.2 years  0.7 years
    deadlock          1,683     2.2 years  0.8 years
And the section describing common patterns for long-lived bugs (10+ years) lists the following:

> 1. Reference counting errors
> 2. Missing NULL checks after dereference
> 3. Integer overflow in size calculations
> 4. Race conditions in state machines
All of which cover more ground than listed in your comment.
Furthermore, the 19-year-old bug case study is a refcounting error not related to highly concurrent state machines or hardware assumptions.
It’s also worth noting that Rust doesn’t prevent integer overflow, and it doesn’t panic on it by default in release builds. Instead, the safety model assumes you’ll catch the overflowed number when you use it to index something (a constant source of bugs in unsafe code).
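To make that concrete, here's a minimal standalone sketch (not kernel code; the relevant knob is the overflow-checks profile setting, which is off by default in release builds):

    fn main() {
        let len: u32 = u32::MAX;

        // `len + 1` panics in debug builds but silently wraps to 0 in a default
        // release build. wrapping_add makes the wrap explicit either way.
        let total = len.wrapping_add(1);
        println!("wrapped total = {total}");

        // The memory-safety net kicks in at the use site instead: indexing with
        // a bogus value panics in safe Rust rather than reading out of bounds.
        let buf = vec![0u8; 4];
        println!("element = {}", buf[total as usize]); // index 0 here; a wrong
                                                       // index would panic, not
                                                       // corrupt memory
        // Checked arithmetic surfaces the overflow at its source.
        match len.checked_add(1) {
            Some(v) => println!("sum = {v}"),
            None => println!("overflow detected"),
        }
    }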
I’m bullish about Rust in the kernel, but it will not solve all of the kinds of race conditions you see in that kind of context.
The example given looks like a generalized illustration:
    spin_lock(&lock);
    if (state == READY) {
        spin_unlock(&lock);
        // window here where another thread can change state
        do_operation(); // assumes state is still READY
    }
So I don't think you can draw strong conclusions from it.

> I’m bullish about Rust in the kernel, but it will not solve all of the kinds of race conditions you see in that kind of context.
Sure, all I'm trying to say is that "the class of bugs described here" covers more than what was listed in the parentheses.
Which is also why, in my opinion, Zig is much more suitable: it actually addresses the readability aspect without bringing huge complexity with it.
How? It doesn't look very different from Rust. In terms of readability, Swift does stand out among LLVM frontends; I don't know whether it is or can be used for systems programming, though.
I think they are right in that claim, but in making it so, at least some of the code loses some of the readability of Swift. For truly low-level code, you’ll want to give up on classes, may not want copy-on-write collections, and may need to add quite a few annotations.
To some extent that argument is just arithmetic: if you can find a way to greatly reduce the incidence of non-logic bugs while not addressing other bugs, then of course logic bugs will make up a greater proportion of what remains.
I think it's also worth considering the fact that while Rust doesn't guarantee that it'll catch all logic bugs, it (like other languages with more "advanced" type systems) gives you tools to construct systems that can catch certain kinds of logic bugs. For example, you can write lock types in a way that guarantees at compile time that you'll take locks in the correct order, avoiding deadlocks [0]. Another example is the typestate pattern [1], which can encode state machine transitions in the type system to ensure that invalid transitions and/or operations on invalid states are caught at compile time.
These, in turn, can lead to higher-order benefits as offloading some checks to the compiler means you can devote more attention to things the compiler can't check (though to be fair this does seem to be more variable among different programmers).
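As a rough illustration of the typestate idea mentioned above (a minimal, hypothetical Conn example, not taken from the linked references): states become types, operations only exist on the states where they are valid, and illegal transitions simply don't compile.

    use std::marker::PhantomData;

    // Hypothetical connection states as zero-sized marker types.
    struct Closed;
    struct Open;

    struct Conn<State> {
        _state: PhantomData<State>,
    }

    impl Conn<Closed> {
        fn new() -> Self {
            Conn { _state: PhantomData }
        }
        // Consumes the closed connection and returns an open one.
        fn open(self) -> Conn<Open> {
            Conn { _state: PhantomData }
        }
    }

    impl Conn<Open> {
        // `send` only exists on an open connection.
        fn send(&self, _bytes: &[u8]) {}
    }

    fn main() {
        let conn = Conn::new().open();
        conn.send(b"hello");
        // Conn::new().send(b"oops"); // compile error: no `send` on Conn<Closed>
    }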
> Rust is not an extraordinary readable language in my opinion, especially in the kernel where the kernel has its own data structures.
The above notwithstanding, I'd imagine it's possible to think up scenarios where Rust would make some logic bugs more visible and others less so; only time will tell which prevails in the Linux kernel, though based on what we know now I don't think there's strong support for the notion that logic bugs in Rust are substantially more common than they have been in C, let alone because of readability issues.
Of course there's the fact that readability is very much a personal thing and is a multidimensional metric to boot (e.g., a property that makes code readable in one context may simultaneously make code less readable in another). I don't think there would be a universal answer here.
"Each mutex has a type parameter which represents the data that it is protecting. The data can only be accessed through the RAII guards returned from lock and try_lock, which guarantees that the data is only ever accessed when the mutex is locked."
Even if used with more complex operations, the RAII approach means that the example you provided is much less likely to happen.
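A minimal sketch of how that plays out against the C snippet above (hypothetical State enum, std::sync::Mutex rather than a kernel spinlock):

    use std::sync::Mutex;

    #[derive(PartialEq)]
    enum State { Ready, Busy }

    fn do_operation(state: &mut State) {
        // Still under the lock here, so the state can't change underneath us.
        *state = State::Busy;
    }

    fn main() {
        let lock = Mutex::new(State::Ready);

        // The data is only reachable through the guard returned by lock(), so
        // the "check, unlock, then act on stale state" pattern has to be
        // written out deliberately instead of happening by accident.
        let mut guard = lock.lock().unwrap();
        if *guard == State::Ready {
            do_operation(&mut guard);
        }
        // The guard is dropped here, releasing the lock (RAII).
    }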
But in the listed categories, I’m equally skeptical that none of them would have benefited from Rust even a bit.
It always surprised me how the top-of-the line analyzers, whether commercial or OSS, never really implemented C-style reference count checking. Maybe someone out there has written something that works well, but I haven’t seen it.
Rewriting it all in Rust is extremely expensive, so it won't be done (soon).
As far as I know, that's why Microsoft rewrote the TypeScript compiler in Go instead of Rust.
The Rust phantom zealotry is unfortunately real.
[1] Aha, but the chilling effect of dismissing RIR comments before they are even posted...
Is this an irrational fear, I wonder? Reminds me of methods used in the political discourse.
> Is this an irrational fear, I wonder? Reminds me of methods used in the political discourse.
In a sad sort of way, I think it's hilarious that HN users have been so completely conditioned to expect Rust evangelism any time a topic like this comes up that they wanted to get ahead of it.
Not sure who it says more about, but it sure does say a whole lot.
In my experience it's closer to 5%.
Basically, 70% of high-severity bugs are memory-safety issues.
[1] https://www.chromium.org/Home/chromium-security/memory-safet...
It's worth noting that if you write memory safe code but mis-program a DMA transfer, or trigger a bug in a PCIe device, it's possible for the hardware to give you memory-safety problems by splatting invalid data over a region that's supposed to contain something else.
https://rust-unofficial.github.io/too-many-lists/
https://chadaustin.me/2024/10/intrusive-linked-list-in-rust/
I think "is" is a bit too strong. "Can be", sure, but I'm rather skeptical that all uses of unsafe Rust will be more difficult than writing equivalent C/C++ code.
Just worth noting that it is a significant extrapolation from only "28%" of fix commits to assume that the average is 2 years.
Here is a device driver bug that was around for 11 years.
https://www.bitdefender.com/en-us/blog/hotforsecurity/google...
It's not uncommon for the bugs they found to be rediscovered 6-7 years later.
Profiting from selling their patchset is not the whole story, though. grsec was public and free for a long time, and there were many factors at play preventing the kernel from adopting it.
1. Historically, tons of bugs have been reported upstream by grsecurity.
2. Tons of critical security mitigations in the kernel were outright invented by that team. ASLR, SMAP, SMEP, NX, etc.
3. They were completely FOSS until very recently.
4. They have always maintained that they are entirely willing to upstream patches but that it's a lot of work and would require funding. Upstream has always been extremely hostile towards attempts to take small pieces of Grsecurity and upstream them.
The average lifetimes are fascinating. Race conditions at 5.1 years vs null-deref at 2.2 years makes intuitive sense - the former needs specific timing to manifest, while the latter will crash obviously once you hit the code path. The ones that need rare conditions to trigger are the ones that survive longest.
Yeah, it's pretty common. We had a customer years ago that was having a rare and random application crash under load. We never could figure out where it was from. Quite some time later a batch load interface was added to the app, and with the rate at which things were input through it, the crash could be triggered reliably.
It's something else that's added/changed in the application that eventually makes the bug stand out.
I have a server with many peripherals and multiple GPUs. Now, I can use vfio and vfio-pci to memory-map and access their registers in user space. My question is: how could I get started with kernel driver development? And I specifically mean the dev setup.
Would it be a good idea to use vfio, with or without a VM, to write and test drivers? What's the best way to debug, reload, and test changes to an existing driver?
That seems frightening at first. However, the more I consider it, the more it seems... predictable.
The mental model that I find useful:

- Users discover surface bugs.
- Deep bugs only appear in infrequent combinations.
- Some bugs only show up once new context arrives.

I've observed a few patterns:

- Undefined-behavior bugs can stay hidden indefinitely.
- Uncommon hardware or timing conditions matter more than the logic error itself.
- Security flaws frequently exist long before they are exploited.

I'm curious what other people think of this:

- Do persistent bugs indicate stability or failure?
- What typically leads to their discovery?
- To what extent do you trust "well-tested" code?
I don't, which is why I use Qubes OS, which provides security through compartmentalization.
No, they are often found and fixed.
Am I the only unreasonable maniac who wants a very long-term stable, seL4-like capability-based, ubiquitous, formally-verified μkernel that rarely/never crashes completely* because drivers are just partially-elevated programs sprinkled with transaction guards and rollback code for critical multiple resource access coordination patterns? (I miss hacking on MINIX 2.)
* And never need to reboot or interrupt server/user desktop activities because the core μkernel basically never changes since it's tiny and proven correct.
IMHO the fact that a bug hides for years can also be an indication that the bug had low severity/low priority, and therefore that the overall quality is very good. Unless the time represents how long it takes to reproduce and resolve a known bug, but in that case I would not say that the bug "hides" in the kernel.
It doesn't seem to indicate that. It indicates the bug just isn't in tested code or isn't reached often. It could still be a very severe bug.
The issue with longer-lived bugs is that someone could have been leveraging them for longer.
Not really true. A lot of very severe bugs have lurked for years and even decades. Heartbleed comes to mind.
The reason these bugs often lurk for so long is because they very often don't cause a panic, which is why they can be really tricky to find.
For example, use after free bugs are really dangerous. However, in most code, it's a pretty safe bet that nothing dangerous happens when use after free is triggered. Especially if the pointer is used shortly after the free and dies shortly after it. In many cases, the erroneous read or write doesn't break something.
The same is true of the race condition problems (which are some of the longest lived bugs). In a lot of cases, you won't know you have a race condition because in many cases the contention on the lock is low so the race isn't exposed. And even when it is, it can be very tricky to reproduce as the race isn't likely to be done the same way twice.
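A minimal sketch of that kind of check-then-act race (a hypothetical balance example, nothing kernel-specific): under low contention the bad interleaving almost never fires, which is exactly how these bugs stay hidden.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Each thread checks the balance, drops the lock, then withdraws.
        let balance = Arc::new(Mutex::new(100));
        let mut handles = Vec::new();
        for _ in 0..2 {
            let balance = Arc::clone(&balance);
            handles.push(thread::spawn(move || {
                let enough = *balance.lock().unwrap() >= 100; // check
                // Lock released here: the window where the other thread can act.
                if enough {
                    *balance.lock().unwrap() -= 100; // act on a stale check
                }
            }));
        }
        for h in handles {
            h.join().unwrap();
        }
        // Usually prints 0; very rarely -100, when both checks pass before
        // either withdrawal runs.
        println!("final balance: {}", *balance.lock().unwrap());
    }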
I don’t know much about Heartbleed, but Wikipedia says:
> Heartbleed is a security bug… It was introduced into the software in 2012 and publicly disclosed in April 2014.
Two years doesn’t sound like “years or even decades” to me? But again, I don’t know much about Heartbleed so I may be missing something. It does say it was also patched in 2014, not just discovered then.
Part of the resolution to the problem, I believe, was removing a fair number of unsupported platforms. It also ended up spawning alternatives to OpenSSL like BoringSSL, which tried to remove as much as possible to guard against this very bug.
https://en.wikipedia.org/wiki/Shellshock_(software_bug)
The bug was introduced into the code in 1989, and only found and exploited in 2014.
Our bug dataset was way smaller, though, as we had to pinpoint all bug introductions unfortunately. It's nice to see the Linux project uses proper "Fixes: " tags.
Sort of. They often don't.
One criticism of Rust (and, no, I'm not saying "rewrite it in Rust", to be clear) is that the borrow checker can be hard to use whereas many C++ engineers (in particular, for some reason) seem to argue that it's easier to write in C++. I have two things to say about that:
1. It's not easier in C++. Nothing is. C++ simply allows you to make mistakes without telling you. Getting things correct in C++ is just as difficult as in any other language, if not more so, due to the language complexity; and
2. The Rust borrow checker isn't hard or difficult to use. What you're doing is hard and difficult to do correctly.
This is why I favor cooperative multitasking and battle-tested concurrency abstractions whenever possible. For example, the cooperative async-await of Hack and the model of a single thread responding to a request and then discarding everything in PHP/Hack is virtually ideal (IMHO) for serving Web traffic.
I remember reading about Google's work on various C++ tooling including valgrind and that they exposed concurrency bugs in their own code that had lain dormant for up to a decade. That's Google with thousands of engineers and some very talented engineers at that.
The implementations of sort in Rust are filled with unsafe.[0]
Another example is that of doubly linked lists.[1] It is possible to implement a doubly linked list correctly in C++ without much trouble. In Rust, it can be significantly more challenging.
In C++, pointers are allowed to alias if their types, roughly said, are compatible. In Rust, there are stricter rules, and getting those rules wrong in an unsafe block, or code outside unsafe blocks that code inside unsafe blocks rely on, will result in breakage of memory safety.
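For example, a minimal sketch of the kind of aliasing mistake only unsafe Rust makes possible (this compiles, but it is undefined behavior and tools like Miri will flag it):

    fn main() {
        let mut x: u32 = 0;
        let p = &mut x as *mut u32;

        unsafe {
            // Materializing two simultaneous exclusive (&mut) references to the
            // same location violates Rust's aliasing rules -- undefined behavior,
            // even though two ordinary non-restrict pointers in C++ would be fine.
            let a = &mut *p;
            let b = &mut *p;
            *a += 1;
            *b += 1;
        }

        println!("{x}");
    }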
This has been discussed by others.[2]
Based on that, do you agree that there are algorithms and data structures that are significantly easier to implement efficiently and correctly in C++ than in Rust? And thus that you are being completely wrong in your claim?
[0] https://github.com/rust-lang/rust/blob/main/library/core/src...
[1] https://rust-unofficial.github.io/too-many-lists/
[2] https://chadaustin.me/2024/10/intrusive-linked-list-in-rust/
Strictly speaking, the mere presence of `unsafe` says nothing on its own about whether "it" is easier in C++. Not only does `unsafe` on its own say nothing about the "difficulty" of the code it contains, but that is just one factor of one side of a comparison - very much insufficient for a complete conclusion.
Furthermore, "just" writing a sorting algorithm is pretty straightforwards both in Rust and C++; it's the more interesting properties that tend to make for equally interesting implementations, and one would need to procure Rust and C++ implementations with equivalent properties, preferably from the same author(s), for a proper comparison.
Past research has shown that Rust's current sorting algorithms have different properties than C++ implementations from the time (e.g., the "X safety" results in [0]), so if nothing substantial has changed since then there's going to be some work to do for a proper comparison.
Edit: forgot to add the reference [0]: https://github.com/Voultapher/sort-research-rs/blob/main/wri...
I'm not sure "they found strict aliasing too difficult" is an entirely correct characterization? From this rather (in)famous email from Linus [0]:
The fact is, using a union to do type punning is the traditional AND
STANDARD way to do type punning in gcc. In fact, it is the
*documented* way to do it for gcc, when you are a f*cking moron and
use "-fstrict-aliasing" and need to undo the braindamage that that
piece of garbage C standard imposes.
[snip]
This is why we use -fwrapv, -fno-strict-aliasing etc. The standard
simply is not *important*, when it is in direct conflict with reality
and reliable code generation.
The *fact* is that gcc documents type punning through unions as the
"right way". You may disagree with that, but putting some theoretical
standards language over the *explicit* and long-time documentation of
the main compiler we use is pure and utter bullshit.
[0]: https://lkml.org/lkml/2018/6/5/769

There are entire classes of structures that, no, aren't hard to do properly, but that the borrow checker makes artificially hard due to design limitations that are known to be sub-optimal.
No, doubly linked lists and partially editable data structures aren't inherently hard. It's a Rust limitation that a piece of code can't take enough ownership of them to edit them safely.
On a related note, I'm seeing a correlation between "level of hoopla" and "level of attention/maintenance." While it's hard to distinguish that correlation from "level of use," the fact that CAN is so far down the list suggests to me that hoopla matters; it's everywhere but nobody talks about it. If a kernel bug takes down someone's datacenter, boy are we gonna hear about it. But if a kernel bug makes a DeviceNet widget freak out in a factory somewhere? Probably not going to make the front page of HN, let alone CNN.
A CAN deployment with 10,000 machines total and relatively fixed applications is either going to trigger the bug right off the bat and then work around it, or trigger it so rarely that it won't be recognized as a kernel issue.
General purpose systems running millions and millions of units with different workloads are an evolutionary breeding ground for finding bugs and exploits.
My Pixel 8 runs a stable minor release of kernel 6.1, which was released more than 4 years ago. Yes, fixes get backported to it, but the new features in 6.2->6.19 stay unused on that hardware. All the major distros suffer from the same problem: most people are not running them in production.
Most hyperscalers are running old kernel versions onto which they backport fixes. If you go to Linux conferences you hear folks from big companies mentioning 4.xx and even 3.xx kernels, in 2025.
“There’s a crash while using this config file.” Something more complex than that, but ultimately a crash of some kind.
Years later, like 20 years later, the bug was closed. You see, they rewrote the config parser in Rust, and now this is fixed.
That’s cool but it’s not the part I remember. The part I always think about is, imagine responding to the bug right after it was opened with “sorry, we need to go off and write our own programming language before this bug is fixed. Don’t worry, we’ll be back, it’s just gonna take some time.”
Nobody would believe you. But yet, it’s what happened.
The anti-Firefox mob really is striving to take shots at it.
The point of the article isn't a criticism of Linux, but an analysis that leads to more productive code review.