The entire hurd system is a literal metaphor for how waiting till you're perfect means you'll never be good enough.
At the risk of getting downvoted, I think Hurd is cooked at this point. It certainly has some solid ideas that could live on in a modern system. They should try rewriting it in Rust (or Zig) and at least have a chance to capture mindshare with new engineers just dabbling in systems programming.
Also I can't remember any more recent GNU projects that were successful
TeXmacs was released in the late '90s (I think) and it's pretty neat, with at least some user base; GNU Parallel was released in the mid-2000s, and I know a number of people who use that (including myself).
> including Linux's entire coreutils
Ah yes, I think I've heard of this GNU/Linux thing. Let's see if they come out ahead of the Berkeley Software Distribution. /s
It's an antipattern to chase whatever language is being hyped the most at the moment. And it's probably bad from a community POV to deliberately attract developers who are chasing hype.
That pretty much describes the current Hurd dev community, and it's dying. I wouldn't advocate a full RIIR for most things, but I think it's a solid Hail Mary that might make Hurd relevant. The alternative is that it's going to be dead in a few years when the contributors all age out to spend time with their grandkids.
I don't think that swath is as huge as you think it is in 2025.
We were saying the same stuff during the Golang heydays ~8-9 years ago, and the C experts were already pretty fucking MIA.
The Linux and systemd projects are both suffering from a lack of new blood interested in writing plain old C, and the old guard is aging out. Linux is embracing Rust, which should help. I imagine systemd will do the same thing once a Rust toolchain is required to build the average distro kernel.
Nah. I'd much rather use a newer language that's explicitly designed for writing the same sorts of things that C is but with a teensy portion of the footguns.
I'm not saying C is bad. I am saying that if the Linux kernel devs still write buggy code sometimes — not because of logic errors or other design-level mistakes, but because of some goofy memory issue or accidentally wandering off into the wilderness of UB — then I guarantee I'm going to screw it up.
If it were in Rust or Zig or whatever, I'd feel like I had at least a fighting chance of making a tweak that didn't immediately format my hard drive and kick my cat.
> It's an antipattern to chase whatever language is being hyped the most at the moment.
Hype? Come on, Rust's 1.0 release was already over a decade ago. At this point it's pretty boring. How many more years will it take before people start taking it seriously and finally accept that those who prefer Rust over C do so because it's a much better language than C and not just because it's hyped?
> GNU is full of brilliant people who can write great code, but there are a few issues that I don't see fixing: Rampant disagreement and individuals who like to work solo. This can be good sometimes, but for a project with that scope it just isn't possible. The group is also aging and isn't getting new blood. This can be good because people have more free time, but it also traps us in old familiar/comfortable patterns that make onboarding younger contributors even more difficult than it already is. The philosophy is also quite rigid. For good reasons I think as more "permissive" licenses have been used to abuse users extensively, but the limitations do come up quite a bit, mainly with adoption. I think too many people are just scarred still from an earlier world where proprietary was often the only real alternative, and change is hard.
This is very sad, because the GNU project pioneered a way of software design that's very different from anything we see on proprietary platforms, or even common Linux/BSD applications for that matter. This is best exemplified by Emacs - hackable to the core, with more than enough documentation and context help baked in to help you do just that. You can see the same philosophy at play in the Guix OS, the Shepherd (init), GNU Poke (semantics-aware binary editor) and many many other GNU software. It can be used easily by anyone, but it's absolute heaven for those who like to poke around (not a pun) the system. It nudges normal users towards becoming system hackers. The difference between GNU software and corporate-sponsored components (like systemd, avahi, gnome, policykit, PAM, Chrome, Firefox, etc) is stark. I have heard similar things about NetBSD and OpenBSD to a lesser extent, but I've yet to give them a good try. The only other alternative I've seen is the suckless suite of software, where the configuration is done in the source code itself before it's compiled. But that can be slightly daunting even for power users. With the loss of that knowledge and philosophy, an entire generation will grow up without ever knowing a different way of computing that treats you as something more than just a consumer to be squeezed for every last penny, and without ever knowing the true power and potential of general purpose computing.
There's so, so much old code. There's very few tests because it was all written hacker style. Because of this "hackable to the core" philosophy these codebases are piles of unmaintainable tech debt. Someone had an itch to scratch, wrote code to scratch it, then submitted a diff for merging. But nobody took a step back and said: "Wait! We need to rearchitect <this> to accommodate this new functionality." This is understandable because GNU projects are volunteer projects and it's a lot more fun to submit a diff than do a refactor to get a small change in.
Old Unix init systems, much like Emacs, were just piles and piles of kludges that kept things going. The insight that Poettering had was that if you reduce the hacking surface to mere configuration, then you only need to maintain the configuration layer and not a huge kaleidoscope of functionality.
The modern Unix layer of having opinionated, config-driven tools talk over pipes won out in this environment because you only really need to maintain the contract at the pipes (even if you are just outputting terminal escape codes.) Beyond that tools can be composed, but the surface area of each tool is small. This is why vim is so much more commonly used and easily maintained than emacs.
(This is distinct from the "Unix philosophy" stuff that was popular before where folks wrote small, composable tools instead of larger, monolithic, config driven ones.)
A more modern approach of "hackable to the core" I feel would be to standardize around a sturdy, opinionated, well maintained core and then to allow extensibility around this core. Helix is exploring this (using Scheme as its extensibility layer.) But I think Helix and other projects (maybe Lem?) need to prove this approach out before it can come close to challenging the modern configuration-driven ecosystem of tools.
(I say this as a user of emacs and Zed by the way and someone who recently started hacking on Lem, so this comes from some experience.)
Still, a part of me wishes we lived in the alternative universe where Hurd had taken over the world instead of Linux. I don't know much about kernel design so I'm speaking out of my ass here, but I've always thought that the microkernel design was more elegant than the monolithic thing we ended up with. I don't know that the alternate universe would be "better", and maybe realistically a design like Hurd would never be able to take over the world like Linux, but it always seemed cooler to me.
I honestly didn't really realize that they were still working on Hurd. Does anyone here use it for anything?
I booted it on real hardware sometime in the early 2000s, and it worked but was very anticlimactic.
I do know that the Mach microkernel they based it on (also the basis for Apple's XNU kernel) is considered dated. Later microkernels are supposed to have better performance.
I played with RedoxOS a bit in a virtual machine a few years ago [1], and it seemed cool, so maybe that can be the logical successor to something like Hurd.
I think there's some spark there.
A problem with RedoxOS is that it is not GPLed: contributors have no assurance that they and others will be able to use software built with their contributions.
Microsoft, Apple, Google and Facebook all have plenty of money to pay engineers; they don’t need my contributions for free.
Unfortunately, the other UNIX clones would rather keep going with "how things were always done around here".
The thing with elegant systems is they usually don't succeed if the alternative is something pragmatic that has been battle tested.
I tried installing FreeBSD on a laptop years ago, which isn't really an "obscure" operating system or anything, but even that had a lot of compatibility problems with regards to drivers for wifi and GPUs, and even that would have a considerable head-start over something like Hurd if it were to try and take on the desktop world.
Had they not existed, or BSD been obviously free and clear, Linux might have been a footnote.
https://bitmason.blogspot.com/2020/04/podcast-if-linux-didnt...
I don't think even 386BSD existed when Linus started Linux.
And then BSD could have won against Hurd anyway. Especially when corps like the permissive license and are afraid of the FSF.
Compare Sony PlayStation Network[1]
Monthly active users on PlayStation Network reached 123 million as of June 30, 2025.
with Valve's Steam[2]: Valve reported 132 million monthly active players (that is, they used Steam within the month, as opposed to being logged in at exactly the same time) at the end of 2021...
This isn't scientific, but if the same ratio of active monthly to peak concurrent users held through to today, back of the napkin math would put Steam's current active monthly users at 221.5 million
With an optimistic estimate of current monthly active users, if gaming on Linux grew overnight from 2.5% to 50% of total players on Steam, it would still be slightly behind the number of people currently gaming on the FreeBSD-based PlayStation. FreeBSD code is also in iOS and macOS via Darwin, the Nintendo Switch, and the Microsoft Windows networking stack.
Evidently BSD is a go-to choice for consumers today, but many don't realize it, and those of us who do often do not think about it. That's because the BSD license and the companies that use it result in products that bear no resemblance to the BSD we know.
A similar situation occurred with Minix - to the extent that its creator, Andrew Tanenbaum, had no idea its install base was arguably bigger than Linux's until 2017. Intel had put Minix into the Management Engine on their professional-grade CPUs for years. The BSD license allowed Intel to put it everywhere without the knowledge of the wider Minix community.
In some key ways, BSD is already taking the Linux spot, however, I'd argue that BSD can't truly take the Linux spot because the GPL license makes the Linux spot what it is. I honestly can't say if this makes Linux better or worse off. The most advanced technology of our time is largely not choosing copyleft licenses, and for those who did choose it, they've taken steps to distance themselves from it[3][4][5][6].
Given all this, I think Hurd has more of a chance to be the spiritual successor to Linux (if it disappeared). The only caveat is there is zero chance for a big-tech-dominated $200M "Hurd Foundation" to arise due to Hurd's affiliation with the Free Software Foundation. Not much of the Linux Foundation's money actually goes to Linux, so it may not matter in the grand scheme of things[7].
[0] https://wololo.net/2023/03/22/new-freebsd-vulnerabilities-co...
[1] https://www.psu.com/news/psn-hits-123-million-monthly-active...
[2] https://www.pcgamer.com/gaming-industry/steam-just-cracked-4...
[3] https://arstechnica.com/gadgets/2017/05/googles-fuchsia-smar...
[4] https://www.androidauthority.com/google-android-development-...
[5] https://www.theregister.com/2023/06/23/red_hat_centos_move/
[6] https://lwn.net/Articles/655519/
[7] https://blog.desdelinux.net/en/The-annual-report-of-the-Linu...
Is this not even more true of Linux in the billions-strong Android?
In 2021, it appeared that Google was planning a pivot to their own BSD/MIT-licensed OS named Fuchsia.
https://arstechnica.com/gadgets/2017/05/googles-fuchsia-smar...
This pivot seemed to end around the same time tech layoffs were occurring in 2024.
https://9to5google.com/2024/01/15/google-is-no-longer-bringi...
Since then, Google has chosen to limit the amount of open source development done for the Android OS.
https://www.androidauthority.com/google-android-development-...
Keeping Android kernel development internal creates greater risk of binary blobs polluting the source code. Binary blobs might be a practical solution to bring products to market, but they are also a mechanism to circumvent the GPL. I doubt Google will take this problem seriously, but other Linux distributions have.
https://lwn.net/Articles/655519/
The move by Google mirrors the choice by Red Hat to keep RHEL source code private.
https://www.theregister.com/2023/06/23/red_hat_centos_move/
The common trend is product managers for these companies view the GPL as a bug instead of a feature.
Anyway, it is important to keep in mind that the useful “size” metric of a community led open source project is the number of developer-hours being contributed to it, not the number of users. It is a fun bit of trivia that these devices are everywhere, and maybe good news for open source fans’ career prospects. But that’s all.
It is commonly believed that Darwin descended from FreeBSD, but there is not a lot of FreeBSD in there: a pretty ancient snapshot of the FreeBSD userland, another snapshot of the TCP/IP stack that has now completely diverged from the current FreeBSD TCP/IP stack (or, more correctly, the other way round), plus a few borrowed kernel-level APIs (kqueue is the most notable one).
VMM, VFS, driver layers, file systems etc etc do not share the same lineage.
Off topic question, but wasn't that a violation of the BSD license? It does require a copyright notice.
Google is sure trying with Fuchsia: https://en.wikipedia.org/wiki/Fuchsia_(operating_system)
Aren't those billions of Linux/Android instances typically running on top of an seL4 microkernel?
[1] https://web.archive.org/web/20120211210405/http://www.ok-lab...
I suspect that there is a place for elegant systems - they just have to be pragmatic in how they launch.
Start small, do a limited function, or replace an existing limited function, and grow from there.
Thing is, linux is a kernel, but its driver support and hooks into the rest of userspace makes it more than just a kernel. Harder to replace with something more elegant/better.
FTFY
>I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones.
(It often got installed on top of “real” Unix because it was a damn good toolchain)
The standard tools were always sort of unergonomic on all of AIX, Sun/Solaris, DEC/Alpha, SCO, and *BSD.
I don't know but it seems people (or at least old geezers) install GNU on top of Macs these days.
Yes, here it is: “Software Usability II” by Tom Davis, the “Irix bloat memo” https://www.seriss.com/people/erco/sgi-irix-bloat-document.t... . Mind you, that bloat would probably look very modest nowadays.
People installed them on AIX/Solaris etc because the BSD/SysV tools they came with were basically abandoned. The GNU tools had a lot more useful features, not specifically more performant (although they probably were)
But they really did have tons of options and once you were used to them, you really wanted them.
But remember that GNU grew up in the era of multi-user systems; and Linux was the forefront of “personal computing Unix” where the user had root.
Me, I do this! I like being able to tack flags on after positional args if I remember them after typing a command. I like some of the convenience flags added to the GNU versions of coreutils, grep, and findutils. Even the `parallel` implementation I've used is GNU Parallel. And I've never really learned mawk; to the extent I know awk at all, it's gawk.
(I don't like the common convention of prefixing them with `g` on macOS, either. I install them with Nix and just preempt the system ones on my PATH.)
It was too new. Linus' mail was in August; GNU only announced its new OS in May:
https://www.gnu.org/software/hurd/history/hurd-announce
I think the GNU OS didn't even have a name yet.
The second announcement was in 1993 and by then it had a name:
https://www.gnu.org/software/hurd/history/hurd-announce2
So, no, I think that, as generally accepted, he meant the entire GNU OS.
That's what I meant too - Torvalds was talking about the OS, not just the kernel. I don't understand the difference or the mistake that you're trying to highlight.
While the Switch was broken early, this was due to NVIDIA's buggy boot code. The operating system itself... you could literally pwn WebKit or the Bluetooth driver and get absolutely nowhere. SciresM famously reimplemented the kernel in an open-source fashion (Mesosphere) along with the secure monitor code (Exosphere), and has publicly stated that, in his eyes, it has zero possible security bugs. That was in 2020, and there have not been any reports of kernel security bugs since.
Another example of microkernel-based systems you do interact with is car infotainment systems, where QNX has apparently seen a lot of use – though I think these days it’s being displaced by Linux and Android Automotive? I don’t actually know much about that industry.
Like, I've read about how you can mount lots of things like filesystems and that sounds kind of neat but that also seemed like it might obscure latency and make things ridiculously slow, though it's entirely likely that I am misunderstanding how things work.
It could then be a real competitor with Linux in the server market.
I really should properly play with it, but it always seemed to me that it has the potential to add milliseconds of cost to each operation and that could be very slow.
If you mean that microkernels ping-ponging between kernel and user space can impact perf: Maybe? I'd really want to see benchmarks.
Yup. I tried to bounce the idea off Mark Shuttleworth when I last interviewed him, but he wasn't interested.
At least that's the impression that I got from their interview process.
Just to clarify, though: I wasn't interviewing for a job, I was interviewing the man for El Reg.
https://www.theregister.com/2024/11/11/mark_shuttleworth_ubu...
It took an afternoon to figure out how, and was basically "cat".
Latency is never a problem in my experience unless you're mounting in resources from a different continent, where ssh is slow anyway. Even in those cases, the UX is closer to mosh, since rio remains local.
In general, Plan 9 is fast. Compiling all of userspace and the kernel takes just a couple of minutes on my 11th gen Framework. Grepping a large repo also feels closer to ripgrep than GNU grep.
One well-known user runs his home network and automation system all as a 9grid. He even frequently shares details on his YouTube channel adventuresin9[0]. It's binge-worthy IMHO.
It's hard to convey how cohesive the whole system is. It's ridiculous how many things are reduced to trivial shell scripts, and the source code is so darn grokkable, greppable, and small that treating it as documentation is actually sensible. Granted, this is almost necessary to become proficient in Plan 9, since there are so few network effects producing StackOverflow answers, blog tutorials, etc.
Anyway, I hope you do end up jumping in!
[0]:https://www.youtube.com/channel/UC7qFfPYl0t8Cq7auyblZqxA
I have an old piece of shit laptop that’s not being used for anything, might be a fun excuse to try it out.
For a bit more nitty-gritty, the 9front FQA[0] is worth running through.
Damnit, I guess I know what I am doing this weekend.
AI is getting good enough to help with the verification process and having a hardened kernel would guard a bit better than the current strategy of using containers everywhere.
https://www.gnu.org/software/hurd/history/port_to_another_mi...
Neal Walfield was working on a new microkernel as well: https://www.gnu.org/software/hurd/microkernel/viengoos.html
It definitely would not be a trivial amount of work.
Honestly, I think the downvotes were for mentioning AI may have a role in validation. LLMs are increasingly being explored in the theorem prover space, but it's still controversial to talk of them approvingly on some HN threads.
It's an interesting idea to think that LLMs could be used to not only explain the code but test the potentially tricky corner cases.
I'm pretty sure LLMs are here to stay, and we're just going to have to learn the best ways to work with them.
There's Genode[0]. Relative to the hurd, its design is much more advanced and it supports a range of modern microkernels including seL4.
Many stories around using Genode in the Genodians[1] blog.
I think that it's important to remember that Debian Hurd is not some massive project with thousands of anonymous people behind it. Like Tribblix and Peter Tribble, Debian Hurd's driving force is someone whom you can name: Samuel Thibault.
And although there are a few others who appear on the debian-hurd mailing list from time to time, it is amply clear that this is one of those (many) projects with a core group of very few dedicated people, with very limited resources for development and testing. There is no "many hands make light work" here.
This isn't Debian as you may know it for other kernels. (-:
* https://lists.debian.org/debian-hurd/2025/07/maillist.html
So, in some ways, if microkernels interest you, Debian Hurd is a place to contribute where the ground has yet to be completely trodden.
Years ago I was met with derisive laughter from everyone when I said Haiku would hit 1.0 before Hurd. I also said that Haiku would beat linux to the opensource desktop widely used by the average person who is not concerned with opensource, but I think that was mostly stirring the pot because of the reaction to my previous statement. All these years later and Haiku hitting 1.0 seems inevitable and even the idea of it becoming a widely adopted opensource OS does not seem that far fetched. I would like to see Hurd hit 1.0, but I am fairly skeptical at this point.
I suppose ChromeOS/linux beat Haiku to the punch for the opensource desktop, but I think I will stick to my guns on this one and play semantics, many in the linux/oss view ChromeOS as linux/oss in name only. A cheat but I think Haiku has earned it.
Edit: Forgot that Chromium was opensource but ChromeOS is not, so I guess I had no need to play semantics.
It does feel a lot more user ready than a lot of alternatives. Although I did find it funny that on their last release a big milestone is that it can now compile code a little faster than half the speed of Linux. So performance is still lacking but gaining. Considering their team size compared with Linux, that is a big achievement.
And focusing on the users is a very smart move. It is better to meet average expectations very well than to execute a pure ideological design poorly. Haiku is focusing it just right, I feel.
A lot of software fails to build on Hurd because it makes (often dangerously) false assumptions that the software really needs to think about properly. `PATH_MAX` is the most visible one, but others exist as well.
(By contrast, I have found that software that fails on one of the BSDs is often failing because the particular OS completely lacks some essential feature, or at least lacks a stable API/ABI thereto.)
I won’t argue the semantics, but I will be a pedant :-) “ChromiumOS” absolutely exists and is FOSS-licensed. It’s a mess to build — basically a bunch of ebuild overlays for Gentoo’s build system and a boatload of custom tooling/scripts which produce images for a given system configuration, if I remember correctly. I don’t know much about it honestly, but it is open source! At least in the same way Chromium is.
And now it's 64bit!?
HarmonyOS NEXT is the world's most widely used microkernel system, reportedly used on approximately 800 million systems.
each week there are (in C, in Rust, in JS...)
What is their hardware support?
at best they can run in a virtual machine
End of debate.
And then: Doing research in operating systems serves a lot of purposes. For some it's just fun. For some it's experimenting, which may lead to ideas that get incorporated into other OSs later, where it is a lot simpler to try things in a small kernel. For some it is an attempt to take over the world; few will, but maybe one might, at least for a small part of the world.
Also, you do not have to support every system.
For example if they support these cheap n150 mini pcs, I am more than fine. Something common.
Macos runs fine because it works in a specific space.
Stallman et al. have promised since the late 80s that this would be the future, and at various stages promised that it would be ready for production work within the next year (or two). Like any promises made by Elon Musk, everyone in the tech industry has long since learned to ignore them. Maybe some day it'll be done, but I'm highly skeptical it has any chance of building up the momentum it needs.
> This 64b support is completely using userland disk drivers from NetBSD thanks to the Rump layer.
That's exciting news! I know it's a bit much to get excited about Hurd. But this is a big milestone. And though I'm not holding my breath, I really want to see how far they can take it. I'm not ready to write them off yet.
I didn't notice that until I saw your comment. Thanks!
My initial thought was that porting code from Linux would be a more direct path, especially since a major goal for GNU Hurd is compatibility with the Debian archive.
* the NetBSD "rump" unikernel is out there:
https://thamizhelango.medium.com/netbsds-rump-kernel-framewo...
* Secondly, this exists because NetBSD is much much smaller and simpler than Linux, which has undergone decades of enterprise-focussed optimisation.
https://fosdem.org/2025/schedule/event/fosdem-2025-5490-mach...
There's many more options[0] these days.
Wikipedia has a pretty good rundown [3] but I recommend booting up a VM image. It's actually quite beautiful. I love the purity of GNOME on a GNU/Hurd system with GUIX and Shepherd where the whole thing is configured in guile[4]. There's just something very aesthetic about the combination. I wish I could use it as my daily driver.
2. https://www.gnu.org/software/shepherd/manual/shepherd.html
https://guix.gnu.org/en/blog/2020/childhurds-and-substitutes...
Obviously I'm crazy, because I use Emacs with EXWM as my shell, but IMHO it's perfectly capable of being a "daily driver" already.
Two things that hold me back (aside from the obvious friction of switching):
1. nixpkgs is huge. If what I need isn't in 25.05, it's in unstable
2. I've been programming in Common Lisp for so long that I can't help but write buggy code any time I try writing in a lisp-1. It seems like it should be such a minor thing, but I invariably inadvertently shadow a function I'm using with some other variable.
[edit]
I forgot my third issue, which is all of my preferred setup options seem to be further off beaten path in Guix than NixOS (e.g. plasma, zfs)
I want to know whether I will be lucid enough to continue contributing to a project when I reach that age.
... also, they're still working on Hurd?
* https://lists.debian.org/debian-hurd/2025/08/msg00038.html