It might not be super unique, but it is a truly from-scratch "common" operating system built in public, which for me at least makes it a reference OS: one whose entire codebase a single person could fully understand, if they wanted to understand a whole, complete-looking OS.
And a few dozen others as well.
As a retro-computing enthusiast/zealot, I personally find it quite rewarding to revisit the ‘high concept execution environments’ of different computing eras. I have a nice, moderately sized retro-computing collection, 40 machines or so, and I recently got my SGI systems re-installed and set up for playing. Revisiting Irix after decades away from it is a real blast.
I'm thinking of systems designed on the assumption that there are tens, hundreds, or even thousands of processors, with design assumptions made at every level to leverage that availability.
Ultimately, the OS has to be designed for the hardware/architecture it's actually going to run on, and not strictly just a concept like "lots of CPUs". How the hardware does interprocess communication, cache and memory coherency, interrupt routing, etc... is ultimately going to be the limiting factor, not the theoretical design of the OS. Most of the major OSs already do a really good job of utilizing the available hardware for most typical workloads, and can be tuned pretty well for custom workloads.
I added support for up to 254 CPUs in the kernel I work on, but we haven't taken advantage of NUMA yet; we don't really need to, because the performance hit for our workloads is negligible. But the Linuxes and BSDs do, and can already get as much performance out of the system as the hardware will allow.
Modern OSs are already designed with parallelism and concurrency in mind, and with the move towards making as many of the subsystems as possible lockless, I'm not sure there's much to be gained by redesigning everything from the ground up. It would probably look a lot like it does now.
I'm re-implementing it as a metacircular adaptive compiler and VM for a production operating system. We are rewriting the STEPS research software and the Frank code [2] in a million-core environment [3]. On the M4 processor we try to use all types of cores: CPU, GPU, neural engine, video hardware, etc.
We just applied for YC funding.
[1] https://github.com/smarr/RoarVM
But mainstream servers manage hundreds of processor cores these days. The Epyc 9965 has 192 cores, and you can put it in an off-the-shelf dual-socket board for 384 cores total (and two SMT threads per core, if you want to count that way). Thousands of cores would need exotic hardware: even a quad-socket Epyc wouldn't quite get you there, and afaik nobody makes those; an 8-socket Epyc would be madness.
There exist many OSes (and UI designs) based on non-mainstream concepts. Many were abandoned or forgotten: at design time the suitable hardware didn't exist, there was no software to take advantage of it, etc.
A 'simple' retry at achieving such an alternate vision could be very successful today, due to a changed environment, audience, or available hardware.
What about more specialized devices? e-readers, wifi-routers, smartwatches (hey, hello open sourced PebbleOS), all sorts of RTOS based things, etc? Isn't anything interesting happening there?
https://ares-os.org/docs/helios/
The cost of not having proper sandboxing is hard to overstate. Think of all the effort that has gone into Linux containers, or into VMs just to run another Linux kernel, all because sandboxing was an afterthought.
Then there's the stagnation in filesystems and networking, which can be at least partially attributed to the development frictions associated with a monolithic kernel. Organizational politics is interfering with including a filesystem in the Linux kernel right now.
Helios was written from scratch.
Helios hasn't done anything novel in terms of operating system design. It's taken an excellent design, reimplemented it with a more modern language, and built better tooling around it. I tend to point people towards the Helios project instead of seL4 because I think the tooling (especially around drivers) is so much better that it's not even a close comparison for productivity. It's where the open-source OS community should be concentrating efforts.
MonkeyClub•2h ago
I read through its goals, and it seems that it is against current ideas and metaphors, but without actually suggesting any alternatives.
Perhaps an OS for the AI era, where the user expresses an intent and the AI figures out its meaning and carries it out?
[1] https://www.mercuryos.com/
[2] https://news.ycombinator.com/item?id=35777804 (May 1, 2023, 161 comments)
WillAdams•2h ago
Oberon looks/feels strikingly different (and is _tiny_), can be easily tried out via quite low-level emulation, and just wants some drivers to run fully natively on, say, a Raspberry Pi.