> URIs as namespace paths allowing access to system resources both locally and on the network without mounting or unmounting anything
This is such an attractive idea, and I'm gonna give it a try just because I want something with this idea to succeed. Seems the project has many other great ideas too, like the modular kernel where implementations can be switched out. Gonna be interesting to see where it goes! Good luck author/team :)
Edit: This part scares me a bit though: "Graphics Stack: compositing in-kernel". I'm not sure if it scares me only because I don't understand those parts deeply enough. Isn't that potentially a huge security hole? Maybe the capability-based security model keeps it from being a big issue; again, I'm not sure, because I don't think I understand the design deeply enough as a whole.
But seriously, a lot of the design decisions Linux and other Unix-like systems make are horrible, poorly bolted onto a design from the 70s that has aged very badly. One of my goals with this project is to highlight that by showing how a system with a more modern design, derived from the metric ton of OS research that has been done since the 70s, can be far better, and to show just how poorly designed and put together the million and one Unix clones actually are, no matter how much lipstick Unix diehards try to put on that pig.
(You don't have to recompile the kernel if you put all the device drivers in it, just keep the object files around and relink it.)
That's also the only approach on systems where people advocate statically linking everything, which is yet another reason dynamic loading became a thing.
Most of these systems came with utilities to partially automate the process, usually driven by some kind of config file; NetWare 2.x even had TUI menuing apps (ELSGEN, NETGEN) to assist with it.
The sysadmin scripts would even relink just to change the IP address of the NIC! (I no longer remember the details, but I think I eventually dug under the hood and figured out how you could edit a couple of files and merely reboot without actually relinking a new kernel. If you only followed the normal directions in the manual, though, you would use scoadmin, and it would relink and reboot.) And this is not because SCO sux. Sure they did, but that was actually more or less normal and not part of why they sucked.
Change anything about which drives are connected to which SCSI hosts on which SCSI IDs? Fuggeddabouddit. Not only relink and reboot, but also pray, and have a bootable floppy and a cheat sheet of boot: parameters ready.
Incremental compilation means you don't have to recompile everything: just compile the new driver as a library, relink the kernel, and you're done. Keep the prior N working kernels around in case the new one doesn't work.
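A rough sketch of what that workflow looks like in commands (all file names here are hypothetical; real systems wrapped this in the config-driven utilities mentioned above):

```shell
# Compile only the new driver; the rest of the kernel's object
# files already exist from the previous build.
cc -c newdriver.c -o newdriver.o

# Relink the kernel from the kept object files plus the new driver.
ld -o kernel.new kernel_core.o sched.o mm.o newdriver.o

# Rotate in the new kernel, keeping the last known-good one as a fallback.
cp kernel kernel.bak
mv kernel.new kernel
```

The key point is that no source outside the new driver is ever recompiled; only the final link step is redone.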
The intro page is currently useless.
The plan is to hand out panes, which are just memory buffers to which applications write pixel data as they would on a framebuffer; when the kernel goes to actually refresh the display, it composites any visible panes onto the back buffer and then swaps buffers. There is nothing unsafe about that, any more than any other use of shared memory regions between the kernel and userspace, and those are quite prolific in existing popular OSes.
If anything the Unix display server nonsense is overly convoluted and far worse security wise.
From there, each application can draw its own GUI and respond to events that happen in its panes, like a mouse-button-down event while the cursor is at some coordinates, using event capabilities. What any event or the contents of a pane mean to the application doesn't matter to the OS; the application has full control over all of its resources and its execution environment, with the exception of not being allowed to do anything that could harm any other part of the system outside its own process abstraction. That's my rationale for why the display system and input events should work that way. Plus, it helps latency to keep all of that in the kernel, especially since we're doing all the rendering on the CPU and are thus bottlenecked by the CPU's memory bus, which has far lower throughput than that of a discrete GPU. But that's the way it has to be, since there are basically no GPUs out there with full publicly available hardware documentation as far as I know, and believe me, I've looked far and wide and asked around. Eventually I'll want to port Mesa, because redoing all the work to develop something that complex and huge just isn't pragmatic.
This and other dirt is on any YouTube video about the history/demise of alternative computing platforms/OSes.
There are many great ideas in operating systems, programming languages, and other systems that have been developed in the past 30 years, but these ideas need to work with existing infrastructure due to costs, network effects, and other important factors.
What is interesting is how some of these features do get picked up by the mainstream computing ecosystem. Rust is one of the biggest breakthroughs in systems programming in decades, bringing together research in linear types and memory safety in a form that has resonated with a lot of systems programmers who tend to resist typical languages from the PL community. Some ideas from Plan 9, such as 9P, have made their way into contemporary systems. Features that were once the domain of Lisp, such as anonymous functions, have made their way into contemporary programming languages.
I think it would be cool if there were some book or blog that taught “alternate universe computing”: the ideas of research systems during the past few decades that didn’t become dominant but have very important lessons that people working on today’s systems can apply. A lot of what I know about research systems comes from graduate school, working in research environments, and reading sites like Hacker News. It would be cool if this information were more widely disseminated.
Your complaint is more pointless than what you're complaining about.
- Multi-user and server-oriented permissions system.
- Incompatible ABIs
- File-based everything; leads to scattered state that gets messy over time.
- Package managers and compiling-from-source instead of distributing runnable applications directly.
- Dependence on CLI, and steep learning curve.
If you're OK with those, cool! I think we should have more options. Docker tries to partially address this, right?
> Dependence on CLI, and steep learning curve.
I think this is partially eased by LLMs.
Docker is a good way of turning a 2kb shell script into a 400mb container. It's not a solution.
Flatpak would be a better example.
ReactOS, if you need something to replace Windows.
Implementing support for Docker on these operating systems could give them the life you are looking for.
Did you know the Go language supports Plan 9? You can create a binary from any system using GOOS=plan9, with amd64 and 386 supported. You might need to disable CGO and use libraries that don't have operating-system specifics, though. You can even bootstrap Go from it, provided you have the SDK.
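For example, cross-compiling for Plan 9 looks like this (the program path is hypothetical; CGO_ENABLED=0 disables cgo, which has no Plan 9 support):

```shell
# 64-bit Plan 9 binary, built from any host OS:
CGO_ENABLED=0 GOOS=plan9 GOARCH=amd64 go build -o hello ./cmd/hello

# 32-bit variant (Go calls the architecture "386", not "i386"):
CGO_ENABLED=0 GOOS=plan9 GOARCH=386 go build -o hello32 ./cmd/hello
```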
Incidentally 9Front is a modern fork of Plan9.
Maybe an LLM agent posting crap at random? lol
This could be done at every level: the operating system, the browser, websites...
So if you don't care about the website knowing it's the same person, then instead of having multiple user accounts on HN, Reddit, etc., you could log into a single account, then choose from a set of different usernames, each with its own post history, karma, and so on.
If you want to have different usernames on each website, switch the browser persona.
At the OS level, people could have different "decoy" personas if they're at risk of state/partner spying or wrench-based decryption, and so on.
What's that parenthetical mean?
Specifically, "Users may link this kernel with closed-source binary drivers, including static libraries, for personal, internal, or evaluation use without being required to disclose the source code of the proprietary driver.".
I wish there were a social stigma in Open Source/Free Software against doing anything other than just picking a bog-standard license.
I mean, we have a social stigma even for OS developers about rolling your own crypto primitives. Even though it's the same very general domain, we know from experience that someone who isn't an active, experienced cryptographer would have close to a zero percent chance of getting it right.
If that's true, then it's even less likely that a programmer is going to make legally competent (or even legally relevant) decisions when writing their own open source compatible license, or modifying an existing license.
I guess technically the "clarification" of a bog standard license is outside of my critique. Even so, their clarification is shoe-horned right there in a parenthetical next to the "License" heading, making me itchy... :)
Many people don't know that, hence the clarification note.
Also to be clear I am not a lawyer and nothing I say constitutes any form of legal advice.
SerenityOS is written in C++.
I'd love some kind of meta-language that is easy to read and write, easy to maintain, but fast. C, C++, Rust, etc. are not that easy to read, write, and maintain.
Easy to understand and maintain -> the computer does more work for you to "figure things out", in a way that simply can't be optimal under all conditions.
TLDR: what you're asking for isn't really possible without some form of AGI
By that same definition, Rust is pretty easy to maintain. I won't say it's easy to write, though.
More options (and thus more competition) are very healthy.