For all the reasons stated in the link from the README [1] and agreed by the author, this project should not be followed if one wants to gain an understanding of the design and implementation of operating systems for modern systems. Following it will likely lead only to another abandoned “hello world plus shell” that runs only in emulation of decades old hardware.
My advice is to get the datasheets and programmers' manuals (which are largely free) and use those to find ways to implement your own ideas.
fwiw, xv6, the pedagogical OS, has migrated to it.
Disclaimer: I'm on the teaching team
However, it's arguably too idealized and predetermined. I think you could get all the way through building the computer in https://nandgame.com/ without really learning anything beyond basic logic design as puzzle solving, but computer organization, as they call it in EE classes, is more about designing the puzzles than solving them. Even there, most of what you learn is sort of wrong.
I haven't worked through the software part, but it looks like it suffers from the same kind of problems. IIRC there's nothing about virtual memory, filesystems, race conditions, deadly embraces, interprocess communication, scheduling, or security.
It's great but it's maybe kind of an appetizer.
If I wanted to learn just so I have a concept of what an OS does, what would you recommend?
I'm not trying to write operating systems per se. I'm trying to become a better developer by understanding operating systems.
I don't know what to recommend there. I have no relevant experience, because all my RISC-V hardware leaves unimplemented the privileged ISA, which is the part that RISC-V makes so much simpler. The unprivileged ISA is okay, but it's nothing to write home about, unless you want to implement a CPU instead of an OS.
Start with an RTOS on a microcontroller. You'll see what the difference is between a program or a library and a system that does context switching and multitasking. That's the critical jump and a very short one. Easy diversion to timers and interrupts and communication, serial ports, buses and buffers and real-time constraints. Plus, the real-world applications (and even job opportunities) are endless.
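To make that jump concrete, here's a minimal sketch using FreeRTOS (any RTOS will do; the task bodies and timings are placeholder assumptions). Two tasks share one CPU, and the kernel context-switches between them whenever one blocks:

    #include "FreeRTOS.h"
    #include "task.h"

    static void blink_task(void *params) {
        for (;;) {
            // toggle_led() would go here; placeholder for your board's GPIO call
            vTaskDelay(pdMS_TO_TICKS(500));  // block; the scheduler runs other tasks
        }
    }

    static void sensor_task(void *params) {
        for (;;) {
            // read_sensor() would go here; placeholder for real I/O
            vTaskDelay(pdMS_TO_TICKS(100));
        }
    }

    int main(void) {
        xTaskCreate(blink_task,  "blink",  configMINIMAL_STACK_SIZE, NULL, 1, NULL);
        xTaskCreate(sensor_task, "sensor", configMINIMAL_STACK_SIZE, NULL, 2, NULL);
        vTaskStartScheduler();  // hands control to the kernel; never returns on success
        for (;;) { }
    }

Once you see how vTaskDelay() hands the CPU to another task, timers, interrupts, and queues all fall into place.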
I am an occasional uni TA who teaches OS, and I use littleosbook as the main reference for my own project guidebook.
It's a decent warm-up project for undergraduates, giving them first-hand experience programming in a freestanding x86 32-bit environment.
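(For flavor: "freestanding" means no libc and no OS underneath you. A sketch of the canonical first step, assuming the tutorial's 32-bit protected-mode setup with the VGA text buffer identity-mapped:)

    #include <stdint.h>

    void kernel_main(void) {
        // VGA text memory: 80x25 cells of (attribute << 8) | character
        volatile uint16_t *vga = (volatile uint16_t *)0xB8000;
        vga[0] = (uint16_t)('K' | (0x0F << 8));  // white-on-black 'K', top-left
        for (;;) { }  // nothing to return to
    }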
The good folks at MIT were gracious enough to make it available for free, free as in free beer.
I did this course over ~3 months and learnt immeasurably more than from reading any blog, tutorial, or textbook. There's broad coverage of topics like virtual memory, trap processing, how device drivers work (at a high level), etc., that are core to any modern OS.
Most of all, you get feedback about your implementations in the form of tests, which tell you whether you have a working and effective solution.
10/10 highly recommended.
Maybe it's a matter of marketing the product and managing expectations, but many of these projects are perfectly OK being "legacy and obsolete" for the sake of simplicity in introducing basic concepts.
Let's take two random examples.
(1) "let's create 3D graphics from scratch" It's quite easy to grab "graphics gems" and create a comprehensive tutorial on software renderer. Sure it won't be practical, and sure it will likely end on phong shading, but for those wanting to understand how 3d models are translated into pixels on screen it's way more approachable than studying papers on nanites.
(2) "let's crate a browser from scratch" It's been widely discussed that creating a new browser today would be complete madness (yet there's Ladybird!), but shaving the scope, even if it wouldn't be able to run most modern websites would be a interesting journey for someone who'd interested in how things work.
PS. Ages ago I made a Flash webpage that was supposed to mimic a desktop computer, for a TV show's ad campaign. The webpage acted as the main character's personal computer, and people could poke around in it between episodes to read his emails, check his browser history, etc. I took it as an opportunity to learn about OS architecture and spent an ungodly amount of unpaid overtime making it as close to Win 3.1 running on DOS as possible. Was it really an OS? Of course not, but it was a chance to get a grasp of certain things, and it was extremely rewarding to have an easter egg with a command.com you could launch to interact with the system.
Would I ever try to build a real OS? Hell no. I'm not as smart as lcamtuf, inviting a couple of friends for drinks and starting Argante. :)
The issue with x86_64 is that you need to understand some of the legacy "warts" to actually use 64 bits. The CPU does not start in "long mode"; you have to gradually enable certain features to get yourself into it. Getting into 32-bit protected mode is prerequisite knowledge for getting into long mode. I recall there was some effort by Intel to resolve some of this friction by breaking backward compatibility, but I'm not sure where that's at.
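For the curious, here's a sketch of the documented enable sequence (per the Intel/AMD manuals), assuming you're already in 32-bit protected mode and pml4_phys points at valid identity-mapped page tables built earlier; the final far jump into a 64-bit code segment still has to be done in assembly:

    #include <stdint.h>

    #define CR4_PAE   (1u << 5)
    #define CR0_PG    (1u << 31)
    #define MSR_EFER  0xC0000080u
    #define EFER_LME  (1u << 8)

    static inline void wrmsr(uint32_t msr, uint32_t lo, uint32_t hi) {
        __asm__ volatile("wrmsr" :: "c"(msr), "a"(lo), "d"(hi));
    }

    static inline uint32_t rdmsr_lo(uint32_t msr) {
        uint32_t lo, hi;
        __asm__ volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
        (void)hi;
        return lo;
    }

    void enable_long_mode(uint32_t pml4_phys) {
        uint32_t cr4, cr0;
        __asm__ volatile("mov %%cr4, %0" : "=r"(cr4));
        __asm__ volatile("mov %0, %%cr4" :: "r"(cr4 | CR4_PAE));  // 1. enable PAE
        __asm__ volatile("mov %0, %%cr3" :: "r"(pml4_phys));      // 2. point at the PML4
        wrmsr(MSR_EFER, rdmsr_lo(MSR_EFER) | EFER_LME, 0);        // 3. set EFER.LME
        __asm__ volatile("mov %%cr0, %0" : "=r"(cr0));
        __asm__ volatile("mov %0, %%cr0" :: "r"(cr0 | CR0_PG));   // 4. enable paging
        // Long mode is now active (in compatibility mode); a far jump to a
        // 64-bit code segment completes the switch.
    }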
The reason most hobby OS projects die is more to do with drivers. While it's trivial to support VGA and serial ports, for a modern machine we need USB3+, SATA, PCI-E, GPU drivers, WIFI and so forth. The effort for all these drivers dwarfs getting a basic kernel up and running. The few novel operating systems that support more modern hardware tend to utilize Linux drivers by providing a compatible API over a hardware abstraction layer - which forces certain design constraints on the kernel, such as (at least partial) POSIX compatibility.
The primary issue with most tutorials that I’ve seen is they don’t, when completed, leave one in a position of understanding “what’s next” in developing a usable system. Sticking with x86_64, those following will of course have set up a basic GDT, and even a bare-bones TSS, but won’t have much understanding of why they’ve done this or what they’ll need to do next to support syscall, say, or properly lay out interrupt stacks for long mode.
By focusing mainly on the minutiae of legacy initialisation (which nobody needs) and racing toward “bang for my buck” interactive features, the tutorials tend to leave those completing them with a patchy, outdated understanding of the basics and a simple bare-metal program that is in no way architected as a good base upon which to continue building a usable OS kernel.
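To give one concrete “what’s next” example: in long mode the TSS stops describing task state and instead holds stack pointers, including the Interrupt Stack Table that gives critical handlers a known-good stack. A sketch (field layout per the Intel SDM; the GDT descriptor and the ltr load are assumed to happen elsewhere):

    #include <stdint.h>

    // 64-bit TSS layout: stack pointers only, no task-switching state.
    struct tss64 {
        uint32_t reserved0;
        uint64_t rsp[3];      // stacks used on privilege change to rings 0-2
        uint64_t reserved1;
        uint64_t ist[7];      // Interrupt Stack Table entries 1-7
        uint64_t reserved2;
        uint16_t reserved3;
        uint16_t iomap_base;  // offset of the I/O permission bitmap
    } __attribute__((packed));

    static uint8_t df_stack[4096] __attribute__((aligned(16)));
    static struct tss64 tss;

    void tss_init(void) {
        // Give the double-fault handler its own stack via IST1; the IDT
        // entry for vector 8 must then select IST index 1.
        tss.ist[0] = (uint64_t)(uintptr_t)&df_stack[sizeof df_stack];
        tss.iomap_base = sizeof tss;  // no I/O bitmap
    }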
> College is hard so I don't remember most of it.
Interesting how counterproductive high-stress environments can be for ingraining knowledge.
The target is to be able to build Linux 0.12 using modern gcc and run it in QEMU, or preferably on a real 80386 machine. AFAIK modern gcc still supports this architecture, so in principle this should be possible. There might be a LOT of changes to make in the source code, though.

The idea is to obtain a very old Linux that can be built easily on modern systems, with modern C support, and gradually add my own stuff using the same tools, so that I don't limit myself to very old toolchains.

Edit: The reason to pick Linux 0.12/0.92 is because 1) it is kinda complete (without networking, I believe) but not too complex (I believe the LOC count is under 20K), 2) it is not a toy OS and can run on real hardware (an 80386 machine), and 3) we have an excellent book for it: https://download.oldlinux.org/CLK-5.0-WithCover.pdf
> Hey! This is an old, abandoned project, with both technical and design issues. Please have fun with this tutorial but do look for more modern and authoritative sources if you want to learn about OS design.
It's a lot of fun to do these things, but it's good not to be convinced that this is actually how modern OSes work, or modern hardware for that matter.
I really should learn more about kernel design. I would probably be a better distributed-systems engineer if I lifted more concepts from OS design.
I think there's a good case for including lattice types in the OS, i.e., from the ground up. Bear in mind that an OS has an API, i.e., a DSL/language for it; MicroPython is a good example.
I knew the bootstrapping expression and its origins, but as a second-language speaker it had never occurred to me that “boot” as a verb was related.
The process of bootstrapping a compiler (not an operating system) is really interesting and the method used is ingenious.
I had read about it some years back.
Basically, at a high level, it's a kind of chicken-and-egg situation:
After writing a compiler for language A (in language A), where A is a new language, how can you compile that compiler to an executable, so that you can compile application programs written in A?
Because there is not yet any runnable compiler for A.
I might not have described the issue very well.
And the concept of cross-compiling also comes into the picture, depending on the situation.
I don't remember the details perfectly now.
If somebody else who knows, describes it, I think it would be interesting for many people here.
For example, let's say you create a new language called Brute. You decide to write it in C (original language).
Stage 1: Create a Brute compiler in C that can compile only a small subset of the Brute language. Let's call it the bruteC compiler.
Stage 2: Implement a compiler in the Brute language and compile it using your bruteC compiler. This produces a brute-compiler executable.
Stage 3: Add a new feature to your Brute compiler source code and compile it using your brute-compiler executable, which produces a new brute-compiler executable.
And so on... At the end your brute-compiler supports all language features and is thus self-hosted.
Let's back up to the start. When you switch on a computer, the power rails on a bunch of the chips come up. As this happens, the chips enter a "reset" state with initial data that is baked into the circuitry. This is the power-on reset (PoR) circuitry. When the power is up and stable, the "reset" is over and the processor starts executing. The initial value of program counter / instruction pointer is called the reset vector, and this is where software execution begins. On an x86 PC, this is something like 0xFFFFFFF0. The memory controller on the system is configured for reads from this address to go NOT to the main RAM, but to a flash memory chip on the board that holds the firmware software. From there the firmware will find your bootable disks, load the bootloader, and pass control to it.
In practice, systems vary wildly.
Is there any resource you could point me to where I can learn?
I’m mostly used to working at a higher abstraction level and taking as “magic” everything below that.
I’d like to bridge the gap with lower level stuff now, it’s about time.
For individual hardware stacks, there is processor and system documentation that explains exactly the memory addresses, the state of registers, the location on a drive where the firmware tries to find your bootloader, and all that.
So the BIOS loads your boot sector at address 0x7C00 and transfers execution there. After that, your code runs.
For UEFI it's different. And on modern PC platforms with things like platform security processors, there's actually a lot that happens even before that stage, to verify the BIOS and to aid secure boot / trusted boot.
I’ve always found it curious that most OS dev tutorials still focus on x86_32, even though nearly all CPUs today are 64-bit. It’s probably because older materials are easier to follow, but starting with x86_64 is more relevant and often cleaner.
Yes, 64-bit requires paging from the start, which adds complexity early on. But you can skip the whole 32-bit protected-mode setup and jump straight from real mode to long mode, or better yet, use UEFI. UEFI lets you boot directly into a 64-bit context, and it's much easier to work with than BIOS once you get the hang of it. You’ll learn more by writing your own boot code than by copying legacy snippets.

UEFI support is straightforward, and you can still support BIOS if needed (most old machines are x64 anyway; it's been around for ages now). Since the ESP partition is FAT32, you can place it early on disk and still leave room for a legacy boot sector if you want dual support. You can even run different payloads depending on the boot method.

EDK2 has a learning curve, but once you understand how to resolve protocols and call services, it’s a huge upgrade. Most of it can be written in plain C. I only use a small inline asm trampoline to set up the stack and jump to stage 2.

Also, skip legacy stuff like the PIC/PIT. They’re emulated now. Use the LAPIC for interrupts and timers, and look into MSI/MSI-X for modern interrupt routing.

One thing I often see missing in older tutorials is a partition table in the bootsector. Without it, AHCI in QEMU won’t detect your disk, at least on some versions, which again shows how crumbly and differently implemented some things can be (neither the AHCI nor the SATA spec requires this, so it's a tricky one if it hits you :D). It’s easy to add, and it makes sense if you want multiple partitions. UEFI helps here too: you can build a disk image with real partitions (e.g., FAT32 for boot, ext2/4 for root) and implement them properly. If you don't take into account that your system will be using partitions, and filesystems within those partitions, you're in for a lot of rewriting.
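To show how little code a UEFI entry point needs, here's a sketch using gnu-efi (the comment above uses EDK2; gnu-efi just makes for a shorter example). The firmware calls you already in 64-bit mode, with a console and filesystem protocols available:

    #include <efi.h>
    #include <efilib.h>

    EFI_STATUS EFIAPI efi_main(EFI_HANDLE image, EFI_SYSTEM_TABLE *systab) {
        InitializeLib(image, systab);        // gnu-efi convenience setup
        Print(L"Hello from 64-bit UEFI\n");  // no VGA pokes, no real mode
        // A real loader would now read the kernel off the ESP, fetch the
        // memory map, call ExitBootServices(), and jump to the kernel.
        return EFI_SUCCESS;
    }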
The structure of the repo also matters. A clean layout makes it easier to grow your OS without it turning into a mess. This project seems to suggest a nice structure from what I can tell, though of course it's also modelled around the tutorial itself. Thinking about architecture independence is also often forgotten, especially when working in newer languages like Rust, which might be appealing; but most C/C++ code can also easily be made portable if you put foresight into your OS running on different platforms. QEMU can emulate a lot of them for you to test on.
TL;DR: Most tutorials teach you how to build an OS for 1990s hardware. Fun, but not very useful today. If you want to learn how modern OSes work, start with modern protocols and hardware. Some are more complex, but many are easier to use and better documented, which can actually speed up development and reduce the need to port higher-level systems over to newer low-level code once you decide you want it to run on something less than 20 years old. (The Athlon 64 was 2003!)
A lot of newer things offer different ways to design your code and systems around them: more logical and readable ways.
To give an example: it's not easily discovered that your machine just emulates the PIT/PIC via IO-APIC routing to the LAPIC :') - it's transparent! So if you don't know, and someone told you something else that seems to work, there's no trigger for you to even consider it until you stumble across it by chance.

There are many of these things, especially in QEMU or similar platforms. (How fast does your LAPIC timer run in QEMU versus what the Intel docs tell you about its speed?)
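For instance, starting the LAPIC timer is only a few MMIO writes (register offsets per the Intel SDM; this sketch assumes the default physical base is mapped and that an IDT entry exists for the chosen vector) - but the actual tick rate under QEMU still has to be measured against another clock rather than taken from the docs:

    #include <stdint.h>

    #define LAPIC_BASE        0xFEE00000u  // default base; check/remap via the APIC base MSR
    #define LAPIC_LVT_TIMER   0x320
    #define LAPIC_TIMER_DIV   0x3E0
    #define LAPIC_TIMER_INIT  0x380

    static inline void lapic_write(uint32_t reg, uint32_t val) {
        *(volatile uint32_t *)(uintptr_t)(LAPIC_BASE + reg) = val;
    }

    void lapic_timer_start(uint8_t vector, uint32_t initial_count) {
        lapic_write(LAPIC_TIMER_DIV, 0x3);                  // divide by 16
        lapic_write(LAPIC_LVT_TIMER, vector | (1u << 17));  // periodic mode
        lapic_write(LAPIC_TIMER_INIT, initial_count);       // ticks between interrupts
    }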
Because if you want to write an OS, don't write a bootloader first. This article essentially describes a stage 1 bootloader on top of legacy BIOS firmware. It will teach you about historical x86 minutiae, which is nothing but a hindrance if you want to understand OS concepts.
Instead you should try to get a bootable ELF (multiboot) or PE (UEFI) image written in a high-level language (C, C++, Rust, Zig, etc.) as soon as possible (a minimal Multiboot header is sketched below). You can boot it up in QEMU very easily (compared to a boot-sector image) and get a real debugger (gdb), a system monitor, and all the other invaluable tooling up. This will greatly improve the velocity of your project and get you to the interesting stuff faster.
Because bare metal/OS projects may be hard to write but they are even harder to debug. You will need all the help you can get.
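Concretely, a Multiboot 1 header is all QEMU needs to boot an ELF kernel directly, skipping the boot-sector stage entirely (the section name, the linker script placing it in the first 8 KiB, and the _start stub are assumptions left to the reader):

    #include <stdint.h>

    #define MB_MAGIC 0x1BADB002u
    #define MB_FLAGS 0x0u

    struct multiboot_header {
        uint32_t magic;
        uint32_t flags;
        uint32_t checksum;  // magic + flags + checksum must sum to zero
    };

    __attribute__((section(".multiboot"), used))
    static const struct multiboot_header mb_header = {
        MB_MAGIC, MB_FLAGS, (uint32_t)-(MB_MAGIC + MB_FLAGS)
    };

Then something like `qemu-system-i386 -kernel kernel.elf -s -S` (Multiboot 1 loads a 32-bit ELF and enters protected mode for you) boots it with a gdb stub attached from instruction one.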
Leaving practicality aside and focusing on aesthetics... Normally, for hobby wheel reimplementation projects like this, I find doing it as close to the bare metal as possible, relying on the minimum amount of other people's code, a lot more fun.
But AFAIK, these days legacy BIOS boot is just some emulated compatibility mode running under UEFI anyway. The bootloader already ran and configured the hardware for you, and then it un-configured a bunch of stuff so you can re-do it. It's role-playing as an 80s PC for you. I find that deeply unsatisfying.
UEFI is the bare-metal API, for all intents and purposes. (Unless you want to go completely blobless, writing your own firmware.)
As a chronic wheel reinventor, I can understand this.
But if I felt like scratching this itch now, I would pick some other hardware than x86. The RP2040 could be a nice target, or maybe some ARM or RISC-V SoC.
That said, I totally understand that there's something different to having a bare metal / hobby OS project running on your daily driver computer than some embedded gadget.
I'm speaking from experience here: I've written bare-metal projects in the style of this project (the first one I did with MS-DOS debug.com, wrote to a floppy disk, and rebooted my machine from the floppy) and the "modern" way with compilers, emulators, debuggers, etc. The difference in productivity and in learning the interesting stuff is huge.
For Rust there's this popular series: https://os.phil-opp.com/
If I were to start a new OS project I would do it in Rust too but as awesome as Rust is, I can't recommend doing a bare metal project as your first foray into the language. Learning two or more things at once doesn't work for me.
Easier to get started with than OSdev and less gruesome legacy hardware details to study to get stuff done.
Next time I need some lights blinking or actuators actuating I'm gonna do it with Rust and rp2040+.
Setting aside the fact that modern CPUs essentially JIT your machine code and their real execution model is completely foreign to their instruction set, a lot of OS development is still poking at a memory address to cause hardware to do something that is decidedly not setting bits in RAM.
Sure, writing the page table management logic will demystify how shared memory works. But you are still just writing a new page table, invoking a particular processor opcode, then trusting the elves living in the processor to tell the elves living in the MMU to update how they translate memory addresses from the processor to memory addresses on the bus.
For this it is best to go with the osdev code first and attempt to learn what's actually going on much later.
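Still, for the curious, that particular poke-and-opcode dance looks roughly like this (a sketch for x86-64 4 KiB pages; walking down to the leaf table, here `pt`, is assumed to be done elsewhere):

    #include <stdint.h>

    #define PTE_PRESENT   (1ull << 0)
    #define PTE_WRITABLE  (1ull << 1)

    // Flush one virtual address from the TLB after changing its mapping.
    static inline void invlpg(void *va) {
        __asm__ volatile("invlpg (%0)" :: "r"(va) : "memory");
    }

    // Map one 4 KiB page: write the leaf page-table entry, then tell the
    // MMU's translation cache that the old entry is stale.
    void map_page(uint64_t *pt, uint64_t virt, uint64_t phys) {
        pt[(virt >> 12) & 0x1FF] = (phys & ~0xFFFull) | PTE_PRESENT | PTE_WRITABLE;
        invlpg((void *)virt);
    }

What the elves actually do with those bits stays behind the curtain, which is rather the point.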
https://web.archive.org/web/20241112015613if_/https://www.cs...
But here's a more succinct explanation of the reason for the memory offset: https://stackoverflow.com/a/51996005
So I'd say that the GUI is not part of the OS... Please say whether you agree or not.
A few years ago I would have recommended the path I took (writing an OS for the Raspberry Pi) but the Pis have gone off the rails recently. So writing a simple OS for a Pi-1B+ is relatively doable (simple enough, sort of OK documentation, biggest downside is needing USB for the keyboard).
Things headed toward disaster once everyone wanted to use the Pi 4 (which was all we could manage to source during the CPU shortage of '23), as the documentation is poor, getting interrupts going became nearly impossible, and the virtual memory/cache/etc. setup on the 64-bit cores (at least a few years ago) was not documented well at all.
But also: most of the work of an OS is supporting a variety of hardware. That's not very intellectually interesting work. Hardware is usually hacky, and since it was only ever tested against the manufacturer's driver code, the only way to use it reliably is to slavishly follow their usage.