We all know that Rust in the kernel is a political operation to make it "woke".
Do you have something against FreeBSD, NetBSD, OpenBSD, illumos, Redox, or (if you don't need a unix-like) Haiku?
Tilck is not like that!
Tilck is a real operating system. It runs on real hardware.
The space for the former is overly crowded, but I think Tilck fills a gap that has been mostly unoccupied until now.
This looks like a very interesting project for OS devs, but if you’re after a Windows replacement there are already a few options out there, from ReactOS to Linux distros that ship with Wine configured as part of the base install.
Great to see it runs on the LicheeRV Nano, a $9 RISC-V board with a 1.0 GHz 64-bit CPU (C906) with MMU, FPU, a 128-bit vector unit (RVV draft 0.7.1), and 256 MB of in-package DDR3. That's comparable to a midlife Pentium III or PowerPC G4.
It should be a very easy port to the Milk-V Duo 256M (same SG2002 SoC) or Duo S (SG2000 with 512 MB RAM for $9.90) or original Duo (closely related CV1800B with 64 MB RAM for $5).
No network or block device support at present, and no multi-core either, by the looks.
And what speed G4, against what speed Pentium 4?
G4 started from 450 MHz and went to 1.6 GHz, with several different µarch along the way.
Conversely, the Pentium 4 ranged from 1.3 GHz to 3.8 GHz.
G4 and P3 at least covered a nearly identical MHz range, and with similar MHz in the market at the same times.
So, I assume, they are comparable "step by step": i.e. the slowest P3 equals the slowest G4, they are equal again at the next stepping, and so on.
Feels like they rolled back a couple of commits, forked to a new branch and started over...
On the architecture itself, PowerPC did far more per cycle.
The first versions of Linux I used did not have multi-core support and I can imagine a Linux without networking, but no block devices?
Does that mean just character devices? How does the FAT driver work then?
Implementing those things is on the TODO list.
It looks like Tilck uses RAM images to provide FAT support, which is mainly used for the initrd.
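To illustrate how a FAT driver can work without any block layer at all: once the whole image sits in RAM, "reading a sector" is just pointer arithmetic. A minimal sketch (not Tilck's actual code; the field offsets are the standard FAT BPB layout):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* A few BPB (BIOS Parameter Block) fields, at their fixed offsets
 * inside the boot sector of any FAT image. */
struct fat_bpb {
    uint16_t bytes_per_sector;    /* offset 11 */
    uint8_t  sectors_per_cluster; /* offset 13 */
    uint16_t reserved_sectors;    /* offset 14 */
    uint8_t  num_fats;            /* offset 16 */
};

/* Read a little-endian u16 without assuming host alignment/endianness. */
static uint16_t rd16(const uint8_t *p) { return (uint16_t)(p[0] | (p[1] << 8)); }

static void fat_parse_bpb(const uint8_t *img, struct fat_bpb *bpb)
{
    bpb->bytes_per_sector    = rd16(img + 11);
    bpb->sectors_per_cluster = img[13];
    bpb->reserved_sectors    = rd16(img + 14);
    bpb->num_fats            = img[16];
}

/* With the image in RAM, a "sector read" is a memcpy from an offset --
 * no block device, no I/O scheduler, no interrupts. */
static void fat_read_sector(const uint8_t *img, const struct fat_bpb *bpb,
                            uint32_t lba, void *dst)
{
    memcpy(dst, img + (size_t)lba * bpb->bytes_per_sector,
           bpb->bytes_per_sector);
}
```

The same parsing code would work unchanged on top of a real block driver later; only `fat_read_sector` would need replacing.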
To be honest, you’re much better off with a battle-hardened platform for your use case. Tilck is meant to be educational, not secure in either the InfoSec or the data-robustness sense.
* “It could be fun to be the author of a block layer, VFS and an AHCI driver for SATA. NCQ support is a must if I do.”
* “It could be fun to port ZFS: ztest and zdb, plus the userland version of the driver into which they hook need pthreads, but the important userland utilities do not. Having to statically link everything would be a pain. I would probably have to reimplement the entire SPL for this to work and that would probably be at least 10,000 lines of code.”
* “If I port ZFS, an NFS server needs to follow so ZFS has an application. This will need support for setting/getting user/group ownership and mode bits. If I rewrite the VFS, I could maybe sneak that feature into it for an NFS server to use while preserving the documented syscall behavior that requires everything to be root. If I port the NFS server from illumos, it could share the SPL code with ZFS. NFSv4 permissions will be needed to make it fully happy. Beyond that, I will need a network stack.”
* “It could be fun to port a network stack. Maybe LwIP could be used.”
* “It could be fun to write an e1000 driver.”
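Part of what makes the e1000 a popular first NIC to write a driver for is how simple its descriptor format is. A sketch of the legacy receive descriptor as documented in Intel's 8254x software developer's manual (the constant names follow common driver conventions):

```c
#include <assert.h>
#include <stdint.h>

/* e1000 legacy receive descriptor: 16 bytes. The driver fills a ring of
 * these with buffer addresses; the NIC DMAs packets in and writes back
 * length + status. */
struct e1000_rx_desc {
    uint64_t buffer_addr; /* physical address of the receive buffer */
    uint16_t length;      /* bytes DMA'd into the buffer */
    uint16_t checksum;    /* packet checksum computed by the NIC */
    uint8_t  status;      /* DD, EOP, ... */
    uint8_t  errors;
    uint16_t special;
} __attribute__((packed));

#define E1000_RXD_STAT_DD  0x01 /* descriptor done: NIC is finished with it */
#define E1000_RXD_STAT_EOP 0x02 /* end of packet */

/* The driver's RX loop polls (or is interrupted and then checks) DD. */
static int e1000_rxd_done(const struct e1000_rx_desc *d)
{
    return d->status & E1000_RXD_STAT_DD;
}
```

The transmit side is symmetric: a ring of 16-byte descriptors plus head/tail registers on the device.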
I have already found the documentation I need if I actually were to implement AHCI and e1000 drivers: https://www.intel.com/content/dam/www/public/us/en/documents...
https://www.intel.com/content/dam/doc/manual/pci-pci-x-famil...
If I were to do all of this, I would likely try my best to make it a production platform. The purpose would be fun.
Anyway, I wonder if these thoughts will continue if I sleep on them.
3 years ago, 75 comments. https://news.ycombinator.com/item?id=34295165 (no riscv64 then)
5 years ago, 7 comments. https://news.ycombinator.com/item?id=28040210
That's why performant "standard" hardware programming interfaces are so important. Those interfaces should be as simple and as stable over time as possible.
Many hardware realms already have such a _performant_ interface: NVMe, USB, etc.
Basically it means DMA, doorbell/IRQ "ring buffers", and command ring buffers ("queues").
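A sketch of that shared pattern (names here are illustrative, not from any particular spec): the driver produces commands into a ring in shared memory, and a single MMIO doorbell write tells the device how far the tail has advanced.

```c
#include <assert.h>
#include <stdint.h>

#define RING_SLOTS 16 /* power of two, so wrap-around is a mask */

struct cmd { uint64_t opcode, arg; };

struct cmd_ring {
    struct cmd slots[RING_SLOTS];
    uint32_t head;               /* consumer (device) index */
    uint32_t tail;               /* producer (driver) index */
    volatile uint32_t *doorbell; /* an MMIO register on real hardware */
};

/* Submit one command; returns 0 on success, -1 if the ring is full.
 * Indices grow monotonically; unsigned subtraction gives the fill level. */
static int ring_submit(struct cmd_ring *r, struct cmd c)
{
    if (r->tail - r->head == RING_SLOTS)
        return -1; /* full: would overwrite slots the device hasn't consumed */
    r->slots[r->tail & (RING_SLOTS - 1)] = c;
    r->tail++;
    *r->doorbell = r->tail; /* one MMIO write notifies the device */
    return 0;
}
```

NVMe submission queues, virtio virtqueues, and NIC TX rings all boil down to variations of this, which is exactly why a small kernel can support them with so little code.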
For those alternative kernel initiatives, this would allow them to become 'real-life' systems much faster.
And a NOTE related to RISC-V hardware: keep an eye on ARM IP piggybacking on RISC-V chips (they are the bad guys with their blocking/$$$ intellectual property). The target goal should also include AV1 video dec/enc blocks instead of MPEG, and DisplayPort instead of HDMI, because those are hardly any better than ARM.
And some hardware is going even further, to the "next step": user-level hardware queues (look at AMD GPUs, which have started to implement those). I know there is 3D-pipeline programming behind this hardware interface, but I believe that if they have started to clean up the base hardware interface, they will clean up their 3D-pipeline programming too.