That's one of the few times I've read about a proposed innovation "in the spirit of UNIX" that was not already present in the original UNIX or one of its descendants.
UNIX: Everything is a file.
=> A directory is a file.
Parent post: Everything is a directory.
A file is a directory.
I.e., a switch from "There are files and special files called directories that are handled differently." to the recursive definition "There are files, which are made up of 0..n files (blobs) and 0..n subdirectories" - so file versus directory is just a VIEW.
Makes sense, and it would make traversal code for files with internal structure much easier to write and read.
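To make that concrete, here's a minimal sketch of the uniform view (the `Node` shape and the ZIP example are my own invention, not any real OS API): a file is just a blob plus named children, and one recursive traversal covers everything.

```python
# Hypothetical sketch: a "file" is (blob, children); directory vs.
# file is just a view of which parts are populated.
Node = tuple[bytes, dict[str, "Node"]]

def walk(name: str, node: Node, depth: int = 0) -> None:
    """One recursive traversal handles plain files, directories,
    and files with internal structure alike -- no special cases."""
    blob, children = node
    print("  " * depth + name)
    for child_name, child in children.items():
        walk(child_name, child, depth + 1)

# A ZIP archive viewed as a directory, no userland parser needed:
archive: Node = (b"", {
    "cat.jpg": (b"\xff\xd8", {}),
    "dogs": (b"", {"rex.jpg": (b"\xff\xd8", {})}),
})
walk("photos.zip", archive)
```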
Expurple•2h ago
> to the recursive definition "There are files, which are made up of 0..n files (blobs) and 0..n subdirectories"
I think it's more like "a file node contains metadata, a binary blob of data (may be empty), and 0..n child files".
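A sketch of that refined node shape (field names and the `.odt` example are hypothetical): every node is uniform, and "directory" is a view rather than a distinct type.

```python
from dataclasses import dataclass, field

@dataclass
class FileNode:
    """Uniform node: metadata + a (possibly empty) blob + 0..n
    child files. No separate directory type exists."""
    meta: dict[str, str]                   # e.g. name, owner, mtime
    blob: bytes = b""                      # may be empty
    children: list["FileNode"] = field(default_factory=list)

    def looks_like_dir(self) -> bool:      # a "view", not a type
        return bool(self.children)

# An ODF document is itself a tree of files:
doc = FileNode(meta={"name": "report.odt"}, children=[
    FileNode(meta={"name": "content.xml"}, blob=b"<xml/>"),
    FileNode(meta={"name": "media"}),
])
```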
Agreed that this idea is very elegant and removes special cases, nodes become uniform. And the argument for reusing the OS(FS)-provided tree abstraction is compelling.
Although I can imagine some performance concerns in the real world. If implemented naively and similarly to the existing Unixes, this model results in a lot of small fragmented blocks and separate syscalls+descriptors for dealing with each small file. Also, when the "tree" is actually a sequential array of nameless elements, there's some extra overhead involved with writing and storing made-up file names, as well as sorting by name when reading. This could be remedied by some new API. And a single tree implementation reused by everything could be more cache-friendly than having a userland parser for every "old" format in every application.
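The made-up-names overhead can be illustrated with a toy sketch (assuming zero-padded decimal names; nothing here comes from a real filesystem): exposing a nameless sequence as a directory means inventing sortable names on write and re-sorting on read.

```python
def synthetic_names(n_items: int) -> list[str]:
    """Invent file names for a nameless sequential array.
    Zero-padding makes lexicographic order match element order."""
    width = len(str(n_items - 1))
    return [str(i).zfill(width) for i in range(n_items)]

def read_in_order(entries: dict[str, bytes]) -> list[bytes]:
    """Reading back requires a sort by name -- pure overhead
    compared to a native array-aware API."""
    return [entries[name] for name in sorted(entries)]

names = synthetic_names(12)   # '00', '01', ..., '11'
store = {n: bytes([i]) for i, n in enumerate(names)}
elements = read_in_order(store)
```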
Anyway, this mental model is useful and I'd like to see and try out the "automounting" that the author describes.
jll29•4h ago