Just pointing out how the same general idea can take distinct forms of implementation.
It can mean a few different things:
- Kernel objects have an opaque 32-bit ID local to each process.
- Global kernel objects have names that are visible in the file system.
- Kernel objects are streams of bytes (i.e. you can call `read()`, `write()` etc.).
The first is a rather arbitrary choice that limits modern kernels. (For example, a kernel might want full 64-bit handles so it can pack tag bits into them - still possible with 32 bits, but now you are pressing against the limit of the usable ID space.)
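A minimal sketch of the kind of tagging meant here - the helper names and the 3-bit tag in the low bits are made up for illustration, not any particular kernel's scheme:

```c
#include <stdint.h>

/* Hypothetical layout: low 3 bits carry a type tag, the rest is the
 * object index. With a 64-bit handle this costs almost nothing; with
 * only 32 bits, stealing tag bits eats noticeably into the ID space. */
#define HANDLE_TAG_BITS  3
#define HANDLE_TAG_MASK  ((1u << HANDLE_TAG_BITS) - 1)

static inline uint64_t handle_pack(uint64_t index, unsigned tag) {
    return (index << HANDLE_TAG_BITS) | (tag & HANDLE_TAG_MASK);
}

static inline uint64_t handle_index(uint64_t h) { return h >> HANDLE_TAG_BITS; }
static inline unsigned handle_tag(uint64_t h)   { return (unsigned)(h & HANDLE_TAG_MASK); }
```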
The second and third are mostly wrong. Something like a kernel synchronization primitive or an I/O control primitive does not behave anything like a file or a stream of bytes, and indeed you cannot use any normal stream operations on them. What's the point of conflating the concepts of file system paths and kernel object namespacing? It makes a kind of sense to consider the latter a superset of the former, but they are clearly fundamentally different.
The end result is that the POSIX world is full of protocols. A lot of things are shoehorned into file-like streams of bytes (see for example: the Wayland protocol), even when a proper RPC/IPC mechanism would be more appropriate. Compare with the much-maligned COM system on Windows, which, though primitive and outdated, does provide a much richer - and safer - channel of communication.
Also, I always found it weird that a lot of things are "files" in Linux, but not Ethernet interfaces, so you have to do that enumeration dance before getting an fd to ioctl() on. I remember HP-UX having them as files in /dev, which was neat.
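Roughly what that dance looks like on Linux - a sketch with the error handling pared down, using if_nameindex() and a throwaway socket fd to issue the SIOCGIFFLAGS ioctl:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>          /* if_nameindex, struct ifreq */
#include <sys/ioctl.h>       /* SIOCGIFFLAGS */
#include <sys/socket.h>

/* No /dev/eth0 to open: enumerate interface names, open an unrelated
 * socket, and ioctl() on that fd with the name stuffed into a struct. */
int main(void) {
    struct if_nameindex *ifs = if_nameindex();
    if (!ifs)
        return 1;

    int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* any socket will do */
    if (fd < 0)
        return 1;

    for (struct if_nameindex *i = ifs; i->if_index != 0; i++) {
        struct ifreq req;
        memset(&req, 0, sizeof req);
        strncpy(req.ifr_name, i->if_name, IFNAMSIZ - 1);
        if (ioctl(fd, SIOCGIFFLAGS, &req) == 0)
            printf("%s: flags=%#x\n", i->if_name, (unsigned short)req.ifr_flags);
    }

    if_freenameindex(ifs);
    close(fd);
    return 0;
}
```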
My main complaint in general with everything-is-a-file is that it isn't taken far enough :) (Well, on anything except Plan 9.)
I think the article articulated it decently:
> It is the file descriptor that makes files, devices, and inter-process I/O compatible.
Or, if you like, because pushing everything into that single abstraction makes it easier to use, including in ways not considered by the original devs. Consider, for example, exposing battery information. On other systems, you'd need to compile a program against some special kernel API to query the batteries and then check their stats (say, charge levels). In Linux, you can just enumerate /sys/class/power_supply and read plain files to get that information.
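A sketch of that enumeration, assuming the usual sysfs layout (entries without a capacity file, like AC adapters, just get skipped):

```c
#include <dirent.h>
#include <stdio.h>

/* Read battery charge levels by walking /sys/class/power_supply and
 * reading each supply's plain-text "capacity" file. No special battery
 * API needed: just opendir(), fopen(), fscanf(). */
int main(void) {
    const char *base = "/sys/class/power_supply";
    DIR *dir = opendir(base);
    if (!dir)
        return 1;

    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        if (entry->d_name[0] == '.')
            continue;

        char path[512];
        snprintf(path, sizeof path, "%s/%s/capacity", base, entry->d_name);

        FILE *f = fopen(path, "r");
        if (!f)
            continue;          /* e.g. AC adapters have no capacity file */

        int percent;
        if (fscanf(f, "%d", &percent) == 1)
            printf("%s: %d%%\n", entry->d_name, percent);
        fclose(f);
    }

    closedir(dir);
    return 0;
}
```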
I asked an LLM how to do this on Windows and got
> wmic path Win32_Battery get EstimatedChargeRemaining
Which doesn't seem meaningfully worse than looking at some /sys path; it's not clear what the file abstraction adds for me there.
Because the flip side of your example is that you now have a plain-text protocol, and if you want to do anything besides cat’ing it to the console, you’re now writing a parser.
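For instance, pulling even one field out of a sysfs uevent-style KEY=value file already means writing a small parser. A sketch - the path and key below are only examples of that kind of file:

```c
#include <stdio.h>
#include <string.h>

/* Pull a single key out of a KEY=value file such as
 * /sys/class/power_supply/BAT0/uevent. Returns 0 if found, -1 otherwise. */
static int read_uevent_value(const char *path, const char *key,
                             char *out, size_t out_len) {
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;

    char line[256];
    size_t key_len = strlen(key);
    int found = -1;

    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, key, key_len) == 0 && line[key_len] == '=') {
            line[strcspn(line, "\n")] = '\0';       /* strip trailing newline */
            snprintf(out, out_len, "%s", line + key_len + 1);
            found = 0;
            break;
        }
    }

    fclose(f);
    return found;
}

int main(void) {
    char value[64];
    if (read_uevent_value("/sys/class/power_supply/BAT0/uevent",
                          "POWER_SUPPLY_CAPACITY", value, sizeof value) == 0)
        printf("capacity: %s%%\n", value);
    return 0;
}
```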
It's one of the local maxima for generality. You could make everything an object or something, but it would require a lot of ecosystem work and eventually get you into a very similar place.
> Because the flip side of your example is that you now have a plain-text protocol, and if you want to do anything besides cat’ing it to the console, you’re now writing a parser.
Slight nuance: you could have everything-is-a-file without everything-is-text. Unix usually does both, and I think both are good, but e.g. /dev/video0 is a file but not text. That said, text is also a nice local maximum, and the one that requires the least work to buy into. Contrast, say, PowerShell, which does better... as long as your programs are integrated into that environment.
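A sketch of what "a file but not text" means in practice: open() hands you an fd for /dev/video0, but from then on you talk V4L2 ioctls, not streams of text (this assumes a V4L2 device actually exists at that path):

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>   /* V4L2: VIDIOC_QUERYCAP, struct v4l2_capability */

int main(void) {
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0)
        return 1;

    /* Ask the driver who it is - an ioctl on the fd, not a read(). */
    struct v4l2_capability cap;
    memset(&cap, 0, sizeof cap);
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0)
        printf("driver=%s card=%s\n",
               (const char *)cap.driver, (const char *)cap.card);

    close(fd);
    return 0;
}
```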