Interesting take.
I think that the FHS is still extremely helpful for packagers, sysadmins and others so they won't stomp on each other's feet constantly. It helps set expectations and prevents unnecessary surprises.
Just the fact that one particular FHS rule might be outdated or even harmful doesn't mean that the FHS as a whole has outlived its usefulness.
FHS hasn't changed in years. Since then, sandboxing, containers, novel package schemes, and more are the zeitgeist. What does the FHS say about them?
Nothing keeps you from following the FHS inside your container or sandbox.
Are you referring to the location where container images live? Then `/var/lib/containers/` and `/var/lib/containers/storage/` would be perfectly FHS compliant.
Systemd frustrates and angers people with Poettering's complete disregard for bug reports, tradition, and basic common courtesy. At the same time, change needed to happen and change is gonna hurt. And big changes can't wait until they're just as stable as the old system: does anyone develop software like that in their own careers? I try not to ship complete crap but "just as stable as v1" is never a goal.
Poettering is a Microsoft employee. It is normal that he follows the direction of the mothership. What is not normal is that he has so many blind followers.
every distro has defined their own new file system layout standard
sure they all started out with the common ancestor of FHS 3.0, but diverged since then in various degrees
and some modern competing standards try to fix it (mainly UAPI Group)
(And yes, some people will go on and on about how UAPI is just a way for systemd to force their ideas on others, but if you don't update a standard for 10+[1] years and aren't okay with others taking over this work either, idk how you can complain about them making their own standard.)
[1]: It's more like 20 years, but 10 years ago the Linux Foundation took over its ownership.
I mean, yeah, I get it, systemd bad, democracy good, but these world-writable lock folders are actually a huge pain, and adding some shim code to upgrade to a more secure solution seems achievable?
`rw,nosuid,nodev,noexec,size=5120k`

Now obviously people these days generally know about that, so hopefully they don't use predictable file names, but that's one way.
Unless you do open("/run/lock/foo.lock", O_WRONLY|O_CREAT|O_EXCL|O_NOFOLLOW)
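For reference, a minimal sketch of that pattern in Python (the temp dir stands in for /run/lock, and `foo.lock` is just an example name):

```python
import os
import tempfile

def try_acquire(path: str) -> bool:
    """Atomically create a lock file; fail if it already exists.

    O_EXCL makes creation atomic, and O_NOFOLLOW refuses to follow a
    symlink planted at the path (the classic world-writable-dir attack).
    Note that O_EXCL already fails with EEXIST on a planted symlink.
    """
    try:
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL | os.O_NOFOLLOW, 0o644)
    except FileExistsError:
        return False
    os.write(fd, f"{os.getpid()}\n".encode())
    os.close(fd)
    return True

lock = os.path.join(tempfile.mkdtemp(), "foo.lock")
print(try_acquire(lock))  # → True, first caller wins
print(try_acquire(lock))  # → False, file already exists
```

The remaining weakness is that nothing cleans the file up if the holder crashes, which is exactly what flock avoids.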
I remember the time (around 2001-2002) when just about every binary was discovered to have some variant on this exact exploit. I happened to be linux sysadmin for a very large, high-profile set of linux boxes at the time. Happy times.
Annoying side effect: now you gotta guess which process created the darn lockfile.
A more sensible approach is to do sanity checking on the lockfile and its contents (i.e. does the contained PID match one's own binary).
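A rough sketch of that sanity check (hedged: the helper name is made up, and PID reuse means this is only a heuristic):

```python
import os
import tempfile

def lock_is_stale(path: str) -> bool:
    """Return True if the lock file's recorded PID no longer exists.

    Only a heuristic: the PID may have been recycled by an unrelated
    process (checking /proc/{pid}/exe against one's own binary narrows
    that down), which is one more reason flock(2) is preferred.
    """
    try:
        pid = int(open(path).read().strip())
    except (OSError, ValueError):
        return True  # unreadable or garbled lock file: treat as stale
    try:
        os.kill(pid, 0)  # signal 0: existence check only, nothing is sent
    except ProcessLookupError:
        return True
    except PermissionError:
        return False  # process exists but belongs to another user
    return False

path = os.path.join(tempfile.mkdtemp(), "demo.lock")
open(path, "w").write(f"{os.getpid()}\n")
print(lock_is_stale(path))  # → False, our own PID is alive
```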
If you want what Debian provides, it's a poor choice for you... but -IME- it doesn't break on upgrade, unlike some Debian-derived distros I've tried in the past.
[0] Something along the lines of: "Always try to package exactly what's provided by upstream, try hard to get distro patches upstreamed, and try to have the latest available upstream release in the 'testing' channel."
[1] Well, I do have a machine that (aside from "side-loading" kernel updates from time to time) hasn't been updated in four years. While I'll try to update that one in the normal way, I'm probably going to need to reinstall.
Thus the title reflects the most interesting bit of the story.
like overriding it now makes a lot of sense, there needs to be grace periods etc.
but we live in a world where OSes have to become increasingly more resilient to misbehaving programs (mainly user programs, or "server programs" you can mostly isolate with services, service accounts/users etc.). And with continuous increases in both supply chain attacks and crappy AI code this will only get worse.
And as such, a tmpfs with shared quotas/usage limits being used by all user-space programs as well as core components like lvm2 and dmraid is kind of a bad idea.
and for such robustness there aren't that many ways around this change, basically the alternatives are:
- make /var/lock root only and break a very small number of programs which neither use flock nor follow the XDG spec (XDG_RUNTIME_DIR is where your user scoped locks go, like e.g. for wayland or pipewire)
- change lvm2, dmraid, alsa (the low-level parts) and a bunch of other things you could say are core OS components to use a different root-only lock dir. That is a lot of work and a lot of breaking changes, much more than the first approach.
- use a "magic" virtual file system which presents a single unified view of /var/lock, but under the hood magically separates entries into different tmpfs mounts with different quotas (e.g. based on user id the file gets remapped to /run/user/{uid}, except root gets a special folder and I guess another folder for "everything else"???). That looks like a lot of complexity to support a very small number of programs doing something in a very (20+ years) outdated way. But similar tricks do exist in systemd (e.g. PrivateTmp).
kinda only the first option makes sense
but it's not that it needs to be done "NOW", like in a year would be fine too, but in 5 years probably not
I hope they have a change of mind in their approach.
https://pubs.opengroup.org/onlinepubs/9799919799/utilities/V...
Personally I find it an interesting observation, and Microsoft contributing to Linux in any way should be met with skepticism based on the entire last 30 years.
People are so quick to wipe away any wrongdoing from Microsoft as soon as they get thrown a bone, there's some interesting psychology here.
Like, should Lockheed intentionally hire North Korean programmers at cheap rates because North Korea can afford to devote resources to helping Lockheed? The issue here is not primarily that North Korea is a massive citizen-trampling megastate. It's that Lockheed's interests are misaligned with North Korea's.
Work now mandates "If you want to use Linux, it has to be Ubuntu" (and I complied). On the personal front, about a decade ago I moved from "vanilla" Gentoo to Calculate Linux, which was and still is 100% Gentoo.
These days difference is even smaller, but already 10+ years ago Calculate had sane profiles as well as all software packages as pre compiled binaries matching those profiles.
And although systemd is one of configurable USE keywords on Calculate/Gentoo - it's still not the default.
So there probably are some folks that haven't been touched by systemd at all... For now.
[1] https://shepherding.services/manual/html_node/Introduction.h...
* There is an option for the old behavior.
* It is a security issue, and better solutions exist to replace it.
* FHS isn't maintained.
I think everyone involved would prefer updates to the applications, which fix the issue. Debian opted, for now, for reliability for its users, which fits their mission statement. On Arch /run/lock is only writeable for the superusers, which improves security. As a user I value reliability and security and that legacy tools remain usable (sometimes by default, sometimes by a switch).

> On Arch /run/lock is only writeable for the superusers, which improves security.

Does it? That means anyone who needs a lock gets superuser, which seems like overkill. Having a group with write permissions would seem to improve security more?
a global /run/lock dir is an outdated mechanism not needed anymore
When the standard was written (20 years ago) it standardized a common way programs used to work around not having something like flock. This is also reflected in the specifics of FHS 3.0, which requires lock files to be named `LCK..{device_name}` and to contain the process id in a specific encoding. Now the funny part: flock was added to Linux around 1996, so even when the standard was written it was already on the way to being outdated, and it was just a matter of time until most programs started using flock.
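For anyone who hasn't used it, the flock approach looks roughly like this (a Python sketch over the flock(2) syscall; the path is a throwaway temp file standing in for a real lock location):

```python
import fcntl
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.lock")

# First holder takes an exclusive lock. The kernel owns the lock state,
# so a crashed holder releases it automatically -- no stale file left.
holder = open(path, "w")
fcntl.flock(holder, fcntl.LOCK_EX | fcntl.LOCK_NB)

# A second, independent open of the same file cannot take the lock
# (flock treats separate file descriptions independently, even within
# one process).
contender = open(path, "w")
try:
    fcntl.flock(contender, fcntl.LOCK_EX | fcntl.LOCK_NB)
    print("acquired")
except BlockingIOError:
    print("busy")          # → busy

holder.close()  # closing the fd releases the lock
fcntl.flock(contender, fcntl.LOCK_EX | fcntl.LOCK_NB)
print("acquired after release")
```

The kernel dropping the lock with the last fd is the big win over presence-of-file locks: no cleanup code, no stale-lock recovery.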
This brings us to why this being an issue makes, IMHO, little sense:
- a lot of use cases for /var/lock have been replaced with flock
- having a globally writable dir shared across users has a really bad history (including security vulnerabilities), so there have been ongoing efforts to create alternatives for anything like that, e.g. /run/user/{uid}, ~/.local/{bin,share,state,etc.}, systemd PrivateTmp etc.
- so any program running as a user and not wanting to use flock should place its lock file in `/run/user/{uid}`, like e.g. pipewire, wayland, docker and similar do (specifically $XDG_RUNTIME_DIR, which happens to be `/run/user/{uid}`)
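A minimal sketch of that placement rule (the `myapp.lock` name and the fallback behavior are made up for illustration):

```python
import os
import tempfile

def user_lock_path(name: str) -> str:
    """Pick a per-user lock location the way XDG-aware programs do.

    $XDG_RUNTIME_DIR is normally /run/user/{uid}: per-user, mode 0700,
    tmpfs-backed, so no other user can pre-create or squat on the name.
    """
    runtime = os.environ.get("XDG_RUNTIME_DIR")
    if runtime is None:
        # Fallback for sessions without a runtime dir (assumption: a
        # fresh private temp dir is an acceptable degraded mode here).
        runtime = tempfile.mkdtemp(prefix=f"runtime-{os.getuid()}-")
    return os.path.join(runtime, name)

path = user_lock_path("myapp.lock")
print(path.endswith("myapp.lock"))  # → True
```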
So the only programs affected by it are programs which:
- don't run as root
- don't use flock
- and don't really follow best practices introduced with the XDG standard either
- ignore that it was quite predictable that /var/lock would get limited or outright removed, due to long-standing efforts to remove globally writable dirs everywhere
i.e. software stuck in the last century, or in this case more like two decades ago, in the 2000s
But that is a common theme with Debian Stable: you have to fight even to just remove something we have known for 20 years to be a bad design. If it weren't for Debian's reputation, I think the systemd devs might have been more surprised by this being an issue than the Debian maintainers were about some niche tools using outdated mechanisms breaking.
OK, but suppose you have a piece of software you need to run, that's stuck in the last century, that you can't modify: maybe you lack the technical expertise, or maybe you don't even have access to the source code. Would you rather run it as root, or run it as a user that's a member of a group allowed to write to that directory?
The systemd maintainers (both upstream and Debian package maintainers) have a long history of wanting to ignore any use cases they find inconvenient.
This was a general question to begin with.
> There is an option for the old behavior.
The discussion never centered on an option for keeping old behavior for any legitimate reason. The general tone was "systemd wants it this way, so Debian shall oblige". It was a borderline flame-war between more reasonable people and another party which yelled "we say so!"
> It is a security issue, and modern solutions exist to replace it.
I'm a Linux newbie. Using Linux for 23 years and managing them professionally for 20+ years. I have yet to see an attack involving /var/lock folder being world-writeable. /dev/shm is a much bigger attack surface from my experience.
Migration to flock(2) is not a bad idea, but acting like Nero and setting mailing lists ablaze is not the way to do this. People can cooperate, yet some people love to rain on others and make their life miserable because they think their demands require immediate obedience.
> FHS isn't maintained.
Isn't maintained or not improved fast enough to please systemd devs? IDK. There are standards and RFCs which underpin a ton of things which are not updated.
We tend to call them mature, not unmaintained/abandoned.
> On Arch /run/lock is only writeable for the superusers. As user I value reliability and the legacy tools are usable.
I also value the reliability and agree that legacy tools shall continue working. This is why I use Debian primarily, for the same last 20+ years.
If FHS hadn't been unmaintained for nearly 2 decades, I'm pretty sure non-root /var/lock would most likely have been deprecated over a decade ago (or at least recommended against). We have known for decades that cross-user writable global dirs are a pretty bad idea; if we can't even fix that, I don't see a future for Linux tbh.(1)
Sure, systemd should have given them a heads up, and sure, it makes sense to temporarily revert this change to have a transition period. But this change has been on the horizon for over 20 years, and there isn't really any way around it long term.
(1): This might sound a bit ridiculous, but security requirements have been changing. In 2000, trusting most programs you ran was fine. Today, not so much; you can't really trust anything you run anymore. And it's just a matter of time until it is negligent (as in legal-liability negligent) to trust anything but your core OS components, and even those not without constraints. As much as it sucks, if Linux doesn't adapt it dies. And it is adapting, but mostly outside of the GPG/FSF space, and also I think a bit too slowly on the desktop. I'm pretty worried about that.
> > FHS isn't maintained.

> Isn't maintained or not improved fast enough to please systemd devs? IDK.
more like not maintained at all for 20+ years, in a context where everything around it has had major changes in requirements/needs
They didn't even fix the definition of /var/lock. The spec says it can be used for various lock files, but also specifies a naming convention that must be used, which only works for devices, and only for those not in a sub-directory structure. It also fails to specify whether you should (or at least are allowed to) clear the dir on reboot, something it does clarify for /tmp. A footnote also says all locks should be world-readable, but that hasn't been true for a long time: there are certain lock-grouping folders (also not in the spec) where you don't need or want them to be public, as that only leaks details an attacker could maybe use in some obscure niche case.
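For reference, the device-lock convention the spec does pin down (`LCK..<device>`, with the PID stored as a ten-byte ASCII decimal number plus a trailing newline, per the HDB UUCP format it references) can be sketched like this; the temp dir stands in for /var/lock:

```python
import os
import tempfile

def write_fhs_device_lock(lockdir: str, device: str) -> str:
    """Write a /var/lock-style device lock the way FHS 3.0 describes it:
    a file named LCK..<device> whose contents are the PID encoded as a
    ten-character ASCII decimal number with a trailing newline.
    """
    path = os.path.join(lockdir, f"LCK..{device}")
    with open(path, "w") as f:
        f.write(f"{os.getpid():10d}\n")
    return path

path = write_fhs_device_lock(tempfile.mkdtemp(), "ttyS0")
print(os.path.basename(path))   # → LCK..ttyS0
print(len(open(path).read()))   # → 11 (ten bytes of PID + newline)
```

Note how device-centric this is: it simply has nothing to say about the generic "various lock files" usage the same section permits.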
A mature standard is one which gets fixes, improvements and clarifications, including with respect to changes in the environment it's used in: a standard which recognizes when there is some suboptimal design and adds a warning recommending against that suboptimal design, etc. Nothing of the sort happened with this standard.
What we see instead is a standard which not only hasn't gotten any relevant updates for ~20 years, but hasn't even fixed inconsistencies in itself.
For a standard to become mature it needs to be maintained for a long enough time. This standard wasn't maintained: it still has "bugs"/"inconsistencies" which should have been fixed 20 years ago. Just because something has been used for a long time doesn't mean it's mature.
And if you want to be nitpicky, even Debian doesn't "fully" comply with FHS 3.0, because there are points in it which just don't make sense anymore, and they haven't been fixed for 20 years.
The "security issue" expressed is that someone creates 4 billion lock files. The entire reason an application would have a path to create these lock files is because it's dealing with a shared resource. It's pretty likely that lock files wouldn't be the only route for an application to kill a system. Which is a reason why this "security issue" isn't something anyone has taken seriously.
The reason is much more transparent if you read between the lines. Systemd wants to own the "/run" folder and they don't like the idea of user space applications being able to play in their pool. Notice they don't have the same security concerns for /var/tmp, for example.
i think that is somewhat reasonable. but then systemd should have its own space, independent of a shared space: /var/systemd/run or /run/systemd/ ?
This would go contrary to an unstated goal: making everyone else dance to systemd's tune, for their own good.
[0] <https://lore.kernel.org/all/20140402144219.4cafbe37@gandalf....>
[1] <https://lore.kernel.org/all/CA+55aFzCGQ-jk8ar4tiQEHCUoOPQzr-...>
The central problem with systemd is that they don't want to let you go about your business, they want you to conform to their rule.
Looking from the outside, it looks more that this is a failure of the Debian systemd package maintainer to follow Debian's rules. (Though since I'm not a part of that community, I recognize that there may be cultural expectations I'm not aware of.)
Yes this is a good response from upstream. I can work with that, but in that case, even this response didn't get reflected to mailing list discussion, or drowned out instantly.
My question was more general though, questioning systemd developers' behavior collectively (hence the projects' behavior) through time.
As a user, systemd has improved my productivity tremendously.
This kind of bad-mouthing of developers who work on solutions to complex problems, code that runs on billions of machines, reflects more on your own fragile ego than on them.
> As a user, systemd has improved my productivity tremendously.
Both can be true at the same time. Particularly in the beginning, there was a long string of really important things that used to Just Work that were broken by systemd. Things like:
1. Having home directories in automounted NFS. Under sysv, autofs waited until the network was up to start running. Originally under systemd, "the network" was counted as being up when localhost was up.
2. Being able to type "exit" from an ssh session and have the connection close. Under systemd, closing the login shell would kill -9 all processes with that userid, including the sshd process handling the connection, before that process could close the socket for the connection. Meaning you'd type "exit" in an interactive terminal and it would hang.
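(For what it's worth, that kill-on-logout behavior is governed by a logind setting these days; on a system where session killing is enabled, it can be turned off with a config fragment like:)

```ini
# /etc/systemd/logind.conf
[Login]
KillUserProcesses=no
```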
It's been a while since I encountered any major issues with systemd, but for the first few years there were loads of issues with important things that used to Just Work and then broke and took forever to fix because they didn't happen to affect the systemd maintainers. If you didn't encounter any of these, it's probably because your use cases happened to overlap theirs.
Yes, systemd and journalctl have massively simplified my life. But I think it could have been done with far less disruption.
There's no need to be rude. While I'm not anti-systemd; it didn't change my life tremendously, either.
People tend to bash init scripts, but when they are written well, they both work and port well between systems. At least this is my experience with the fleet I manage.
Dependencies worked pretty well in Parallel-SysV, too, again from my experience. Also, systemd is not faster than Parallel-SysV.
It's not that "I had to learn everything from scratch!" woe either. I'm a kind of developer/sysadmin who never whines and just reads the documentation.
I wrote tons of service files and init scripts during Debian's migration. I was a tech-lead of a Debian derivative at that time (albeit working literally underground), too.
systemd and its developers went through a lot of phases, remade a lot of mistakes despite being warned about them, and took at least a couple of wrong turns for which they got booed for all the right reasons.
The anger they pull onto themselves is not unfounded, yet I don't believe they should be on the receiving end of a flame-war.
From my perspective, systemd developers could benefit tremendously from stepping down from their thrones and looking eye to eye with their users. Being kind towards each other never harms anyone, including you.
Systemd basically arose out of a frustration at the legacy issues so the whole project exists as a modernizing effort. No wonder they consider backwards compatibility low priority.
Systemd doesn't work for me, but it has taken over most Linux distributions, so clearly it's got something people want that I don't understand. That was the case for PulseAudio too.
a company that considers "consent" to be a dirty word
>Debian Policy still cites the FHS, even though the FHS has gone unmaintained for more than a decade.
What ongoing maintenance would a file system standard require? A successful standard of that type would have to remain static unless there was a serious issue to address. Regular changes are what the standard was intended to combat in the first place.
>The specification was not so much finished as abandoned after FHS 3.0 was released...
OK.
>...though there is a slow-moving effort to revive and revise the standard as FHS 4.0, it has not yet produced any results.
So it is not abandoned then. A slow moving process is exactly what you would want for the maintenance of a file system standard.
>Meanwhile, in the absence of a current standard, systemd has spun off its file-hierarchy documentation to the Linux Userspace API (UAPI) Group as a specification. LWN covered that development in August, related to Fedora's search for an FHS successor.
Ah. Systemd/Fedora want a standard that they can directly control without interference from others.
A standard does no good if it does not reflect reality. I think it is a worthwhile effort to try to bring it back in line with actual real world usage.
FHS seems to specifically imbue the user with the responsibility and consequences of filling up the disk.
[1] https://freedesktop.org/wiki/Software/systemd/separate-usr-i...
[2] https://www.freedesktop.org/wiki/Software/systemd/TheCaseFor...
systemd relies on things in /usr being available, including to decide which scripts to run, and mounting /usr would be one of those scripts, so it has a chicken-and-egg problem.
But ah, it doesn't! Instead the world needs to make sure /usr is mounted before systemd even gets started, so systemd doesn't have to fix its bug.
Personally, I don't mind /usr/bin merging with /bin, the benefit I can see is no more squabbling over whether something should be in /bin or not (i.e. is this tool needed to boot the system, or not?)
> the world needs to make sure /usr is mounted before systemd even gets started, so systemd doesn't have to fix its bug.
Unironically in the same post despite being, to my untrained eye, the same thing.
One is like "I'll run some scripts in order, everything else is on you", the other is like "I'll take care of everything, I'll do that, WHAT YOU DIDN'T MOUNT /USR ? SHAME ON YOU I DON'T WANT TO DEAL WITH THAT CORNER-CASE"
From the creators of systemd we also have GNOME, PulseAudio, and Wayland. They have some design philosophy in common.
BTW most sysvinit distros barely even use sysvinit. sysvinit is a service monitor, similar to systemd but more primitive, but typically most of what it's configured to do is to launch some shell scripts on startup. We really have "systemd distros" and "ad-hoc script distros", not sysvinit distros ("ad-hoc" is not a pejorative). I don't know why they don't make init a shell script directly - you can do that, and it's typically done that way in initramfs.
I want a nail only driven half in and at some crooked angle, that's my business.
It's not my hammer's job to agree or disagree that it's a bad nail-hammering job as far as it knows. I don't want to have to convince it of the validity of a use-case it didn't think of before, or thought of and decided it doesn't agree to support.
I just want that crude coat hanger and I don't care who else likes it or doesn't like it or who else thinks I should buy an actual coat hanger and attach it in some way that someone else approves of.
especially for image based stuff it's a pain
which includes OCI images for things like docker
but also image based distros like e.g. ostree (as used through rpm-ostree by Atomic Fedora desktops like Fedora Silverblue, but also in similar but different forms something Ubuntu has been experimenting with)
[1] https://lists.busybox.net/pipermail/busybox/2010-December/07...
this doesn't matter for OS X, whose main changes mostly tend to diverge away from its roots in a fully proprietary direction
but it does matter if you build image based Linux distros which might be the future of Linux
One of the purposes of usrmerge is to cleanly separate the read-only and read-write parts of the system. This helps with image-based distros, where /usr can be on its own read-only filesystem, and related use cases such as [1]. Usrmerge is not required for image-based distros to work [2], but it makes things cleaner.
macOS, starting in 2019, is also an 'image-based distro', in that it has a read-only filesystem for system files and a separate read-write filesystem for user data. However, the read-only filesystem is mounted at / instead of /usr. Several different paths under the root need to be writable [3], which is implemented by having a single read-write filesystem (/System/Volumes/Data) plus a number of "firmlinks" from paths in the read-only filesystem to corresponding paths in the read-write filesystem. Firmlinks are a bespoke kernel feature invented for this purpose.
Both approaches have their advantages and disadvantages. The macOS approach is nice in that the system filesystem contains _all_ read-only files/directories, whereas under "distro in /usr" scheme, you need a separate tmpfs at / to contain the mount points and the symlinks into /usr. But "distro in /usr" has the advantage of making the separation between read-only and read-write files simpler and more visible to the user. Relatedly, macOS's scheme has the disadvantage that every writable file has two separate paths, one with /System/Volumes/Data and one without. But "distro in /usr" has the opposite disadvantage, in that a lot of read-only files have two separate paths, one with /usr and one without. Finally, macOS's scheme has the disadvantage that it required inventing and using firmlinks. Linux can already achieve similar effects using bind mounts or overlayfs, but those have minor disadvantages (bind mounts are more annoying to set up and tear down; overlayfs has a bit of performance overhead). Actual firmlinks are not necessarily any better, though, since they don't have a clear story for being shared between containers (which macOS does not support). It is nice that "distro in /usr" doesn't require any such complexity.
Ultimately, the constraints and motivations on both sides are quite different. macOS couldn't have gotten everything read-only under one directory as easily because it has /System in addition to /usr. macOS doesn't have containers. macOS doesn't have different distros with different filesystem layouts and deployment mechanisms. And philosophically, for all that people accuse systemd of departing from Unix design principles, systemd seems to see itself as evolving the Unix design, whereas macOS tends to treat Unix like some legacy thing. It's no surprise that systemd would try to improve on Unix with things like "/bin points to /usr/bin" while macOS would leave the Unix bits as-is.
[1] https://lwn.net/Articles/890463/ [2] https://blog.verbum.org/2024/10/22/why-bootc-doesnt-require-... [3] https://eclecticlight.co/2023/07/22/how-macos-depends-on-fir...
Prior to the group who started an update effort, it had not been touched in about a decade. That’s not slow-moving: that’s abandoned.
Developers have this thing where they will think of a standard as a specification. Instead it is a statement of political will. Saying that a standard is "abandoned" due to lack of "maintenance" seems like an example of thinking of a standard as the instantiation of a specification; an actual program.
What's the timeline for software?
Laws remain in force until they are formally:
* Repealed (abolished) by the relevant legislative body (Parliament, Congress, etc.).
* Struck down by a court as unconstitutional or otherwise invalid.
A 150 year "delete" timer would genuinely undermine the foundation of the legal system. Lawyers, judges, and businesses rely on the continuity of core laws (e.g., contract, property, and tax law). If a 150-year-old property law suddenly lapsed, it could instantly void millions of land titles and commercial contracts...
In addition, laws are typically regularly amended to handle new societal developments, to clarify wording, or to fit better with other laws or changes in attitudes. A law that has gone 150 years without being amended at all is probably a law that falls into the categories above and is obsolete.
Of course, all this is getting somewhat off-topic, but the point is that laws absolutely can become outdated and unmaintained, either deliberately or by happenstance. And the inverse is also true: most laws that people deal with regularly are kept up-to-date to ensure that they still reflect the needs and wills of the society they're being used in.
Meanwhile some laws that are months old are ignored by law enforcement because nothing forces them to read them. It's that effect which is why so many old laws are ignored rather than formally repealed. When nobody is riding a horse, nobody cares how you need to tie one up when visiting a store, etc.
True, but it's been updated a lot more recently than that.
The last update was still much longer ago than 10 years, of course. The most recently ratified amendment to the Constitution - the Twenty-Seventh Amendment, ratified 1992 - was, incredibly enough, proposed in 1789 along with the ten we know as the Bill of Rights and another one which was never ratified. And of the twenty-seven amendments ratified so far, the one most recently proposed by Congress, the Twenty-Sixth Amendment, was both proposed and ratified in 1971.
Somehow has an impact on anything else? Because by that standard every change to any law updates all existing laws that were not changed. Or I’m just completely misunderstanding your point here.
law on its own can mandate the use of a specific standard, but a standard on its own is no law.
so much so that often doing non-standard stuff is the most successful route. Dumb example: Apple and all of its proprietary, non-standard stuff.
> What ongoing maintenance would a file system standard require? A successful standard of that type would have to remain static unless there was a serious issue to address. Regular changes are what the standard was intended to combat in the first place.
It's 2025, anything that wants to be considered modern (and everything should want that), needs to be undergoing constant change and delivering regular "improvements."
>>...though there is a slow-moving effort to revive and revise the standard as FHS 4.0, it has not yet produced any results.
> So it is not abandoned then. A slow moving process is exactly what you would want for the maintenance of a file system standard.
The FHS people need to get off their butts. There's no excuse for that pace now that we have such well-developed AI assistants. They should be pushing quarterly updates at a minimum, and a breaking change at least every year or two. It's been obvious for decades that "etc" is in urgent need of renaming to "config", "home" to "user", and "usr" to "Program Files" to keep up with modern UX trends.
Anyway, Linux community as a whole has an antiquated development process, and needs to modernize and follow the best practices of an industry-leading trend-setter, like MS Teams.
It's like ASLR for files but no maps because maps aren't for trailblazers, they make the maps! It's very cutting edge and a value-add!
(Obligatory /s)
> Debian Policy still cites the FHS, and FHS has remained static for over a decade.
adaptation to _a lot_ of subtle changes in requirements:
- very different security related requirements today
- very different performance related requirements/characteristics
- very different need for various edge cases
and lastly, adaptation based on what turned out to work well and what didn't.
so some examples not already mentioned in the article
- /boot -- dead or at least differently used if you use efistub booting
- /etc/X11 -- half dead on wayland
- /etc/xml, /etc/sgml -- dead, should IMHO never have existed
- also, why were /etc/{X11,xml,sgml} ever an explicit part of the standard, when the spec for `/etc` already implies them as long as e.g. X11 is used?
- `/media` -- dead/half dead depending on distro, replaced by `/run/media/{username}/{mount}`
- `/sbin` -- "controversial"; frequent recurring discussions that it isn't needed anymore, didn't work out as intended etc. It was useful for very old-style thin clients, as `/sbin` was in local storage but `/bin` was mounted. And there are still some edge cases where it can make sense today, but most fall under "workaround for a different kind of problem which is better fixed properly".
- `/tmp` -- "controversial", long history of security issues; a `/tmp` dir per program fixes the security issues (e.g. the systemd service PrivateTmp option) but requires having a concept of "programs" instead of just "running processes" (e.g. via systemd services or flatpak programs). Also `tmpfiles.d` can help here.
- `/usr/libexec` -- dead, nice idea but introduces unneeded complexity and can be very misleading in combination swith suid and similar
- `/usr/sbin` see `/sbin`
- `/usr/share/{color,dict,man,misc,ppd,sgml,xml}` -- should never have been in the standard; they are implied by the definition of `/usr/share`. At least sgml and xml are dead. dict was for spell check/auto-completion, except that neither works like dict expects anymore
- `/var/account` -- too specific to some subset of partially dead programs; shouldn't be in the standard
- `/var/crash` -- distro specific mess
- `/var/games` -- basically dead/a security mess. I mean, 99% of games today are installed per-user (e.g. Steam), and even for those which are packaged, any variable/downloaded data is per-user; making it shared creates a permission/security mess
- `/var/lock` -- as mentioned, there are better technical solutions by now, e.g. using `flock` instead of "presence of file", and some other techniques. These also avoid the issue of crashed programs not cleaning up their lock files, leading to deadlocks and needing manual intervention.
- `/var/mail` -- assumes a quite outdated form of managing mail which is very specific to the mail program; as it's so program-specific, it IMHO shouldn't be in the standard
- various legacy, program-specific, non-generic file system requirements, e.g. that `/usr/lib/sendmail` must exist and be a link to a sendmail-compatible program, and similar.
also missing parts:
- `/run/user/{uid}`
- `/var/run/user/{uid}`
- `/proc`
- `/sys`
- user-side versions (e.g. from the XDG spec, which is also somewhat in a zombie state in my personal experience with it; e.g. `.config`, `.local/{bin,share}`)
- references to lightweight sandboxing, e.g. per-program `/tmp`, etc.
- factory-reset stuff (`/usr/share/factory`), needed for a uniform way to handle devices sold with Linux and device-specific distro customization (e.g. Steam Deck)
so yes, it's quite outdated
Definitely not dead, the XDG portals and Polkit agents live here.
Letting upstream systemd single-handedly define what directories exist with what modes in your distro has never been the intended modus operandi.
Debian has a huge selection of packages available for it and clearly is going to have more headaches when it comes to preserving compatibility with all that software.
This is a trivial matter for Debian to handle appropriately, while systemd stays focused on its current priorities. I'm surprised this is being talked about at all outside the appropriate mailing lists... slow week for linux news?
I don't think that's an accurate paraphrase of "Consider this more a passing of the baton from upstream systemd to downstreams: if your distro wants this kind of legacy interface, then just add this via a distro-specific tmpfiles drop-in. But there's no point really in forcing anyone who has a more forward-looking view of the world to still carry that dir."
That kind of drop-in is pretty routine, so I don't know why this became a big thing we're all discussing now.
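For reference, such a drop-in is essentially a one-liner; a hypothetical sketch (the file name is made up, and the exact modes should be checked against tmpfiles.d(5) and distro policy):

```
# /etc/tmpfiles.d/legacy-lock.conf -- hypothetical distro drop-in
# recreating the legacy lock directory at boot
d /run/lock 0755 root root -
```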
A software developer's primary job is to develop software for their users, not to comply with a third party distributor that repackages their software.
Really, the whole raison d'être of Debian is to move at this pace to prioritize stability/compatibility. If you don't like that philosophy there are other distros, but a package maintainer's primary job is to repackage software for that distro (which presumably users have chosen for a reason), not to comply with upstream.
There is support for quotas in tmpfs! /me runs and hides under desk to avoid fruit being thrown at me
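Not entirely a joke: tmpfs size limits are just mount options. An illustrative fstab entry (path and size are made up):

```
# cap this per-user tmpfs at 64 MiB (illustrative entry)
tmpfs  /run/user/1000  tmpfs  size=64M,mode=0700  0  0
```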
Which includes services like lvm2, dmraid and audio drivers.
So you need at least a different /var/lock for "root" and everyone else.
So you can now either fix all the root tooling to use a different lock folder, which would break a lot of things.
Or you break a very small number of very old tools which (mostly) decided neither to use flock (from 1996) nor to follow the XDG spec (XDG_RUNTIME_DIR) from 2003.
So it's a kinda obvious choice which way to go ;), it just should be coordinated better and communicated better.
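The XDG-spec alternative is just a per-user directory lookup; a minimal sketch (the fallback path and program name are made up for illustration):

```python
import os

# Per-user lock location per the XDG Base Directory spec, instead of the
# world-writable /var/lock. Fallback dir and program name are illustrative.
runtime_dir = os.environ.get("XDG_RUNTIME_DIR") or f"/tmp/run-{os.getuid()}"
os.makedirs(runtime_dir, mode=0o700, exist_ok=True)
lock_path = os.path.join(runtime_dir, "myprog.lock")  # hypothetical name
print(lock_path)
```

Since the directory is owned by the user with mode 0700, the symlink/squatting attacks possible in a shared, world-writable /var/lock simply don't apply.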
And yes, you want quotas on the tmpfs backing XDG_RUNTIME_DIR, without question, at least for a system more robust against misbehaving programs.
To be clear, a lot of these security concerns were irrelevant/ignored under threat models from the 2000s. But times (sadly) have changed, and you shouldn't just blindly trust user-space programs anymore. Even if we ignore malicious programs, just think about all the bugs AI will sneak in. And honestly, I'm worried that desktop Linux will fail to adapt to these changes. Server Linux is clearly adapting, but mostly in the non-GNU Linux ecosystem, while the more FSF/GNU parts of Linux seem mostly stuck in a past which doesn't look like it will have a future :/. This sucks without question, but I just don't see a future without a lot of supply-chain attacks and AI-coding-induced crazy misbehavior of user-space programs.
PS: Even if you just want a Steam Deck with the same degree of robustness as a console (i.e. a game going rogue will have a hard time fully hanging the console no matter what it does, i.e. you can always press the menu button and close/kill it), you need these kinds of subtle changes. And many others.
I know at least the first two have "ignore the lock" command line flags so you can get out of situations like /run being full. So whether it's a DOS depends on how you define DOS :)
> it just should be coordinated better and communicated better
I mean, that's the thing. I don't disagree it should be fixed. But is it really an important enough problem it justifies breaking users now? Not for me...
raverbashing•13h ago
> He said that he uses cu "almost constantly for interacting with embedded serial consoles on devices a USB connection away from my laptop"
Whyyyyyyyyyyyyyyy
There are a million better ways of doing this.
munchlax•12h ago
I don't see the problem. Minicom and even picocom are bloated compared to cu
Hackbraten•12h ago
> create a lock file for every dial-in line to prevent its use by programs looking for a dial-out line.
[0]: https://lwn.net/Articles/1042594/