"Yes, it is written systemd, not system D or System D, or even SystemD. And it isn't system d either. [...]"
[TIL - it's not even as old as me!] https://australianfoodtimeline.com.au/1978-launch-of-big-m/
By the way, I love that the OP used the Text Fragment (Scroll-to-Text Fragment) feature. I hope it is going to catch on more, quite helpful / useful.
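For anyone unfamiliar: a text fragment is just a suffix on an ordinary URL, and the browser scrolls to and highlights the first match of the quoted text. A made-up example:

```
https://example.com/article.html#:~:text=the%20exact%20phrase
```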
Anyway, why someone would use the spelling with an upper-case D beats me. It's not proper English.
Entirely different thing. Software/things are not the same as people.
> why would someone use the spelling with upper D
SystemV, Zyklon B, Vampire hunter D, Plan B, Model T, Type O, ... It's extremely common.
Yesterday's, just in case: https://us.jlcarveth.dev/post/hardening-systemd.md https://news.ycombinator.com/item?id=44928504
I started my Linux journey so late I can't imagine living without systemd, the few systems I've encountered without systemd are such a major PITA to use.
I recently discovered "unshare" which I could use to remount entire /nix RW for some hardlinking shenanigans without affecting other processes.
systemd is so good, warty UX when interacting with it but the alternative is honestly Windows in my case.
This feeling is particularly striking for me, because I once worked on a Linux project with the aim of improving packaging and software distribution. We also got a lot of hate, mainly for not being .deb or .rpm, and it looked to me as if the hate was a large reason for the failure of the project.
I remember an old Debian machine with /etc/init.d/something [start|stop|reload|restart] but I can't recall being able to automatically restart services or monitor status easily. (I didn't speak $shell well back then either)
/etc/init.d/whatever were all shell scripts, and they all had to implement all the features themselves. So `/etc/init.d/foo restart` wasn't guaranteed to work for every script, because "restart" is something that each script handled individually. And maybe this one just didn't bother implementing it.
There's no good status monitoring in sysV because it's all convenience wrappers, not a coherent system.
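To make that concrete, here's a sketch of what those scripts looked like (hypothetical daemon "foo"; the paths are invented). Every action only works if the author bothered to write it:

```shell
#!/bin/sh
# Sketch of a classic /etc/init.d script: every action is hand-rolled,
# so "restart" only exists if the author wired it up. "foo" is hypothetical.
PIDFILE=/tmp/foo-example.pid

start() {
    echo "starting foo"
    # a real script would fork the daemon and write its PID to $PIDFILE
}

stop() {
    echo "stopping foo"
    # a real script would kill the PID from $PIDFILE and remove the file
}

status() {
    if [ -f "$PIDFILE" ]; then
        echo "foo is running"
    else
        echo "foo is stopped"
    fi
}

case "$1" in
    start)   start ;;
    stop)    stop ;;
    status)  status ;;
    restart) stop; start ;;  # omit this case and `foo restart` silently breaks
    *) echo "Usage: $0 {start|stop|status|restart}" ;;
esac
```

Note that "status" is whatever the author felt like printing, which is why status reporting was never uniform across services.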
Thanks for the sysV explanation, it sounds worse to me.
A minimal SystemD service file and a minimal OpenRC service file are equally complex.
Here's the OpenRC service file for ntpd (which is not a minimal service file, but is pretty close):
#!/sbin/openrc-run
description="ntpd - the network time protocol daemon"
pidfile="/var/run/ntpd.pid"
command="/usr/sbin/ntpd"
command_args="${NTPD_OPTS}"
command_args_background="-p ${pidfile}"
command_args_foreground="-n"
depend() {
    use net dns logger
    after ntp-client
}

start_pre() {
    if [ ! -f /etc/ntp.conf ] ; then
        eerror "Please create /etc/ntp.conf"
        return 1
    fi
    return 0
}
'depend' handles service dependency declaration and start/stop ordering (obviously). 'start_pre' is a sanity check that could be removed, or reduced to calling an external script (just like -IIRC- systemd forces you to do). There are _pre and _post hooks for both start and stop.
For a service that has no dependencies on other services, backgrounds itself, and creates a pidfile automatically, the smallest OpenRC service file is four non-blank lines: the '#!/sbin/openrc-run' shebang followed by lines declaring 'pidfile', 'command', and 'command_args'. A program that runs only in the foreground adds one more line, which tells OpenRC to handle daemonizing the thing and writing its pidfile: 'command_background="true"'. See [3] for an example of one such service file.
If you want service supervision, it's as simple as adding 'supervisor=supervise-daemon', and ensuring that your program starts in the foreground. If it doesn't foreground itself automatically, then adding 'command_args_foreground=<Program Foregrounding Args>' will do the trick.
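Spelled out, that minimal file would be (the daemon name, paths, and the '-n' foreground flag are all hypothetical):

```
#!/sbin/openrc-run
# hypothetical daemon that backgrounds itself and writes its own pidfile
pidfile="/var/run/food.pid"
command="/usr/sbin/food"
command_args="${FOOD_OPTS}"
```

and the supervised variant just adds:

```
supervisor=supervise-daemon
command_args_foreground="-n"
```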
If you're interested in more information about OpenRC service file syntax, check out the guide for them at [0], and for a lot more information, the manual for openrc-run at [1]. For supervision, check out the supervision guide at [2].
[0] <https://github.com/OpenRC/openrc/blob/master/service-script-...>
[1] <https://man.uex.se/8/openrc-run>
[2] <https://github.com/OpenRC/openrc/blob/master/supervise-daemo...>
[3] The OpenRC service file for the 'cups-browsed' service (which is a program that does not daemonize itself) is this:
#!/sbin/openrc-run
pidfile="/run/cups-browsed.pid"
command="/usr/sbin/cups-browsed"
command_background="true"
depend() {
    need cupsd avahi-daemon
}

On the other hand, no sir, I still don't like it. It looks very much like Bash. I'm not very fond of Bash to start with, and it might not even be actual Bash? Can't tell from the manpage.
But scrolling down to the bottom of the manpage I see a pretty long sample script, and that's exactly what I want to see completely gone. I don't want to look at a 3-way merge of a service during an upgrade ever again and try to figure out what all that jank is doing. IMO if any of that shell scripting has any reason to be in a service file, it's a bug to be fixed.
My ideal is the simple systemd services: description, dependencies, one command to start, done. No jank with cleaning up temp files, or signals, or pid files (can they please die already), or any of that.
And one of the nice things about systemd services not being a script is that overrides are straightforward and there's never any diffs involved.
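A sketch of that ideal, as a hypothetical unit (names and paths invented):

```ini
# /etc/systemd/system/myapp.service -- hypothetical
[Unit]
Description=My application
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/myapp --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The admin's tweaks live in a drop-in (e.g. created with `systemctl edit myapp`), so a package upgrade never produces a merge conflict:

```ini
# /etc/systemd/system/myapp.service.d/override.conf
[Service]
Environment=MYAPP_LOG_LEVEL=debug
```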
> ...
> But scrolling down to the bottom of the manpage I see a pretty long sample script, and that's exactly what I want to see completely gone.
First: I know that you read my comment. The two OpenRC service files that I embedded in it are typical service files. The one in the man page serves as a feature demo, as is appropriate for a man page. [0]
Second: necessary complexity has to live somewhere. Complex OpenRC service files are complex nearly always because the work that their author is doing needs to get done. As folks observe, SystemD service files often are used to invoke a bunch of scripts. As I have demonstrated, you can choose to write OpenRC service files in exactly the same style as you'd use to write a SystemD service file. [1] But if you choose to write OpenRC service files in only that style, and if you need to do complex things as part of service lifecycle management, then you'll need to do what SystemD forces you to do and push that logic out to separate scripts/programs.
Necessary complexity has to live somewhere.
> I don't want to look at a 3-way way merge of a service during an upgrade ever again and try and figure out what's all that jank doing.
I've been using Gentoo Linux for twenty-three years. In the handful of times I've had to examine a service file update, the changes to it have always been straightforward.
> ...it might not even be actual Bash? Can't tell from the manpage.
It's sh, handled by openrc-run. I'll quote from the service-script-guide that I linked to previously. See [2]:
Service scripts are shell scripts. OpenRC aims at using only the standardized
POSIX sh subset for portability reasons. The default interpreter (build-time
toggle) is /bin/sh, so using for example mksh is not a problem.
OpenRC has been tested with busybox's sh, ash, dash, bash, mksh, zsh and
possibly others. Using busybox's sh has been difficult as it replaces
commands with builtins that don't offer the expected features.
The interpreter for service scripts is #!/sbin/openrc-run. Not using this
interpreter will break the use of dependencies and is not supported.
[0] You noticed that the commands added to the service file are named "eat" and "drink"? If nothing else indicated to you that this was a feature demo, that should have.

[1] It's a lot less work to write them in this "key, value" style, and it's obviously good to do so whenever reasonably possible.
[2] <https://github.com/OpenRC/openrc/blob/master/service-script-...>
You might quite reasonably ask "What happens when supervise-daemon crashes or is OOM killed?". I would reply "I'd imagine the same thing that happens when 'systemd' crashes or is OOM killed, except with a far smaller circle of devastation.". I'd also expect supervise-daemon to be no less reliable than 'systemd'.
[0] I would also expect most service supervision systems to solve this problem. OpenRC has supported using s6 as a supervisor for a long time, and OpenRC's supervise-daemon is relatively new, seeing as how it was introduced in 2016. My comments are about supervise-daemon because that's the only one that I've bothered to use.
Yep. And it has a ton of accidental complexity in it. [0] At $DAYJOB, we ran into a production-down incident related to inscrutable SystemD failures about once a year. It was always the case that the documentation indicated that our configuration and usage was A-OK. If there ever was a bug report filed, it was always the case that the SystemD maintainers either said words to the effect of "Despite the fact that the docs say that should work, that's an unsupported use case." or "Wow. Weird. Yeah, I guess that behavior is wrong, and it's true that the docs don't warn you about that.", and then went on to do nothing.
SystemD is -IME- like (again, IME) PulseAudio and NetworkManager... it's really great until you hit a show-stopping bug, and then you're just turbofucked because the folks who built and maintain it want to treat it like it's a black box that works perfectly.
[0] NOTE: I am absolutely not opposed to complex things. I'm opposed to needlessly complex things, and very much opposed to things whose accidental complexity causes production issues, where the system's maintainers' reply to the bug report and minimal repro is "Wow, that's weird. I don't want to fix that. Maybe we should document that that doesn't work.", after which they go on to do absolutely nothing.
There are two ways to design a system: so simple that it has obviously no bugs, and so complex that it has no obvious bugs.
Most of the complainers weren't really relevant. They weren't making the decisions on what goes in a distro, and an init system is overall a fairly minor component most users don't use all that often anyway.
> This feeling is particularly striking for me, because I once worked on a Linux project with the aim of improving packaging and software distribution. We also got a lot of hate, mainly for not being .deb or .rpm, and it looked to me as if the hate was a large reason for the failure of the project.
I think that's a good deal trickier because packaging is something a Linux user does get involved with quite often, and packaging systems don't mix well. A RPM based distro with some additional packager grafted on top is a recipe for disaster.
Still, I think it's also a case of the same thing: sell it to the right people. Find people making new distros suffering problems with DEB and RPM and tell them your tool can save them a lot of pain. The users can come in later.
To quote one of my favorite Clone Wars episodes: Fifty tried, fifty died [1].
There have been so, so many attempts at solving the "how to ship binary builds for Linux" question... both deb and rpm have their good and their bad, and on top of that you got `alien`, flatpak, Docker images, the sledgehammer aka shipping everything as a fully static binary (e.g. UT2004 did this) or outright banning prebuilt binaries (the Gentoo and buildroot way). But that's not the actual problem that needs solving.
The actual problem is dependency hell. You might be lucky to be able to transplant a Debian deb into an Ubuntu installation and vice versa, or a SLES rpm to RHEL, but only if the host-side shared libraries that the package depends on are compatible enough on a binary level with what the package expects.
That suddenly drives up the complexity requirements for shipping software even for a single Linux distribution massively. In contrast to Windows, where Microsoft still invests significant financial resources into API-side backwards compatibility, this is not a thing in any Linux distribution. Even if you're focusing just on Debian and Ubuntu, you have to compile your software at least four different times (one each for Debian Stable, Debian Testing, Ubuntu <current rolling release> and Ubuntu <current LTS>), simply because of different versions of dependencies. Oh and in the worst case you might need different codepaths to account for API changes between these different dependency versions.
And even if you had some sort of DSL that generated the respective package manager control files to build packages for the most common combinations of package manager, distributions and actively supported releases of these, there's so, so much work involved in setting up and maintaining the repositories. Add in actually submitting your packages to upstream (which is only possible for reasonably-ish open source packages in the first place), and the process becomes even more of a nightmare.
And that's all before digging into the topics of autotools, vendoring (hello nodejs/php/python ecosystems), digital signature keyrings, desktop manager ecosystems and god knows what else. Oh, and distribution bureaucracy is even more of a nightmare... because you now have to deal with quirks in other people's software too, and in the worst case with a time span of many years of your own releases plus the distribution release cadence!
Shipping software that's not fully OSS on Linux sucks, shipping closed source software for Linux sucks even more. Windows has had that sort of developer experience figured out from day one. Even if you didn't want to pirate or pay up for InstallShield, it was and is trivial to just write an executable, compile it and it will run everywhere.
[1] https://starwars.fandom.com/wiki/Mystery_of_a_Thousand_Moons
I do think packaging can be improved. I hate almost everything about how dpkg works, it's amazing. So I'm squarely in the RPM camp because I find the tooling a lot more tolerable, but still surely further improvements can be made.
Want to see Linux on the desktop actually happen? Then allow a hassle-free way to ship commercial software that is not "pray that WINE works well enough", aka using win32 as an ABI layer.
Of course we can stay on our high horses and demand that everything be open source and that life for closed source developers be made as difficult as possible (the Linux kernel is particularly and egregiously bad in that perspective), but then we don't get to whine about why Linux on the desktop hasn't fucking happened yet.
The whole point of my comment was to keep in mind the incentives of different sub-groups. If “Linux on the desktop” doesn’t benefit the people that make Linux work, I don’t see what the big deal is.
Getting Linux adopted in F500 companies as the default desktop OS. That is the actual litmus test, because (large) companies need an OS that can be centrally managed with ease, doesn't generate a flood of DPU (Dumbest Possible User) support demand and can run the proprietary software that's vital to the company's needs in addition to the various spyware required by cybersecurity insurances and auditors these days.
At the moment, Linux just Is Not There. Windows has GPOs and AD (which, in addition, tie into Office 365 perfectly fine); Mac has JAMF and a few other MDM solutions. A lot of corporate software doesn't even run properly under WINE (not surprising, since the focus of Proton and, by extension, WINE is gaming), there's a myriad of ways of doing central management, and good luck trying to re-educate employees who have been at the company so long they grew roots into their chairs.
It sort of feels like we’re talking past each-other. I’ve been trying to point out that, due to the community nature of these open source projects, development tends to follow the interests of the people who tend to contribute open source code to the projects. You’ve listed a number of challenges or thresholds that you think are important. However, after reading your comments, I can’t articulate who those thresholds are important to or why they are worth following. I don’t need another litmus test, I need some reason to care about testing.
The idea of “Linux on the desktop” was a popular meme for a while, but I think it is a short-hand expression for a collection of things, some of which were achieved a decade ago, some of which weren’t, where there’s a strong correlation between “things that were accomplished” and “things that open source community contributors cared about,” and the remainder… were ignored because nobody wanted to do them.
You, uh, haven't used Gentoo in like twenty years, have you? You've been able to host your own prebuilt binaries (or use the prebuilts of others who bothered sharing them) for as long as I can remember (FWIW, I started using Gentoo in 2002 or 2004). The Gentoo folks decided to set up official binary package servers at the end of 2023 (look at the Dec 29, 2023 news item on the Gentoo home page for more info).
It can be argued that it didn't solve very many problems and added a huge amount of complexity.
systemd isn't opaque, it's open-source. systemd is objectively less opaque than init scripts, because it's very well documented. Init scripts are not.
Sure, you can read them. But then you'd realize that glued together init scripts just re-implement systemd but buggier and slower, at which point you might as well just read the systemd source. Or, better yet, the documentation.
systemd ALSO does not constantly change. The init system has been virtually untouched in a decade, save for bug fixes and a few new features. Your unit files will "just work", across many years and many distros. Yes, systemd is more portable than init scripts.
systemd ALSO does not have any scope creep. Here, people get confused between systemd-init, and systemd the project.
systemd-init is just an init system. Nothing more, nothing less, for a long time now, and forever. There is no scope creep, the unix principle is safe, yadda yadda yadda.
systemd coincidentally is also the name of a project which includes many binaries. All of those binaries are optional. They aren't statically linked, they're not even dynamically linked - they communicate over IPC like every other program on your computer.
systemd is also not complex, it's remarkably simple. systemd unit files are flat, declarative config files. Bash is a turing-complete scripting language with more footguns than C++. Which sounds more complex?
Sure, I'll bite. It'll be more interesting than watching some stupid Twitch streamer.
Gentoo Linux was using OpenRC back in 2002. Looking at the copyright notice in relevant source files, it looks like OpenRC is a project that has been under development since 1999, so I'd expect it was in use back in 1999. However, I will use 2002 as the start date for this discussion because that's when I started using it.
The simple OpenRC service file I mention in footnote 3 in [0] is syntactically identical to the syslogd service file added in to the OpenRC repo back in this commit in late 2007 [1]. The commit that appears to add support for 'command_args' and friends is earlier that day.
So, four years before SystemD's experimental release, the minimal OpenRC service file (that I talk about in [0]) was no more complicated than what would become the minimal SystemD service file no less than four years later. What's more, the more-verbose syntax for service files written in 2002 was supported by 2007's OpenRC, and continues to be supported by 2025's OpenRC.
23 years is quite a bit longer than 15.
> systemd is also not complex, it's remarkably simple. systemd unit files are flat, declarative config files.
See above (and below).
> Here, people get confused between systemd-init, and systemd the project.
In that case, it doesn't do you credit to use "systemd-init" and "systemd" interchangeably in your commentary. SystemD absolutely has scope creep. systemd-init... well, I think I remember when it wasn't possible to have it re-execute itself for a no-reboot upgrade of PID 1. And does it still have a dependency on DBus, or did they see sense and get rid of that?
[0] <https://news.ycombinator.com/item?id=44945789>
[1] <https://github.com/OpenRC/openrc/commit/3ec2cc50261f37b76e0e...>
We call that the unix principle, lol.
Saying systemd has scope creep is like saying GNU has scope creep because they have a compiler and a text editor. Makes no fucking sense.
I also don't consider a dependency on dbus "scope creep". It has to communicate over IPC - okay, don't reinvent the wheel, just use dbus. Every program ever supports dbus if it has a public API over IPC. Sorry if that bothers you.
And sure, maybe OpenRC is just as simple as systemd, but the reality is every distro chose systemd and that's that, and for MOST of them they switched from primarily scripts to unit files.
That is a HUGE reduction in complexity. HUGE.
I suppose you don't understand (or are pretending not to understand) what "scope creep" means. Oh well.
> I also don't consider a dependency on dbus "scope creep".
I also don't consider a dependency on DBus scope creep. I consider making PID 1 crash whenever DBus needs to restart because of an upgrade/security fix/etc. in DBus or one of the libraries it links against to be a fantastically poor design decision.
> And sure, maybe OpenRC is just as simple as systemd...
It's far simpler. It concerns itself with bringing up and supervising services. It doesn't contain a DNS resolver, a (subtly buggy) Cron, a syslog daemon, and the many other things SystemD has decided to (whether correctly or incorrectly) reimplement.
It's made Linux more Mac/Windows-like. When it works, it works very well... but when it breaks... good luck figuring out anything.
I guess that's OK for a "desktop" but for a server it's a huge pain in the butt.
This matched my experience: there were a few vocal haters who were very loud but tended not to be professional sysadmins or shipping binaries to other people, and they didn’t have a realistic alternative. If you distributed or managed software, you had a single, robust solution for keeping daemons running with service accounts, restarts, dependencies, etc. for Windows NT circa 1993 and macOS in 2005 so Linux not having something comparable was just this ongoing source of paper cuts which caused some Linux shops to have unexpected, highly visible downtime (e.g. multiple times I saw data center outages where all of the Windows stuff and the properly configured Upstart/SystemD stuff come up after retrying but high-profile apps using SysV init stayed down for hours because the admins had to clean it up by hand).
Anyone who packaged software was also happy to stop supporting different combinations of buggy shell scripts and utilities, too – every RPM I built went from hundreds of lines of .sh to a couple dozen lines of better systemd. systemd certainly isn’t perfect but if you had an actual job to do you were going to look at systemd as the best path to reduce that overhead.
A broad view across the BSD ecosystem suggests that this wasn't a good way. I still want to see a good alternative from that point of view…
There's a large cohort of Linux users whose entire personality is "I'M A CoNtRaRiAn!" and who argued against systemd because Red Hat was pushing it. Reddit was filled with such a minority of loud anti-systemd trolls. When pushed for reasons for their disdain, they'd give you nonsensical or baseless replies. The best ones were known bugs that had been closed for months.
Just one example of a systemd limitation: systemd does not support musl, so if you want to build a tiny embedded sysroot, you already have some constraints.
More information at https://news.ycombinator.com/item?id=2565780
1. Agglutination of shell scripts
2. "Oh wow, this is getting annoying"-phase: Wrapper for scripts (SRC SMC openrc etc pp)
3. Service supervision daemons (SMF, launchd, systemd)
> Why didn't you just add this to Upstart, why did you invent something new?
> Well, the point of the part about Upstart above was to show that the core design of Upstart is flawed, in our opinion. Starting completely from scratch suggests itself if the existing solution appears flawed in its core. However, note that we took a lot of inspiration from Upstart's code-base otherwise.
> If you love Apple launchd so much, why not adopt that?
> launchd is a great invention, but I am not convinced that it would fit well into Linux, nor that it is suitable for a system like Linux with its immense scalability and flexibility to numerous purposes and uses.
launchd is horrible though, the folks complaining about systemd would be up in arms if they had to write poorly typed XML key/value files
I'd also add that there are some non-trivial requirements for good server daemon programs: fork, detach from the terminal, maybe fork again, set the umask, chdir, maybe close some descriptors, maintain a PID file, log to syslog, drop privileges, and so on. A lot of those things are implemented in systemd, so you can basically write a very dumb server which will work properly under systemd. Otherwise, some part of systemd has to be reimplemented in every server daemon program.
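To illustrate, most of that daemon boilerplate becomes declarative lines in a unit (a hypothetical sketch, not any particular service):

```ini
[Service]
# run in the foreground; systemd does the daemonizing and tracks the PID
Type=simple
ExecStart=/usr/sbin/mydaemon --no-fork
# privilege dropping, umask, and working directory, no C code required
User=mydaemon
Group=mydaemon
UMask=0027
WorkingDirectory=/var/lib/mydaemon
# stdout/stderr are captured by the journal, so no syslog plumbing either
StandardOutput=journal
StandardError=journal
```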
OpenEmbedded has carried a patchset to build systemd against musl for use in Yocto for a long time.
postmarketOS already got approval from Poettering to make a musl-linked systemd more officially supported.
https://madaidans-insecurities.github.io/guides/linux-harden...
https://discuss.privacyguides.net/t/add-gentoo-linux-void-li...
https://github.com/gentoo/hardened-refpolicy
https://krython.com/post/hardening-alpine-linux-system-secur...
/s
# ProtectSystem=
you can do TemporaryFileSystem=/:ro
BindReadOnlyPaths=/usr/bin/binary /lib /lib64 /usr/lib /usr/lib64 <paths you want to read>
And essentially just including the binary and the paths you want available. ProtectSystem= is currently not compatible with this behavior. EDIT: More info here: https://github.com/systemd/systemd/issues/33688
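Assembled as a unit fragment, the allow-list approach described above (the binary path is hypothetical) would look something like:

```ini
[Service]
# start from an empty, read-only root filesystem...
TemporaryFileSystem=/:ro
# ...then bind in only the binary and libraries the service needs
BindReadOnlyPaths=/usr/bin/mybinary /lib /lib64 /usr/lib /usr/lib64
# note: ProtectSystem= does not combine with this (see the linked issue)
```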
this will not take off I'm afraid, because locking these unit files down is offloaded to the end-user (I've yet to see maintainers embrace shipping locked-down files). Maybe they will? But this same approach hasn't worked with apparmor, so why should it work with systemd? Who will do the job?
Consider that apparmor maintainers in many cases provide skeleton templates that just make the parser stop complaining. ("look, I have a profile so apparmor shuts up, but don't take too close a look, OK")
Then there is firejail, which some argue[2] is snake-oil considering the high level of administrative glue compared to its massive attack-surface (also it's a setuid binary).
I didn't mention SElinux since I don't know a single person who had the joy (or pain depending on perspective) of working with it. But again, seems the expectation to implement security with it is shifted to the user.
https://fedoraproject.org/wiki/Changes/SystemdSecurityHarden...
and really, these should be written by the developers, not distro maintainers
poking around on my Ubuntu machine, a few daemons have some hardening, chronyd looks pretty good
Maybe your point is that this isn't done by the vendor in practice. And I'm sure there's room for lots of improvement. However, one of the great things about how systemd units can be provided by the vendor and seamlessly tweaked by the administrator is that the vendor (i.e. packager and/or distro) can set these up easily.
There definitely are packages that ship with locked-down files. Tor and powerdns (pdns) are two off the top of my head.
→ Overall exposure level for pdns.service: 1.9 OK
→ Overall exposure level for tor.service: 7.1 MEDIUM

It would be great to see it implemented, but for now, at least on Debian/sid, the situation is as follows:
UNIT EXPOSURE PREDICATE
ModemManager.service 6.3 MEDIUM
NetworkManager.service 7.8 EXPOSED
alsa-state.service 9.6 UNSAFE
anacron.service 9.6 UNSAFE
atop.service 9.6 UNSAFE
atopacct.service 9.6 UNSAFE
avahi-daemon.service 9.6 UNSAFE
blueman-mechanism.service 9.6 UNSAFE
bluetooth.service 6.0 MEDIUM
cron.service 9.6 UNSAFE
dbus.service 9.3 UNSAFE
dictd.service 9.6 UNSAFE
dm-event.service 9.5 UNSAFE
dnscrypt-proxy.service 8.1 EXPOSED
emergency.service 9.5 UNSAFE
exim4.service 6.9 MEDIUM
getty@tty1.service 9.6 UNSAFE
irqbalance.service 1.2 OK
lvm2-lvmpolld.service 9.5 UNSAFE
polkit.service 1.2 OK
rc-local.service 9.6 UNSAFE
rescue.service 9.5 UNSAFE
rtkit-daemon.service 7.2 MEDIUM
smartmontools.service 9.6 UNSAFE
systemd-ask-password-console.service 9.4 UNSAFE
systemd-ask-password-wall.service 9.4 UNSAFE
systemd-bsod.service 9.5 UNSAFE
systemd-hostnamed.service 1.7 OK
systemd-journald.service 4.9 OK
systemd-logind.service 2.8 OK
systemd-networkd.service 2.9 OK
systemd-timesyncd.service 2.1 OK
systemd-udevd.service 7.1 MEDIUM
tor@default.service 6.6 MEDIUM
udisks2.service 9.6 UNSAFE
upower.service 2.4 OK
user@1000.service 9.4 UNSAFE
wpa_supplicant.service 9.6 UNSAFE

Why would you say that? I would agree that the developer likely has better insight into what the software needs. But the security boundary exists at the interface of the application and the system, so I think that both application devs and system devs (i.e. distros) have something to contribute here.
And because systemd allows for composition of these settings, it doesn't have to be a one-or-the other situation--a distro can do some basic locking down (e.g. limiting SUID, DynamicUser, etc.), and then the application dev can do syscall filtering.
In any case, I agree that I'd like to see things get even more locked down. But it's worth remembering that, before systemd, there was basically no easy-to-use least-privilege stuff available beyond Unix users and filesystem permissions. The closest you had (afaik) was apparmor and selinux. In both of those cases, the distro basically had to do all the work to create the security policy.
Also, n.b., that pdns.service I noted is provided by PowerDNS themselves.
Normally the rule is that people mis-capitalizing the name are usually critical of the project.
It's systemd, not SystemD
When Vault is not available, if I’m working on a side project, for example, that’s what I always go for. Even wrote a small Go package[2] to get said credentials when your application is running inside a service with that feature.
dir := os.Getenv("CREDENTIALS_DIRECTORY")
cred, err := os.ReadFile(filepath.Join(dir, "name"))
That's less complexity than left-pad.

I thought the go culture was that dependencies are bad, and abstractions (i.e. any function calls) are confusing and bad, so it's better to just inline stuff like this and write it fresh each time you need it.
For my projects I can just include the dependency, as I wrote it and don’t mind using it. Other people can copy it instead, since the proverb goes “a little copying is better than a little dependency”.
That'll also let you avoid writing the `if err != nil { return fmt.Errorf("context string something: %w", err) }` boilerplate again and again too (since you can just write '.context("context")?' each time).
If you're using Go, you're not supposed to build abstractions, small packages, or any sorta clever or good code. And be really careful using generics.
If you want to write abstractions, you're supposed to use a different language. Those are the rules.
Really? I thought the point of the environment variable was it was the same, and the directory it pointed to differed depending on the service type.
I'd love a reference since at least for every systemd version I've used, you're wrong.
> Proper error handling and graceful fallback
That's application specific, so it can't really fit in a generic library well.
I must've had it confused with StateDirectory[0]. Thank you for pointing my mistake out. That does make the library a bit less useful.
[0]: https://www.freedesktop.org/software/systemd/man/latest/syst... Table 2
systemd-run -p StateDirectory=test -t sh -c 'echo $STATE_DIRECTORY'
works fine with both `--system` and `--user`, so seems like that's the same too.

The library the person linked is to deal with systemd credentials in go.
The two lines of code I wrote are the same, and in fact are effectively 100% of the code in the library.
Yes, it is written systemd, not system D or System D, or even SystemD. And it isn't system d either. Why? Because it's a system daemon, and under Unix/Linux those are in lower case, and get suffixed with a lower case d. And since systemd manages the system, it's called systemd.
As an insult, it was rather less successful than the "Micro$oft" / "Slowaris" / "HP-SUX" mockery from the 1990s - but it did manage to sow enough confusion that it still pops up regularly today, even in contexts that are otherwise neutral or positive about it.
I’ve been using it because having some random letter capitalized seems totally unsurprising for this sort of plumbing software. (And by plumbing, I mean: very useful and helpful boring stuff that deals with messy problems that I’m happy not to care about, just to be clear that I mean it positively, haha).
From packaging stuff for nixpkgs, a distro that often is without upstream support, it is usually very useful to look at how mainstream distros package services.
Those hardening steps also tend to be well tested even if sometimes a bit lax. If you want to find out how, e. G., postgresql can be hardened, consider looking at the Debian, Ubuntu and/or RHEL packages as a starting point.
well, the con is you might unknowingly break some setups. take NetworkManager: after tightening it down, did you check both IPv4 and IPv6 connectivity? did you check that both the `dns=systemd-resolved` and `dns=default` modes of operation (i.e. who manages /etc/resolv.conf) work? did you check its ModemManager integrations, that it can still manage cellular connections? did you check that the openvpn and cisco anyconnect plugins still work? what about the NetworkManager-dispatcher hooks?
> Why don't distros flip more of these switches?
besides the bit of "how many distro maintainers actually understand the thing they're maintaining well enough to know which switches can be flipped without breaking more than 0.01% of user setups", there's the bit of "should these flags be owned by the distro, or by the upstream package?" if the distro manages these, they'll get more regressions on the next upstream release. if the upstream sets these, they can't be as aggressive without breaking one or two of their downstreams.
Maintainers _could_ take the time to lock down sshd and limit the damage it can do if exploited, but there are costs associated with that:
1. Upfront development cost
2. Maintenance cost from handling bug reports (lots of edge cases for users)
3. Maintenance cost from keeping this aligned with upstream changes
You could extend this argument and say that distros shouldn't bother with _any_ security features, but part of the job of a distro maintainer is to strike a balance here, and similar to SELinux / AppArmor / whatever, most mainstream desktop distro maintainers probably don't think the juice is worth the squeeze.

We could also ask why nobody seems to use SELinux or AppArmor, or any other random security feature. Most distros have these things available but most developers and users are not familiar, don't truly need it, etc.
From https://news.ycombinator.com/item?id=29995566 :
> Which distro has the best out-of-the-box output for?:
systemd-analyze security
desbma/shh generates SyscallFilter and other systemd unit rules from straces, similar to how audit2allow generates SELinux policies by grepping for AVC denials in permissive mode (given kernel parameters `enforcing=0 selinux=1`). But should strace be installed in production?

desbma/shh: https://github.com/desbma/shh