* https://en.wikipedia.org/wiki/Jamie_Zawinski#Zawinski's_Law
:)
Make an `smtp.socket`, which calls `smtp.service`, which receives the mail and prints it on standard output, which goes to a custom journald namespace (thanks `LogNamespace=mail` in the unit) so you can read your mail with `journalctl --namespace=mail`.
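For anyone who wants to try it, the pair of units could look something like this (a sketch; `smtp-sink` is a hypothetical program that speaks just enough SMTP to accept a message and dump it to stdout):

```ini
# smtp.socket
[Unit]
Description=Incoming mail sink

[Socket]
ListenStream=25
Accept=yes

[Install]
WantedBy=sockets.target
```

```ini
# smtp@.service -- Accept=yes spawns one templated instance per
# connection, with the connection wired to stdin/stdout
[Unit]
Description=Print incoming mail to its own journal namespace

[Service]
ExecStart=/usr/local/bin/smtp-sink
StandardInput=socket
StandardOutput=journal
LogNamespace=mail
```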
All the services you forgot you were running for ten whole years will fail to launch someday soon.
Because last time I wrote systemd units it looked like a job.
Also, it's way overcomplex for anything but a multi-user, multi-service server. The kind you're paid to maintain.
Why wouldn't you want unit files instead of much larger init shell scripts which duplicate logic across every service?
It also enabled a ton of event-driven actions which laptops/desktops/embedded devices use.
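A path unit is the simplest example: by default it activates the service of the same name when the watched path changes (names here are made up):

```ini
# uploads-watch.path -- activates uploads-watch.service on changes
[Path]
PathChanged=/srv/uploads

[Install]
WantedBy=multi-user.target
```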
Indeed, that criticism makes no sense at all.
> It also enabled a ton of event-driven actions which laptops/desktops/embedded devices use.
Don't forget VMs. Even in server space, they use hotplug/hotunplug as much as traditional desktops.
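And the device side is pleasantly uniform: systemd generates .device units from udev, so a service can follow a hotplugged NIC with nothing but dependencies (hypothetical device and script names):

```ini
# BindsTo= stops this service when the device is unplugged; starting it
# on plug is done from the udev side, e.g. a rule setting
# ENV{SYSTEMD_WANTS}="configure-eth1.service" on the device
[Unit]
BindsTo=sys-subsystem-net-devices-eth1.device
After=sys-subsystem-net-devices-eth1.device

[Service]
Type=oneshot
ExecStart=/usr/local/bin/configure-eth1
```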
TIL. Didn't know I could get paid to maintain my PC because I have a background service that doesn't run as my admin user.
Fascinating. Last time I wrote a .service file I thought how much easier it was than a SysV init script.
However, it is not easy figuring out which of those scripts are actually SysV init scripts and which simply wrap systemd.
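For anyone who hasn't made the comparison: the whole unit can be this short, versus a hundred-odd lines of start/stop/status boilerplate in a typical init script (hypothetical daemon):

```ini
# foo.service
[Unit]
Description=Foo daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/food
Restart=on-failure

[Install]
WantedBy=multi-user.target
```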
In this way I’m able to set up AWS EC2 instances or DigitalOcean droplets where a bunch of game servers spin up and report their existence back to a backend game-services API. So far it’s working, but this part of my project is still in development.
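Concretely, the spin-up-and-report part is just a template unit plus an ExecStartPost hook; a sketch of what I mean (ports, paths, and the API endpoint are all placeholders):

```ini
# game-server@.service -- one instance per port, e.g. game-server@7777
[Unit]
Description=Game server on port %i
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/game-server --port %i
# report this instance to the backend once it's up (placeholder endpoint)
ExecStartPost=/usr/bin/curl -fsS -X POST -d "host=%H&port=%i" https://api.example.com/servers/register
Restart=on-failure

[Install]
WantedBy=multi-user.target
```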
I used to target containerizing my apps, which adds complexity, but in AWS I often have to care about VMs as resources anyway (e.g. AWS GameLift requires me to spin up VMs, same with AWS EKS). I’m still going back and forth between containerizing and using systemd; having a local stack easily spun up via docker compose is nice, but with systemd what I write locally is basically what runs in the prod environment, and there’s less waiting for container builds and such.
I share all of this in case there’s a gray beard wizard out there who can offer opinions. I have a tendency to explore and research (it’s fuuun!) so I’m not sure if I’m on a “this is cool and a great idea” path or on a “nobody does this because <reasons>” path.
https://docs.podman.io/en/latest/markdown/podman-systemd.uni...
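For the click-averse: that's Quadlet. You drop a .container file under ~/.config/containers/systemd/ and podman's generator turns it into a regular service at daemon-reload. A minimal sketch:

```ini
# myapp.container -- becomes myapp.service
[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Install]
WantedBy=default.target
```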
(In fact, nothing prevents anyone from extracting and repackaging the sysvinit generator, now that I think of it).
https://adamgradzki.com/lightweight-development-sandboxes-wi...
You provide us a Docker image, and we unpack it, turn it into a VM image, and run as many instances as you want side by side, with CPU affinity and NUMA awareness, obviating the Docker network stack for latency/throughput reasons - since you can
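(If anyone wants to approximate the placement part themselves, systemd exposes those knobs directly; `run-guest` here is a hypothetical stand-in for the actual guest launch command:)

```ini
# pin one guest's service to a core range and a NUMA node
[Service]
ExecStart=/usr/local/bin/run-guest %i
CPUAffinity=0-3
NUMAPolicy=bind
NUMAMask=0
```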
They had tried Nomad, Agones, and raw k8s before that.
As a hobbyist, part of me wants the VM abstracted completely (which may not be realistic). I want to say “here’s my game server process, it needs this much cpu/mem/network per unit, and I need 100 processes” and not really care about the underlying VM(s), at least until later. The closest thing I’ve found to this is AWS Fargate.
Also, holy smokes, if you were part of the team that architected this solution, I’d love to pick your brain.
By making it an “us” problem to run the infrastructure at a good cost, and by being cheaper than AWS for us to run, meaning we could take no profit on cloud VMs, making us cost-competitive as hell.
At a previous job, we used Azure Container Apps - it’s what you _want_ Fargate to be. AIUI, Google Cloud Run is pretty much the same deal, but I’ve no experience with it. I’ve considered deploying them as Lambdas in the past too, depending on session length…
Still, I can see the draw for independent devs to use docker compose. For teams and orgs, though, it makes sense to use podman and systemd for the smaller stuff or dev, and then literally export the config as Kubernetes YAML.
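That export is built in, for what it's worth (podman 4.x syntax; the pod name is an example):

```sh
# emit Kubernetes YAML for an existing container or pod...
podman kube generate mypod > mypod.yaml
# ...and replay it under podman later
podman kube play mypod.yaml
```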
This all probably speaks to my odd prioritization: I want to understand things as well as use them. I’ve had to step back and realize part of the fun I have in pursuing these projects is the research.
The closer you get to 100% resource utilization the more regular your workload has to become. If you can queue requests and latency isn't a problem, no problem, but then you have a batch process and not a live one (obviously not for games).
The reason is that live work doesn't come in regular beats; it comes in clusters that scale in a fractal way. If your long-term mean is one request per second, what actually happens is you get five requests in one second, three seconds with one request each, one second with two requests, and five seconds with zero requests (you get my point): "fractal burstiness."
You have to have free resources to handle the spikes at all scales.
Also, very many systems suffer from the processing time of a single request increasing as overall system load increases: "queuing latency blowup."
So what happens? You get a spike, get behind, and never ever catch up.
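The textbook M/M/1 queue makes the blowup concrete: with arrival rate λ and service rate μ, mean time in system is W = 1/(μ − λ). Say a request takes 10 ms to serve (μ = 100/s): at 50% utilization W is 20 ms, at 90% it's 100 ms, and at 99% it's a full second. And that assumes well-behaved Poisson arrivals; real traffic is burstier, so this understates the pain.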
https://en.wikipedia.org/wiki/Network_congestion#Congestive_...
I’ve also done Microsoft Orleans clusters and still recommend the single PID, multiple containers/processes approach. If you can avoid Orleans and Kubernetes and all that, so much the better. It just adds complexity to this setup.
> Required minimum versions of following components are planned to be raised in v260:
> * Linux kernel >= 5.10 (recommended >= 5.14),
Don't these two statements contradict each other?
v259? [cue https://youtu.be/lHomCiPFknY]