You can define rootless containers that run as systemd services under unprivileged users, then use machinectl to log in as that user and interact with systemctl.
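For example (a sketch; "svc" and the unit name are placeholders, and lingering must be enabled so the user's services run without an active login session):

    sudo loginctl enable-linger svc           # let svc's user units run at boot
    sudo machinectl shell svc@.host           # log in as the unprivileged user
    systemctl --user status myapp.service     # interact with their user units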
Autoscaling fleet - the image boots, pulls the container from the registry, and starts it on the instance
1:1 relationship between instance and container - and they’re running 4XLs
When you get past the initial horror it’s actually beautiful
To me that's why compose is neat. It's simple. Works well with rootless podman also.
> Of course, as my luck would have it, Podman integration with systemd appears to be deprecated already and they're now talking about defining containers in "Quadlet" files, whatever those are. I guess that will be something to learn some other time.
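(For reference: a Quadlet file is just a systemd unit with a [Container] section that Podman expands into a service. A minimal rootless sketch, with image, port, and names as placeholders:

    # ~/.config/containers/systemd/myapp.container
    [Unit]
    Description=My app container

    [Container]
    Image=docker.io/library/nginx:latest
    PublishPort=8080:80

    [Install]
    WantedBy=default.target

After a `systemctl --user daemon-reload`, Quadlet generates myapp.service, so start/stop/logs go through the usual systemd tooling.)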
really raw-dogging it here but I got tired of endless json-inside-yaml-inside-hcl. ansible yaml is about all I want to deal with at this point.
This limitation creates numerous headaches. Instead of Deployments, I'm stuck with manual docker compose up/down commands over SSH. Rather than using Ingress, I have to rely on Traefik's container discovery functionality. Recently, I even wrote a small script to manage crontab idempotently because I can't use CronJobs. I'm constantly reinventing solutions to problems that Kubernetes already solves—just less efficiently.
What I really wish for is a lightweight alternative offering a Kubernetes-compatible API that runs well on inexpensive VPS instances. The gap between enterprise-grade container orchestration and affordable hobby hosting remains frustratingly wide.
But you've already said yourself that the cost of using K8s is too high. In one sense, you're solving those problems more efficiently; it just depends on the axis you use to measure things.
Put another way, in my experience running clusters, $(ps auwx) or its friend $(top) always shows etcd or sqlite generating all of the "WHAT are you doing?!" load, and those also represent the actual risk to running kubernetes, since the apiserver is mostly stateless[1]
1: but holy cow watch out for mTLS because cert expiry will ruin your day across all of the components
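On kubeadm-managed clusters (not k3s) the check is at least cheap; something like:

    kubeadm certs check-expiration
    # or inspect a component cert directly:
    openssl x509 -enddate -noout -in /etc/kubernetes/pki/apiserver.crt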
Still on k3s, still love it.
My cluster is currently hosting 94 pods across 55 deployments, using 500m CPU (half a core) on average, spiking to 3 cores under moderate load, and 25 GB of RAM. The biggest RAM hog is Jellyfin (which appears to have a slow leak and gets restarted when it hits 16 GB, although it's currently streaming to 5 family members).
The cluster is exclusively recycled old hardware (4 machines), mostly old gaming machines. The most recent is 5 years old, the oldest is nearing 15 years old.
The nodes are bare Arch Linux installs - which are wonderfully slim, easy to configure, and light on resources.
It burns 450 watts on average, which is higher than I'd like, but mostly because I have Jellyfin and whisper/willow (self-hosted home automation via voice control) as GPU-accelerated loads - so I'm running an old NVIDIA 1060 and a 2080.
Everything is plain old yaml, I explicitly avoid absolutely anything more complicated (including things like helm and kustomize - with very few exceptions) and it's... wonderful.
It's by far the least amount of "dev-ops" I've had to do for self hosting. Things work, it's simple, and spinning up a new service is a new folder and 3 new yaml files (0-namespace.yaml, 1-deployment.yaml, 2-ingress.yaml), which are just copied and edited each time.
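For flavor, the three-file pattern looks roughly like this (names, image, and host are illustrative placeholders; the ingress file also carries the Service the Ingress points at):

    # 0-namespace.yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      name: myservice

    # 1-deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myservice
      namespace: myservice
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: myservice
      template:
        metadata:
          labels:
            app: myservice
        spec:
          containers:
            - name: myservice
              image: ghcr.io/example/myservice:latest
              ports:
                - containerPort: 8080

    # 2-ingress.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: myservice
      namespace: myservice
    spec:
      selector:
        app: myservice
      ports:
        - port: 80
          targetPort: 8080
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myservice
      namespace: myservice
    spec:
      rules:
        - host: myservice.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: myservice
                    port:
                      number: 80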
Any three machines can go down and the cluster stays up (MetalLB is really, really cool - ARP/NDP announcements mean any machine can announce as the primary load balancer and take the configured IP). Sometimes services take a minute to reallocate (Jellyfin gets priority over willow if I lose a GPU, and can also deploy with CPU-only transcoding as a fallback), and I haven't tried to be clever about getting 100% uptime because I mostly don't care. If I'm down for 3 minutes, it's not the end of the world. I have a couple of commercial services in there, but it's free hosting for family businesses; they can afford to be down an hour or two a year.
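The MetalLB side is just a couple of small manifests; a sketch with a placeholder address range (CRD names per the current MetalLB docs):

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: home-pool
      namespace: metallb-system
    spec:
      addresses:
        - 192.168.1.240-192.168.1.250
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: home-l2
      namespace: metallb-system
    spec:
      ipAddressPools:
        - home-pool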
Overall - I'm not going back. It's great. Strongly, STRONGLY recommend k3s over microk8s. Definitely don't want to go back to single machine wrangling. The learning curve is steeper for this... but man do I spend very little time thinking about it at this point.
I've streamed video from it from as far away as literally the other side of the world (GA, USA -> Taiwan). Amazon/Google/Microsoft have everyone convinced you can't host things yourself. Even for tiny projects, people default to VPSes in a cloud. It's a ripoff. Put an old laptop in your basement - a faster machine, for free. At GCP prices I have $30k/year worth of cloud compute in my basement, because GCP is a god damned rip off. My costs are $32/month in power plus a network connection I already have to have, and it's replaced hundreds of dollars a month in subscription costs.
For personal use-cases... basement cloud is where it's at.
I have a couple of dedicated servers I fully manage with ansible. It's docker compose on steroids. Use traefik and labeling to handle reverse proxying and TLS certs in a generic way, with authelia as a simple auth provider. There are a lot of example projects on GitHub.
A weekend of setup and you have a pretty easy to manage system.
One can read more here: https://doc.traefik.io/traefik/routing/providers/docker/
This obviously has some limits and becomes significantly less useful when one requires more complex proxy rules.
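For the simple cases, the pattern is just compose labels that Traefik's docker provider picks up at runtime (router name, domain, and cert resolver below are placeholders):

    services:
      whoami:
        image: traefik/whoami
        labels:
          - traefik.enable=true
          - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
          - traefik.http.routers.whoami.tls.certresolver=letsencrypt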
I hate sounding like an Oracle shill, but Oracle Cloud's Free Tier is hands-down the most generous. It can support running quite a bit, including a small k8s cluster[1]. Their managed k8s control plane is also free.
They'll give you 4 ARM64 cores and 24 GB of RAM for free. You can split this into 1-4 nodes, depending on what you want.
I think that's only if you are literally on their free tier, vs. having a billable account that doesn't accumulate enough charges to be billed.
Similar to the sibling comment - you add a credit card and set yourself up to be billed (which removes you from the "free tier"), but you are still granted the resources monthly for free. If you exceed your allocation, they bill the difference.
So choose your home region carefully. Also, note that some regions have multiple availability domains (OCI-speak for availability zones) but some only have one AD. Though if you're only running one free instance then ADs don't really matter.
Or maybe look into Kamal?
Or use DigitalOcean's App Platform. Git integration, cheap, just run a container. But get your Postgres from a cheaper VC-funded shop :)
For single-server setups, it uses k3s, which takes up ~200MB of memory on your host machine. It's not ideal, but the pain of trying to wrangle docker deployments, and the cheapness of Hetzner, made it worth it.
Just the other day I found a cluster I had inadvertently left running on my MacBook using Kind. It literally had three weeks of uptime (running a full stack of services), and everything was still working, even with the system getting suspended repeatedly.
"What if I could define a systemd unit that managed a service across multiple nodes" leads naturally to something like k8s.
It's basically just this command once you have compose.yaml: `docker compose up -d --pull always`
And then the CI setup is this:
    scp compose.yaml user@remote-host:~/
    ssh user@remote-host 'docker compose up -d --pull always'
The benefit here is that it is simple and also works on your development machine. Of course, if the side goal is also to do something fun and cool and learn, then Quadlet/k8s/systemd are great options too!
Edit: Nevermind, I misunderstood the article from just the headline. But I'm keeping the comment as I find the reference funny
Check it out. Doron
I host all of my hobby projects on a couple of Raspberry Pi Zeros using systemd alone, zero containers. I haven't had a problem since I started. Single binaries are super easy to set up and things rarely break, and you get auto-restart and launch at startup.
All of the binaries get built on GitHub using Actions, and when I need to update something I log in over ssh and run a script that uses a GitHub token to download and replace the binary. If something is not okay, I also have a rollback script that switches things back to the previous setup. It's as simple as it gets, and it's been my go-to for 2 years now.
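The whole pattern fits in one small unit file; a sketch (paths and names are hypothetical):

    # /etc/systemd/system/myapp.service
    [Unit]
    Description=My hobby service
    After=network-online.target
    Wants=network-online.target

    [Service]
    ExecStart=/opt/myapp/myapp
    Restart=always
    RestartSec=5

    [Install]
    WantedBy=multi-user.target

The update script then boils down to: copy the running binary aside, download the new one with the token, move it into place, and `systemctl restart myapp`; rollback is the same dance using the saved copy.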
- is there downtime? (old service down, new service hasn't started yet)
- does it do health checks before directing traffic? (the process is up, but its HTTP service hasn't initialized yet)
- what if the new process fails to start, how do you rollback?
Or is this solved with nginx sitting in front of the containers? Or does systemd have a built-in solution? Articles like this often omit such details. Or does no one care about occasional downtime?
It helped, of course, that the people writing them knew what they were doing.
[1] ParticleOS: https://github.com/systemd/particleos
I even run my entire Voron 3D printer stack with podman-systemd so I can update and rollback all the components at once, although I'm looking at switching to mkosi and systemd-sysupdate and just update/rollback the entire disk image at once.
The main issues are: 1. A lot of people just distribute docker-compose files, so you have to convert them to systemd units. 2. A lot of docker images have a variety of complexities around user/privilege setup that you don't need with podman. Sometimes you need to do annoying userns idmapping, especially if a container refuses to run as root and/or switches to another user.
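For the idmapping case, recent rootless podman's keep-id options usually cover it; e.g. (a sketch, assuming the image insists on UID/GID 1000):

    podman run --userns=keep-id:uid=1000,gid=1000 ...

or, in a Quadlet file, the equivalent `UserNS=keep-id:uid=1000,gid=1000` line.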
Overall, though, it's way less complicated than any k8s (or k8s variant) setup. It's also nice to have everything integrated into systemd and journald instead of being split in two places.