Not only that, but it helps to eliminate the very real risk that you get kicked off of a platform you depend on, without recourse. Imagine if you lost your Gmail account. I'd bet that most normies would be in deep shit, since that's basically their identity online, and they need it to reset passwords and maybe even to log into things. I bet there are a non-zero number of HN commenters who would be fucked if they so much as lost their Gmail account. You've got to at least own your own email identity! Rinse and repeat for every other online service you depend on. What if your web host suddenly deleted you? Or AWS? Or Spotify or Netflix? Or some other cloud service? What's your backup? If your answer is "a new cloud host", you're just trading one set of identical problems for another.
Oh, now you don't only self-host, now you have to have space to keep the gear, plan backups, install updates, and it would be good to test updates so some bug doesn't mess up your system.
Oh, and you know, it would be bad to have a power outage while you're installing updates or running backups, so now you need a UPS.
Oh, you know what, my UPS turned out to be faulty and it f'd up the HDD in my NAS.
No, I don't have time to deal with any of it anymore; I have other things to do with my life ;)
Note: I've got all the things you mentioned, down to the UPSes, set up in my garage, as well as multiple levels of backups. It's not perfect, but it works for me without much time input versus the utility it provides. To each their own.
Is it really worth going through so much effort to mitigate that risk?
Cloud is just someone else's computer. These systems aren't special. Yes, they are impressively engineered to deal with the scale they handle, but when systems are smaller, they can get a lot simpler. I think as an industry we have conflated distributed systems with really hard engineering problems, when what really matters for downstream complexity is the level of abstraction at which the distribution happens.
How far do we take this philosophy?
There are alternatives that should be promoted.
Note that I'm not saying you shouldn't self-host email or anything else. But it's probably more risky for 99% of people compared to just making sure they can recover their accounts.
And good luck getting anyone from Google to solve your problem assuming you get to a human.
Google will never comment on the reasons they disable an account, so all you've read are the unilateral claims of people who may or may not be admitting what they actually did to lose their accounts.
Domains are cheap; never use an email address that's email-provider-specific. That's orthogonal to whether you host your own email or use a professional service to do it for you.
I will lose some email history, but at least I don’t lose my email future.
However, you can't own a domain; you are just borrowing it. There is still a risk it gets shut down too, but I don't think that is super common.
I back up all my email every day, independent of my hosting provider. I have an automatic nightly sync to my laptop, which happens right before my nightly laptop backups.
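The exact tooling isn't specified here; a minimal sketch of that kind of nightly pull, assuming an IMAP mailbox synced with mbsync (isync) and an arbitrary backup tool such as restic (both already configured), might look like this in the laptop's crontab:

# pull all configured mail channels at 01:30, then back up the local Maildir
# (mbsync channels live in ~/.mbsyncrc; the restic repo and credentials are assumed to be set up)
30 1 * * * /usr/bin/mbsync -a >> ~/.local/state/mbsync.log 2>&1
45 1 * * * /usr/bin/restic backup ~/Maildir >> ~/.local/state/backup.log 2>&1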
I self host my mails but still use a freemail for the contact address for my providers. No chicken and egg problem for me.
But running it is a different issue. Notably, I have no idea, and have not seen a resource talking about, troubleshooting and problem solving for a self-hosted service, particularly with regard to interoperability with other providers.
As a contrived example, if Google blackballs your server, who do you talk to about it? How do you know? Do they have email addresses, or procedures for resolution, in the error messages you get when talking with them?
Or those other global IP-ban sites.
I’d like to see a troubleshooting guide for email. Not so much for the protocols like DKIM, or setting DNS up properly, but in dealing with these other actors that can impact your service even if it’s, technically, according to Hoyle, set up and configured properly.
It's nearly impossible to get 100% email deliverability if you self-host and don't use an SMTP relay. It might work if all your contacts are with a major provider like Google, but otherwise you'll get 97% deliverability and then that one person using sbcglobal/att won't ever get your email for a 4-week period, or that company using Barracuda puts your email in a black hole. You put in effort to get your email server whitelisted, but many email providers don't respond or only give you a temporary fix.
However, you can still self-host most of the email stack, including, most importantly, storage of your email, by using an SMTP relay like AWS SES, Postmark, or Mailgun. It's quick and easy to switch SMTP relays if the one you're using doesn't work out. In Postfix you can choose to use a relay only for certain domains.
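To make that last point concrete, a rough sketch of per-domain relaying in Postfix via a transport map; the relay hostname and port are placeholders, and the SASL credentials the relay would need are omitted:

# route only the problem domains through an external relay; everything else goes direct
cat > /etc/postfix/transport <<'EOF'
att.net         smtp:[smtp.relay.example.com]:587
sbcglobal.net   smtp:[smtp.relay.example.com]:587
EOF
postconf -e 'transport_maps = hash:/etc/postfix/transport'
postmap /etc/postfix/transport
systemctl reload postfix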
The reason why I bring this up is that many early adopters of Gmail switched to it, or grew to rely upon it, because the alternatives were much worse. The account through your ISP? Gone as soon as you switched to another ISP, a switch that may have been necessary if you moved to a place the ISP did not service. University email address? Gone soon after graduation. Employer's email address? Gone as soon as you switched employers (and risky to use for personal purposes anyhow). Another dedicated provider? I suspect most of those dedicated providers are now gone.
Yep, self-hosting can sort of resolve the problem. The key words being "sort of". Controlling your identity doesn't mean terribly much if you don't have the knowledge to set up and maintain a secure email server. If you know how to do it, and no one is targeting you in particular, you'll probably be fine. Otherwise, all bets are off. And you don't have total control anyhow: you still have the domain name to deal with, after all. You should be okay if you do your homework and stay on top of renewals, almost certainly better off than you would be with Google, but again, it is only as reliable as you are.
There are reasons why people go with Gmail and a handful of other providers. In the end, virtually all of those people will be better off, at least in the short to mid term.
Obviously you should have enough technical knowledge to do a rough sanity check on the reply, as there's still a chance you get stupid shit out of it, but mostly it's really efficient for getting started with some tooling or programming language you're not familiar with. You can perfectly well do without; it just takes longer. Plus, you're not dependent on it to keep your stuff running once it's set up.
I'm using it to learn unfamiliar languages and frameworks.
But if I didn't know how to do it myself, it'd be useless: the subtle bugs Claude occasionally includes would be showstopper issues instead of a quick fix.
It's heartening in the new millennium to see some younger people show awareness of the crippling dependency on big tech.
Way back in the stone ages, before Instagram and TikTok, when the internet was new, anyone having a presence on the net was rolling their own.
It's actually only gotten easier, but the corporate candy has gotten exponentially more candyfied, and most people think it's the most straightforward solution to getting a little corner on the net.
Like the fluffy fluffy "cloud", it's just another shrink-wrap of vendor lockin. Hook 'em and gouge 'em, as we used to say.
There are many ways to stake your own little piece of virtual ground. Email is another whole category. It's linked to in the article, but still uses an external service to access port 25. I've found it not too expensive to have a "business" ISP account, that allows connections on port 25 (and others).
Email is much more critical than having a place to blog on, and port 25 access is only the beginning of the "journey". The modern email "reputation" system is a big-tech blockade between people and the net, but it can, and should, be overcome by all individuals with the interest in doing so.
https://www.purplehat.org/?page_id=1450
P.S. That was another place the article could have mentioned a broader scope; there are always the BSDs, not just Linux...
It raised some interesting questions:
- How long can I be productive without the Internet?
- What am I missing?
The answer for me was that I should archive more documentation, and that NixOS is unusable offline if you do not host a cache (which is pretty bad).
Ultimately I also found that self-hosting most of what I need and being offline really improves my productivity.
• Info¹ documentation, which I read directly in Emacs. (If you have ever used the terminal-based standalone “info” program, please try to forget all about it. Use Emacs to read Info documentation, and preferably use a graphical Emacs instead of a terminal-based one; Info documentation occasionally has images.)
• Gnome Devhelp².
• Zeal³
• RFC archive⁴ dumps provided by the Debian "doc-rfc" package⁵.
1. https://www.gnu.org/software/emacs/manual/html_node/info/
2. https://wiki.gnome.org/Apps/Devhelp
I have a bash alias that uses wget to recursively save full websites (rough sketch after this list)
yt-dlp will download videos you want to watch
Kiwix will give you a full offline copy of Wikipedia
My email is saved locally. I can queue up drafts offline
SingleFile extension will allow you to save single pages really effectively
Zeal is a great open source documentation browser
Unfortunately it doesn't work well on single page apps. Let me know if anyone has a good way of saving those
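The wget alias mentioned above isn't spelled out; a hedged sketch of one way to define it (recursion depth and politeness options are a matter of taste):

# mirror a site for offline reading: follow links, rewrite them to local paths,
# grab page requisites (CSS/images), and don't wander above the starting URL
alias mirror='wget --mirror --convert-links --adjust-extension --page-requisites --no-parent'
# usage: mirror https://example.com/docs/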
So you end up with something like this [1]:
> chromium --headless --window-size=1920,1080 --run-all-compositor-stages-before-draw --virtual-time-budget=9000 --incognito --dump-dom https://github.com | monolith - -I -b https://github.com -o github.html
- [0] https://github.com/Y2Z/monolith
- [1] https://github.com/Y2Z/monolith?tab=readme-ov-file#dynamic-c...
There are certain scenarios you have no control over (upstream problems), but others have contingencies. I enjoy working out these contingencies and determining whether the costs are worth the likelihoods - and even if they're not, that doesn't necessarily mean I won't cater for it.
I have long thought that my homelab/tools need hard cases and a low-power, modular design. Now I am certain of it. Not that I need first-world technology hosting in emergency situations, but I am now staying with family for at least a few weeks, maybe months, and it would be amazing to just plonk a few hard cases down and be back in business.
But yeah, things like NixOS and Gentoo get very unhappy when they don't have Internet access for more things than you'd expect. And mirroring all the packages usually isn't an option.
Ubuntu and CentOS at least HAD the concept of a "DVD" source, though I doubt it is used much anymore.
I think a cache or other repository backup system is important for any software using package managers.
Relying on hundreds if not thousands of individuals to keep their part of the dependency tree available and working is one of the wildest parts of modern software development to me. For end-user software I much prefer a discrete package with all dependencies bundled. That's what sits on the hard drive in practice either way.
You can't install or update new software that you'd pull from the web, but you couldn't do that with any other system either. I can't remember specifically trying but surely if you're just e.g. modifying your nginx config, a rebuild will work offline?
But surprisingly, the day I needed to change a simple network setting without the internet, I got stuck! I still can't explain why.
So I now feel we are rolling the dice a bit with an offline NixOS.
Self-hosting is a pain in the ass: it needs Docker updates, things break sometimes, sometimes it's only you and not anyone else so you're left alone searching for the solution, and even when it works it's often a bit clunky.
I have an extremely limited list of self-hosted tools that just work and are saving me time (the first one on that list would be Firefly), but god knows I wasted quite a bit of my time setting up stuff that eventually broke and that I just abandoned.
Today I'm very happy with paying for stuff if the company respects privacy and has decent pricing.
Plus I've found nearly every company will betray your trust in them at some point so why even give them the chance? I self host Home Assistant, but they seem to be the only company that actively enacts legal barriers for themselves so if Paulus gets hit by a bus tomorrow the project can't suddenly start going against the users.
There's your problem. Docker adds indirection on storage, networking, etc., and also makes upgrades difficult as you have to either rebuild the container, or rely on others to do so to get security and other updates.
If you stick to things that can be deployed as an upstream OS vendor package, or as a single binary (go-based projects frequently do this), you'll likely have a better time in the long run.
Single binary sometimes works, but means you need more memory and disk space. (granted much less a concern today than it was back in 1996 when I first started self hosting, but it still can be an issue)
Conflicting versions, I'll give you that, but how frequently does that happen, especially if you mostly source from upstream OS vendor repos?
The most frequent conflict is if everything wants port 80/443, and for most self-hosted services you can have them listen on internal ports and be fronted by a single instance of a webserver (take your pick of apache/nginx/caddy).
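A rough sketch of that pattern with nginx, using Navidrome (mentioned elsewhere in this thread) purely as an example; the hostname, certificate paths, and backend port are placeholders to adapt:

# one small vhost per service, each proxying to an internal-only port
cat > /etc/nginx/conf.d/music.conf <<'EOF'
server {
    listen 443 ssl;
    server_name music.example.com;
    ssl_certificate     /etc/letsencrypt/live/music.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/music.example.com/privkey.pem;
    location / {
        proxy_pass http://127.0.0.1:4533;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
nginx -t && systemctl reload nginx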
If your images use the same base container then the libraries exist only once and you get the same benefits of a non-docker setup.
This depends on the storage driver though. It is true at least for the default and most common overlayfs driver [1]
[1] https://docs.docker.com/engine/storage/drivers/overlayfs-dri...
Let's say some Heartbleed (which affected OpenSSL, primarily) happens again. With native packages, you update the package, restart a few things that depend on it with shared libraries, and you're patched. OS vendors are highly motivated to do this update, and often get pre-announcement info around security issues so it tends to go quickly.
With Docker, someone has to rebuild every container that contains a copy of the library. This will necessarily lag and be delivered in a piecemeal fashion: if you have 5 containers, all of them need their own updates, which, if you don't self-build and self-update, can take a while and is substantially more work than `apt-get update && apt-get upgrade` plus a reboot.
Incidentally, the same applies for most languages that prefer/require static linking.
As mentioned elsewhere in the thread, it's a tradeoff, and people should be aware of the tradeoffs around update and data lifecycle before making deployment decisions.
I think you're grossly overblowing how much work it takes to refresh your containers.
In my case, I have personal projects with nightly builds that pull the latest version of the base image, and services are just redeployed right under your nose. All it took to do this was to add a cron trigger to the same CI/CD pipeline.
TBH, a slower upgrade cycle may be tolerable inside a private network that doesn't face the public internet.
What? You think the same guys who take an almost militant approach to how they build and run their own personal projects would somehow fail to be technically inclined to automate tasks?
The last thing I want is to build my own CI/CD pipeline and tend it.
> Docker adds indirection on storage, networking, etc.,
What do you mean by "indirection"? It adds OS level isolation. It's not an overhead or a bad thing.
> makes upgrades difficult as you have to either rebuild the container, or rely on others to do so to get security and other updates.
Literally the entire self-hosted stack can be updated and redeployed with just:
docker compose pull
docker compose build
docker compose up -d
Self-hosting with something like docker compose means that your server is entirely describable in one docker-compose.yml file (or a set of files if you like to break things apart) plus storage. You have a clean separation between your applications/services with their versions/configurations (docker-compose.yml), and your state/storage (usually a NAS share or a drive mount somewhere).
Not only are you no longer dependent on a particular OS vendor (wanna move your setup to a cheap instance on a random VPS provider but they only have CentOS for some reason?), but the clean separation of all the parts also allows you to very easily scale individual components as needed.
There is one place where everything goes. With an OS vendor package, every time you need to check: is it in a systemd unit? Is it a config file in /etc/? Wth?
Then the next time you're trying to move hosts, you forget the random /etc/foo.d/conf change you made. With docker-compose, that change has to be stored somewhere for docker-compose to mount or build from, so moving is trivial.
It's not NixOS, sure, but it's much, much better than a list of apt or dnf or yum packages and scripts to copy files around.
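As a hedged illustration of the "whole server in one file" idea, not anyone's actual setup, a minimal compose file for a single service; image, ports, and paths follow Navidrome's public Docker docs as I recall them, so verify before relying on it:

# describe the service once, then bring it up; state lives under /srv
cat > docker-compose.yml <<'EOF'
services:
  navidrome:
    image: deluan/navidrome:latest
    ports:
      - "4533:4533"
    volumes:
      - /srv/navidrome/data:/data
      - /srv/music:/music:ro
    restart: unless-stopped
EOF
docker compose up -d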
Isolation technologies are also available outside of docker, through systemd, jails, and other similar tools.
Your comment is technically correct, but factually wrong. What you are leaving out is the fact that, in order to do what Docker provides out of the box, you need to come up with a huge custom Ansible script to even implement the happy path.
So, is your goal to self host your own services, or to endlessly toy with the likes of Ansible?
This. These anti-containerisation comments read like something someone oblivious to containers would say if they were desperately grabbing onto tech from 30 years ago and refused to even spend 5 minutes exploring anything else.
Containers as practiced by many are basically static linking and "declarative" configuration done poorly because people aren't familiar with dynamic linking or declarative OS config done well.
I don't think so. Containerization solves about 4 major problems in infrastructure deployment as part of its happy path. There is a very good reason why the whole industry pivoted towards containers.
> . I've used docker and k8s plenty professionally, and they're both vastly more work to maintain and debug than nixos and systemd units (...)
This comment is void of any credibility. To start off, you suddenly dropped k8s into the conversation. Think about using systemd to set up a cluster of COTS hardware running a software-defined network, and then proclaim it's easier.
And then, focusing on Docker, think about claiming that messing with systemd units is easier than simply running "docker run".
Unbelievable.
The point is when you have experience with a Linux distribution that already does immutable, declarative builds and easy distribution, containers (which are also a ~2 line change to layer into a service) are a rather specific choice to use.
If you've used these things for anything nontrivial, yes systemd units are way simpler than docker run. Debugging NAT and iptables when you have multiple interfaces and your container doesn't have tcpdump is all a pain, for example. Dealing with issues like your bind mount not picking up a change to a file because it got swapped out with a `mv` is a pain. Systemd units aren't complicated.
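For the specific "container doesn't have tcpdump" pain point, one workaround (a sketch; the container name is a placeholder) is to run the host's tcpdump inside the container's network namespace:

# find the container's init PID, then enter only its network namespace
PID=$(docker inspect -f '{{.State.Pid}}' mycontainer)
nsenter -t "$PID" -n tcpdump -i any -nn port 80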
No, it sounds like a poorly thought through strawman. Even Docker supports Docker swarm mode and many k8s distributions use containerd instead of Docker, so it's at best an ignorant stretch to jump to conclusions over k8s.
> Containers per se are just various Linux namespace features, and are unrelated to e.g. distribution or immutable images. So it makes sense to mention experience with the systems that are built around containers.
No. Containers solve many operational problems, such as ease of deployment, setup software defined networks, ephemeral environments, resource management, etc.
You need to be completely in the dark to frame containerization as Linux namespace features. It's at best a naive strawman, built upon ignorance.
> If you've used these things for anything nontrivial, yes systemd units are way simpler than docker run.
I'll make it very simple to you. I want to run postgres/nginx/keycloak. With Docker, I get everything up and running with a "docker run <container image>".
Now go ahead and show how your convoluted way is "way simpler".
Nix makes it trivial to set up ephemeral environments: make a shell.nix file and run `nix-shell` (or, if you just need a thing or two, do e.g. `nix-shell -p ffmpeg` and now you're in a shell with ffmpeg; when you close that shell it's gone). You might use something like `direnv` to automate that.
NixOS makes it easy to define your networking setup through config.
For your last question:
services.postgres.enable = true;
services.nginx.enable = true;
services.keycloak.enable = true;
If you want, you can wrap some or all of those lines in a container, e.g.:
containers.backend = {
config = { config, pkgs, lib, ... }: {
services.postgres.enable = true;
services.keycloak.enable = true;
};
};
Though you'd presumably want some additional networking and bind mount config (e.g. putting it into its own network namespace with a bridge, or maybe binding domain sockets that nginx will use, plus your data partitions).
Docker has a lot of use cases, but self-hosting is not one of them.
When self-hosting you wanna think long term, and accept the fact that you will lose interest in the fiddling after a while. So sticking with software packaged in a good distribution is probably the way to go. This is the forgotten added value of a Linux or BSD distribution: a coherent system with maintenance and an easy upgrade path.
The exception is things like Umbrel, which I would say use docker as their package manager and maintain everything, so that is OK.
Docker is THE solution for self hosting stuff since one often has one server and runs a ton of stuff on it, with different PHP, Python versions, for example.
Docker makes it incredibly easy to run a multitude of services on one machine, however different they may be.
And if you ever need to move to a new server, all you need to do is move the volumes (if even necessary) and run the containers on the new machine.
So YES, self hosting stuff is a huge use case for docker.
But before Docker there was the virtualisation hype, when people swore every software/service needs its own VM. VM or containers, we end up with Frankenstein systems with dozens of images on one machine. And with Docker we probably lost a lot of security.
So this is fine, I guess, in the corporate world, because things are messy anyway and there are many other constraints (hence the success of containers).
But in your home, serving a few apps for a few users you actually don't need that gas factory.
If you wanna run everything on your home lab with Docker or Kubernetes because you wanna build a skillset for work or reuse your professional skills, fine go for it. But everything you think is easy with Docker is actually simpler and easier with raw Linux or BSD.
I have been around since long before Docker was a thing, so yes I have been there, serving apps on bare metal and then using unwieldy VMs.
It doesn't matter if it's my home lab or some SaaS server: how is it simpler to serve 3 PHP apps with different PHP versions on raw Linux than simply using Docker, for example?
This is called progress, and that's why it's a popular tool. Not because of some "hype" or whatever you are implying.
You got me, I have no idea how to manage PHP. Intuitively I would try to solve that problem with Nix first.
But here, if you can't run two PHP versions on the same OS, I would say the flaw is at the PHP level or in how distributions handle it. So personally, in my home, I would probably avoid it and be grateful that containers solve this problem for me at work.
To end on a more positive note, if you like Docker I would recommend you check out FreeBSD Jails too, because you are probably going to love them.
I used both, but Jails have a level of integration with the OS which is next level (if used with ZFS). You really get the best of both worlds.
It's the same as running multiple versions of Python (python vs python3, etc.; cue the xkcd meme pic).
In the end it's all a mess, and that's why Docker is such a nice thing.
Thanks for the heads up, I will check Jails out.
Backing up relevant configuration and data is a breeze with Docker. Upgrading is typically a breeze as well. No need to suffer with a 5-year old out of date version from your distro, run the version you want to and upgrade when you want to. And if shit hits the fan, it's trivial to roll back.
Sure, OS tools should be updated by the distro. But for the things you actually use the OS for, Docker all the way in my view.
Mostly agreed, I actually run most of my software on Docker nowadays, both at work and privately, in my homelab.
In my experience, the main advantages are:
- limited impact on host systems: uninstalling things doesn't leave behind trash, limited stability risks to host OS when running containers, plus you can run a separate MariaDB/MySQL/PostgreSQL/etc. instance for each of your software package, which can be updated or changed independently when you want
- obvious configuration around persistent storage: I can specify which folders I care about backing up and where the data that the program operates on is stored, vs all of the runtime stuff it actually needs to work (which is also separate for each instance of the program, instead of shared dependencies where some versions might break other packages)
- internal DNS which makes networking simpler: I can refer to containers by name and route traffic to them, running my own web server in front of everything as an ingress (IMO simpler than the Kubernetes ingress)... or just expose a port directly if I want to do that instead, or maybe expose it on a particular IP address such as only 127.0.0.1, which in combination with port forwarding can be really nice to have
- clear resource limits: I can prevent a single software package from acting up and bringing the whole server to a standstill, for example, by allowing it to only spike up to 3/4 CPU cores under load, so some heavyweight Java or Ruby software starting up doesn't mean everything else on the server freezing for the duration of that, same for RAM which JVM based software also loves to waste and where -Xmx isn't even a hard limit and lies to you somewhat
- clear configuration (mostly): environment variables work exceedingly well, especially when everything can be contained within a YAML file, or maybe some .env files or secrets mechanism if you're feeling fancy, but it's really nice to see that 12 Factor principles are living on, instead of me always needing to mess around with separate bind mounted configuration files
There's also things like restart policies, with the likes of Docker Swarm you also get scheduling rules (and just clustering in general), there's nice UI solutions like Portainer, healthchecks, custom user/group settings, custom entrypoints, and the whole idea of a Dockerfile saying exactly how to build an app and on top of what it needs to run is wonderful. At the same time, things do sometimes break in very annoying ways, mostly due to how software out there is packaged:
https://blog.kronis.dev/blog/it-works-on-my-docker
https://blog.kronis.dev/blog/gitea-isnt-immune-to-issues-eit...
https://blog.kronis.dev/blog/docker-error-messages-are-prett...
https://blog.kronis.dev/blog/debian-updates-are-broken
https://blog.kronis.dev/blog/containers-are-broken
https://blog.kronis.dev/blog/software-updates-as-clean-wipes
https://blog.kronis.dev/blog/nginx-configuration-is-broken
(in practice, the amount of posts/rants wouldn't change much if I didn't use containers, because I've had similar amounts of issues with things that run in VMs or on bare metal; I think that most software out there is tricky to get working well, not to say that it straight up sucks)
Been self-hosting for 35+ years. Docker's made the whole thing 300% easier — especially when thinking long term.
None of your points make any sense. Docker works beautifully well as an abstraction layer. It makes it trivially simple to upgrade anything and everything running on it, to the point that you do not even consider it as a concern. Your assertions are so far off that you managed to get all your points entirely backwards.
To top things off, you get clustering for free with Docker swarm mode.
> If you stick to things that can be deployed as an upstream OS vendor package, or as a single binary (go-based projects frequently do this), you'll likely have a better time in the long run.
I have news for you. In fact, you would be surprised to learn that nowadays you can get a full-blown Kubernetes distribution up and running on a Linux distribution after a quick snap package install.
Everything you're saying is complete overkill, even in most Enterprise environments. We're talking about a home server here for hosting eBooks and paperless documents, and you're implying Kubernetes clusters are easy enough to run and so are a good solution here. Madness.
> I have news for you.
I have news for _you_: using Docker to run anything that doesn't need it (i.e. it's the only officially supported deployment mechanism) is like putting your groceries into the boot of your car, then driving your car onto the tray of a truck, then driving the truck home because "it abstracts the manual transmission of the car with the automatic transmission of the truck". Good job, you're really showing us who's boss there.
Operating systems are easy. You've just fallen for the Kool Aid.
Not really. It defies any cursory understanding of the problem domain, and you must go way out of your way to ignore how containerization makes everyone's job easier and even trivial to accomplish.
Some people in this discussion even go to the extreme of claiming that messing with systemd to run a service is simpler than typing "docker run".
It defies all logic.
> Everything you're saying is complete overkill, even in most Enterprise environments.
What? No. Explain in detail how being able to run services by running "docker run" is "overkill". Have you ever gone through an intro to Docker tutorial?
> We're talking about a home server here for hosting eBooks and paperless documents, and you're implying Kubernetes clusters are easy enough to run and so are a good solution here. Madness.
You're just publicly stating your ignorance. Do yourself a favor and check Ubuntu's microk8s. You're mindlessly parroting cliches from a decade ago.
You'd have to go out of your way to ignore how difficult they are to maintain and secure. Anyone with a few hours of experience trying to design an upgrade path for other people's containers; security scanning of them; reviewing what's going on inside them; trying to run them with minimal privileges (internally and externally), and more, will know they're a nightmare from a security perspective. You need to do a lot of work on top of just running the containers to secure them [1][2][3][4] -- they are not fire and forget, as you're implying.
This one is my favourite: https://cheatsheetseries.owasp.org/cheatsheets/Kubernetes_Se... -- what an essay. Keep in mind someone has to do that _and_ secure the underlying hosts themselves for there is an operating system there too.
And then this bad boy: https://media.defense.gov/2022/Aug/29/2003066362/-1/-1/0/CTR... -- again, you have to do this kind of stuff _again_ for the OS underneath it all _and_ anything else you're running.
[1] https://medium.com/@ayoubseddiki132/why-running-docker-conta...
[2] https://wonderfall.dev/docker-hardening/
[3] https://www.isoah.com/5-shocking-docker-security-risks-devel...
[4] https://kubernetes.io/docs/tasks/administer-cluster/securing...
They have their place in development and automated pipelines, but when the option of running on "bare metal" is there you should take it (I actually heard someone call it that once: it's "bare metal" if it's not in a container these days...)
You should never confuse "trivial" with "good". ORMs are "trivial", but often a raw SQL statement (done correctly) is best. Docker is "good", but it's not a silver bullet that just solves everything. It comes with its own problems, as seen above, and they heavily outweigh the benefits.
> Explain in detail how being able to run services by running "docker run" is "overkill". Have you ever gone through an intro to Docker tutorial?
Ah! I see now. I don't think you work in operations. I think you're a software engineer who doesn't have to do the Ops or SRE work at your company. I believe this to be true because you're hyper-focused on the running of the containers but not the management of them. The latter is way harder than managing services on "bare metal". Running services via "systemctl" commands, Ansible Playbooks, Terraform Provisioners, and so many other options, has resulted in some of the most stable, cheap to run, capable, scalable infrastructure setups I've ever seen across three countries, two continents, and 20 years of experience. They're so easy to use and manage, the companies I've helped have been able to hire people from University to manage them. When it comes to K8s, the opposite is completely true: the hires are highly experienced, hard to find, and very expensive.
It blows my mind how people run so much abstraction to put x86 code into RAM and place it on a CPU stack. It blows my mind how few people see how a load balancer and two EC2 Instances can absolutely support a billion dollar app without an issue.
> You're just publicly stating your ignorance. Do yourself a favor and check Ubuntu's microk8s. You're mindlessly parroting cliches from a decade ago.
Sure, OK. I find you hostile, so I'll let you sit there boiling your own blood.
All I would say is: can you run that same thing without a containerisation layer? Remember that with things like ChatGPT it's _really_ easy to get a systemd unit file going for just about any service these days. A single prompt and you have a running service that's locked down pretty heavily.
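For reference, and as a hedged sketch rather than a recommendation, a unit for a single-binary service; Navidrome is used as the example only because it comes up elsewhere in this thread, and the binary path, user, and hardening options are placeholders to adjust:

# drop in a unit, then enable and start it
cat > /etc/systemd/system/navidrome.service <<'EOF'
[Unit]
Description=Navidrome music server
After=network-online.target

[Service]
User=navidrome
ExecStart=/usr/local/bin/navidrome
Restart=on-failure
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
ReadWritePaths=/var/lib/navidrome

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl enable --now navidrome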
I do run the containers as systemd user services, however, so everything starts up at boot, etc.
It's pretty simple these days, I think, to run k3s or something similar and then deploy stuff how you like via a YAML file. I'll agree, though, that if you need services to share a filesystem for some reason, it gets more complicated with storage mounts.
And then, some software would require older one and break when you update the dependencies for another package.
Docker is a godsend when you are hosting multiple tools.
For the limited stuff I host (navidrome, firefly, nginx, ...), I have yet to see a single-binary package. It doesn't seem very common in my experience.
Also an extremely limited list.
> if the company is respecting privacy
It's very rare to see companies doing it, and moreover it is hard to trust them to even maintain that stance as the years pass by.
For starters, addressing security vulnerabilities.
https://docs.docker.com/security/security-announcements/
> I kept my box running for more than 1 year without upgrading docker.
You inadvertently raised the primary point against self-hosting: security vulnerabilities. Apparently you might have been running software with known CVEs for over a year.
1) You did something to it (changed a setting, upgraded software, etc.)
2) You didn't do something to it (change a setting, upgrade a software, etc.)
3) Just because.
When it does you get the wonderful "work-like" experience, frantically trying to troubleshoot while the things around your house are failing and your family is giving you looks for it.
Self host but be aware that there's a tradeoff. The work that used to be done by someone else, somewhere else, before issues hit you is now done by you alone.
Coincidentally, I just decided to tackle this issue again on my Sunday afternoon: https://github.com/geerlingguy/ansible-role-firewall/pull/11...
Sometimes it's not fun anymore.
Indeed, no one can predict the future, but there are companies with bigger and stronger reputations than others. I pay for iCloud, for instance, because it's E2E encrypted in my country and the pricing is fair. It's been like that for years, so I don't have to set up a Baikal server for calendars, something for file archiving, something else for photos, and so on.
I'd be surprised if Apple willingly did something damaging to user privacy, for the simple reason that they have spent so much on privacy advertising; they would instantly lose a lot of credibility.
And even for stuff you self-host: yes, you can let it be and not update it for a year, but I wouldn't do that because of security issues. Something like Navidrome (a music player) is accessible from the web; no one wants to launch a VPN every time they listen to music, so it has to be updated or you may get hacked. And no one can say the Navidrome maintainer will still be there in the coming years; they could stop the project, get sick, die... there's no guarantee that others will take over the project and provide security updates.
I use RHEL/Rocky Linux exactly because of this. I don't need the latest software on my home server, and I am reasonably sure I can run yum update without messing up my system.
Most of the time, when people complain about system administration when self-hosting, it's because they're using some kind of meme distro that inevitably breaks (which is something you don't want on a server, irrespective of whether it's at work or at home).
Bonus point: I can run rootless containers with podman (orchestrated via docker-compose).
And I get professionally curated software (security patches backported, SELinux policies, high-quality management and troubleshooting tooling).
People who don't care ("I'll just pay") are especially affected, and they are the ones who should care the most. Why? Because today, businesses are more predatory, preying on the future technical dependence of their victims. Even if you don't care about FOSS, it's incredibly important to be able to migrate providers. If you are locked in, they will exploit that. Some do it so systematically that they are not interested in any other kind of business.
Also shout-out to Zulip for being open source, self hostable, with a cloud hosted service and transfer between these setups.
- kubetail: Kubernetes log viewer for the entire cluster. Deployments, pods, statefulsets. Installed via Helm chart. Really awesome.
- Dozzle: Docker container log viewing for the N150 mini pc which just runs docker not Kubernetes. Portainer manual install.
- UptimeKuma: Monitor and alerting for all servers, http/https endpoints, and even PostgreSQL. Portainer manual install.
- Beszel: Monitoring of server cpu, memory, disk, network and docker containers. Can be installed into Kubernetes via helm chart. Also installed manually via Portainer on the N150 mini pc.
- Semaphore UI: UI for running ansible playbooks. Support for scheduling as well. Portainer manual install.
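As a sense of how little setup tools like these usually need, a hedged example for Dozzle; the image name, socket mount, and port are recalled from its docs and should be verified rather than taken as gospel:

# expose read-only access to the Docker socket and browse container logs on :8080
docker run -d --name dozzle --restart unless-stopped \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -p 8080:8080 \
  amir20/dozzle:latest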
You can only rent a domain. The landlord is merciless if you miss a payment, you are out.
There are risks everywhere, and it depresses me how fragile our online identity is.
You very reasonably could replace the whole system with just "lists of trusted keys to names" if the concept has enough popular technical support.
If ICANN-approved root.zone and ICANN-approved registries are the only options.
As an experiment I created my own registry, not shared with anyone. For many years I have run my own root server, i.e., I serve my own custom root.zone to all computers I own. I have a search experiment that uses a custom TLD that embeds a well-known classification system. The TLD portion of the domain name can categorise any product or service on Earth.
ICANN TLDs are vague, ambiguous, sometimes even deceptive.
Any of your special domains will be ones your server claims as authoritative, so I don't understand why you need a root server?
Yes.
None of this is connected to the internet. It is "home lab" stuff.
I have alternatives for so-called "modern" web browsers controlled by advertising companies, too.
For all the third-party-mediated stuff on today's internet I generally have alternatives that let me have more control.
That’s a skill issue though.
I have a domain that I used to pre-pay for years in advance.
For my current main domain I had prepaid nine years in advance, and it was paid up to 2028. A couple of years ago I topped it up, and now it's prepaid up to 2032.
It's not much money (when I prepaid for 9 years I spent like 60€ or so), and you're usually saving because you're fixing the price, so you skip price hikes, inflation, etc.
Yeah, good point!
Self-hosting doesn’t mean you have to buy hardware. After a few years, low-end machines are borderline unusable with Windows, but they are still plenty strong for a Linux server. It’s quite likely you or a friend has an old laptop laying around, which can be repurposed. I’ve done this with an i3 from 2011 [1] for two users, and in 2025 I have no signs that I need an upgrade.
Laptops are also quite power efficient at idle, so in the long run they make more sense than a desktop. If you are just starting, they are a great first server.
(And no, laptops don't have a built-in UPS. I recommend everyone remove the battery before running one plugged in 24x7.)
1: https://www.kassner.com.br/en/2023/05/16/reusing-old-hardwar...
Free, yes. Power-efficient, no. Unless you switch your laptops every two years, it's unlikely to be more efficient.
Some benchmarks show the Raspberry Pi 4 idling below 3W and consuming a tad over 6W under sustained high load.
Power consumption is not an argument that's in favor of old laptops.
That is the key. The RPi works for idling, but anything else gets throttled pretty badly. I used to self-host on the RPi, but it was just not enough[1]. Laptops/mini-PCs will have a much better burstable-to-idle power ratio (the Pi's 6/3W vs a laptop's 35/8W).
1: https://www.kassner.com.br/en/2022/03/16/update-to-my-zfs-ba...
I don't have a dog in this race, but I recall that RPi's throttling issues when subjected to high loads were actually thermal throttling. Meaning, you picked up a naked board and started blasting benchmarks until it overheated.
You cannot make sweeping statements about RPi's throttling while leaving out the root cause.
So that'd take 30 years to pay back. Or, with discounted cash flow applied... Probably never.
I'm currently running Syncthing, Forgejo, Pihole, Grafana, a DB, Jellyfin, etc... on a M910 with an i5 (6th or 7th Gen) without problems.
Something with an 8th-gen i5 can be had for about 100-150 USD from eBay, and that's more than powerful enough for nearly all self-hosting needs. It supports 32-64 GB of RAM and two SSDs.
These are great and the M920q is also nice.
At 100 to 160 used these are a steal; just test the disks before you commit to long-term projects with them (some have a fair bit of wear). Their newer cousins quickly climb in price to the $300+ range (still refurb/used).
The bleeding edge of this form factor is the Minisforum MS-01. At almost 500 bucks for the no-RAM/no-storage part, it's a big performance jump for a large price jump. That isn't a terrible deal if you need dual SFP+ ports (and you might) and a free PCIe slot, but it is a big step up in cost.
I'm pissed at Lenovo for making the perfect machine for a home server, and then cheaping out by not adding the $0.50 M.2 connector on the back of the board. 2x M.2 + 1x SATA requires upgrading to the "Tall" Intel NUCs if you want 3 disks.
> if you want the resilience offered by RAID
IMHO, at that stage, you are knowledgeable enough not to listen to me anymore :P
My argument is more on the lines of using an old laptop as a gateway drug to the self-hosting world. Given enough time everyone will have a 42U rack in their basements.
RAID is NOT media- or connection-dependent and will happily do parity over mixed media and even remote block devices.
That being said, the reason why I'm afraid of not using RAID is data integrity. What happens when the single HDD/SSD in your system is near its end of life? Can it be trusted to fail cleanly or might it return corrupted data (which then propagates to your backup)? I don't know and I'd be happy to be convinced that it's never an issue nowadays. But I do know that with a btrfs or zfs RAID and the checksuming done by these file systems you don't have to trust the specific consumer-grade disk in some random laptop, but instead can rely on data integrity being ensured by the FS.
Also, if you're paranoid about drive behavior, run ZFS. It will detect such problems and surface them at the OS level (ref "Zebras All The Way Down" by Bryan Cantrill).
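In practice that surfacing is just a scrub plus a status check; a minimal sketch, with the pool name as a placeholder:

zpool scrub tank       # read every block and verify checksums
zpool status -v tank   # shows READ/WRITE/CKSUM error counters, plus any affected files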
For the same reason I don't buy laptops with soldered SSDs: if the laptop dies, chances are the SSD is still OK, and I can recover it easily.
Also, because it's fun, and probably many self-hosters used to rack servers and plug in disks in noisy, cold, big rooms, and they want to relive the fun part of that.
Way more convenient to just swap out a drive than to swap out a drive and restore from backup.
I had a look at my notes and so far the only unexpected downtime has been due to 1x CMOS battery running out after true power off, 1x VPS provider randomly powering off my reverse proxy, 2x me screwing around with link bonding (connections always started to fail a few hours later, in middle of night).
This means nothing until the need to replace one drive arises; then it's not an "if"...
No downtime with raid 5, you can swap out one drive as needed while the rest runs just fine.
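With Linux software RAID (md), that swap is a short sequence; a sketch with placeholder device names:

mdadm /dev/md0 --fail /dev/sdb1      # mark the dying member as failed
mdadm /dev/md0 --remove /dev/sdb1    # pull it from the array
mdadm /dev/md0 --add /dev/sdc1       # add the replacement; the rebuild starts automatically
cat /proc/mdstat                     # watch resync progress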
The challenge is a "bit of a backup" is risky. There's no back up if it's only a single copy of something, or even a single copy of something.
3-2-1 backups are really in time teach everyone the lesson that you don't buy storage, you buy backups, some that are more quickly accessible than others.
The cost of "maximizing" space with the drives I have, for example, is relatively trivial and simpler, its in the hundreds of dollars now instead of thousands. The upside is huge.
Solely trusting third party services is risky, and locally holding your data can be relatively managed well.
What I meant by "bit of a backup" was that you can actually restore files deleted since the last sync, since it's not always live like RAID and requires scheduled syncs to update the parity info. It's a compromise I make as a home user for my media server.
Consumer NASes have been around for 20 years, now, though, so I think most people would just mount or map their storage.
I bet Framework laptops would take this dynamic into overdrive, sadly I live in a country that they don't ship to.
It's on my (long-term) TODO list to build my own enclosure for a Framework motherboard, to make a portable server to carry around during long trips. Something compact that carries the punch of an i7. One day...
it's true that with a bit of education, you can get pretty far with old machines
GPU: NVIDIA GeForce GT 640M
Memory: 7818MiB
Those specs are showing their age for sure, but I run the TV at 1366x768, so they've been enough. The CPU has been an absolute champ. I'm sure running XFCE as the window manager has a lot to do with why it trucks along, XFCE is amazingly low footprint and snappy.
I hope Asahi for Mac Mini M4 becomes a thing. That machine will be an amazing little server 10 years from now.
Where I live (250 apartment complex in Sweden) people throw old computers in the electronics trash room, I scavenge the room every day multiple times when I take my dog out for a walk like some character out of Mad Max. I mix and match components from various computers and drop debian on them then run docker containers for various purposes. I've given my parents, cousins and friends Frankenstein servers like this. You'd be amazed at what people throw away, not uncommon to find working laptops with no passwords that log straight into Windows filled with all kinds of family photos. Sometimes unlocked iPhones from 5 years ago. It's a sick world we live in. We deserve everything that's coming for us.
I hope it reflects the fact that most people don't have a great understanding of IT and cyber security rather than a sign of a sick world ;)
I would have thought any reasonably recent laptop would be fine to leave plugged in indefinitely. Not to mention many won't have an easily removable battery anyway
Also when using an old laptop, the battery could be pretty beaten up (too many cycles or prolonged exposure to heat) or it could have been replaced by a cheap non-compliant alternative, making it harder to trust wrt fire risk. And if you have to buy a brand-new one to reduce that risk, it immediately changes all the economic incentives to use an old laptop (if you are gonna spend money, might as well buy something more suitable).
> many won't have an easily removable battery
That’s true, although I’d guess majority can still have the battery disconnected once you get access to the motherboard.
Both methods work under Asahi Linux on the ARM macs.
My homelab servers have Athlon 200GE CPUs in them: https://www.techpowerup.com/cpu-specs/athlon-200ge.c2073
They're x86 so most software works, AM4 socket so they can have the old motherboards I had in my PC previously, as well as the slower RAM from back then. At the same time they were dirt cheap on AliExpress, low TDP so I can passively cool them with heatsinks instead of fans and still powerful enough for self-hosting some software and using them as CI runners as well. Plus, because the whole setup is basically a regular PC with no niche components, the Linux distros I've tried on them also had no issues.
Honestly it's really cool that old components can still be of use for stuff like that.
Typically available regularly via ebay (or similar) as businesses rotate them out for new hardware.
The other week I picked up an i5 9400T Lenovo m720q with 16GB of memory for £100 delivered.
They practically sip power, although that's less true now I've shoved a 10Gb dual SFP NIC in there.
A friend told me about Linux. So I thought I had nothing to lose. What I didn't know is what I had to gain.
Ended up getting hooked. Grabbed computers out of the dumpster at my local community college and was able to piece together a few mildly decent machines. And even to this day I still recycle computers into random servers. Laptops and phones are usually great. They can't do everything but that's not the point. You'd be surprised what a 10 yo phone can still do.
I'm not trying to brag, but do want people to know that it's very possible to do a lot in absolutely nothing. I was living paycheck to paycheck at the time. It's not a situation I want anyone to go through, but there is a lot more free hardware out there than you think. People throw out a lot of stuff. A lot of stuff that isn't even broken! Everything I learned on was at least 5 years old at the time. You don't need shiny things and truth is that you don't get a lot of advantages from them until you get past the noob stage. It's hard, but most things start hard. The most important part is just learning how to turn it into play.
Uses a bunch of power but two orders of magnitude less in cash than buying another ECC ram desktop over 3 years.
If it blows up it cost me nothing other than an hour of part swapping.
> If it blows up it cost me nothing other than an hour of part swapping.
I think this is part of the magic sauce. When you're poor you're probably less risky with expensive stuff, and what's considered expensive is a low threshold.
But if it was a dumpster find... who cares?
I have a fairly high end M4 Macbook Pro but prefer to live as if I don't most of the time. All of us can take a big fall in life so it makes sense to keep one foot in both worlds.
It's about making the most of opportunity, something which is all around us and needs to be utilised.
I agree, but not all laptops can run without the battery being plugged in. I use an Acer E5 575 as a home lab and it can't run without the battery being plugged in, but interestingly the laptop decided to bypass the battery completely after it died. Operating systems detect no battery, but it's there, and without it the laptop won't boot.
https://www.reddit.com/r/selfhosted/comments/1kqrwev/im_addi...
It has Cloudflare Tunnel in front of it, but I previously have used nginx+letsencrypt+public_ip. It stores data on Cloudflare R2 but I've stored on S3 or I could store on a local NAS (since I access R2 through FUSE it wouldn't matter that much).
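The FUSE client isn't named here; one common way to get that kind of mount (not necessarily what this commenter uses) is rclone against an S3-compatible remote, roughly as below, where the remote and bucket names are placeholders and the remote is assumed to already exist via `rclone config`:

# mount the bucket in the background, caching writes locally before upload
rclone mount r2:my-bucket /mnt/r2 --daemon --vfs-cache-mode writes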
You have to rent:
* your domain name - and it is right that this is not a permanent purchase
* your internet access
But almost all other things now have tools that you can optionally use. If you turn them off the experience gets worse but everything still works. It's a much easier time than ever before. Back in the '90s and early 2000s, there was nothing like this. It is a glorious time. The one big difference is that email anti-spam is much stricter but I've handled mail myself as recently as 8 years ago without any trouble (though I now use G Suite).
I try to use LXCs whenever the software runs directly on Debian (Proxmox's underlying OS), but it's nice to be able to use a VM for stuff that wants more control like Home Assistant's HAOS. Proxmox makes it fairly straightforward to share things like disks between LXCs, and automated backups are built in.
It's hardly a requirement but if someone is just starting to learn, proxmox has lots of documentation on how to do things and the UI keeps you from footgunning yourself copy/pasting config code off websites/LLM too much.
I also like that Proxmox can be fully managed from the web UI. I'm sure most of this is possible with LXD on some distro, but Proxmox was the standard at the time I set it up (LXD wasn't as polished then).
But I also run Unraid on the main NAS server purely for its ZFS drive setup. Being able to throw in a bunch of drives of various sizes and brands on a home machine is pretty valuable and saves a huge amount of money.
Which is not exactly what you want from a gaming PC.
I can’t help but wonder if mainstream adoption of open source and self hosting will cause a regulatory backlash in favour of big corpo again (thinking of Bill Gates’ letter against hobbyists)
Great read!
I don't run into resource issues on the Pi4B, but resource paranoia (like range anxiety in EVs) keeps me on my toes about bandwidth use and encoding anyway. I did actually repurpose my former workstation and put it in a rackmount case a couple weeks ago to take over duties and take on some new ones, but it consumes so much electricity that it embarrasses me and I turned it off. Not sure what to do with it now; it is comically over-spec'd for a web server.
Most helpful thing to have is a good router; networking is a pain in the butt, and there's a lot to do when you host your own when you start serving flask servers or whatever. Mikrotik has made more things doable for me.
Every day, a script checks all IP addresses in the post-processed database to see if there are "clusters" on the same subnet. I think the rule is that if we see 3 visitors on the same subnet, we consider it a likely bot and retroactively mark those entries as bots in the database. Without taking in millions of visitors, I think this is reasonable, but it can introduce errors, too.
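The author's actual script isn't shown; a rough shell sketch of that kind of check for IPv4 /24s, where the input file name and the threshold of 3 are placeholders:

# list /24 prefixes that contain 3 or more distinct visitor IPs (one IPv4 address per line)
sort -u visitors.txt \
  | awk -F. '{print $1"."$2"."$3".0/24"}' \
  | sort | uniq -c \
  | awk '$1 >= 3 {print $2}'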
I've built tons of stuff in my career, but building the thing that can host all of it for myself has been hugely rewarding (instead of relying on hosting providers that inevitably start charging you)
I now have almost 15 apps hosted across 3 clusters:
One of the most cherished things I've built, and I find myself constantly coming back to improve and update it out of love.
So, again, start at the end - when disaster strikes - and consider what you think is worth keeping/bringing back up, and then plan backwards from there to make it so.