It needs to be stupid easy and reliable.
I really like what's happening in the ublue space, where folks are tweaking and optimizing distros for specific use cases (like Bazzite for gaming) and then sharing them.
NixOS does support that to an extent, but it certainly doesn't have the same community movement behind it that those projects do.
I want to self-host one of those FLOSS Pocket replacements, but I don't want to pay more than what these projects charge for hosting the software themselves (~$5). I am also considering self-hosting n8n. I don't have any sophisticated requirements. If it were possible, I would host it from my phone with a backup to Google Drive.
Any old PC with low idle power draw.
See https://www.servethehome.com/introducing-project-tinyminimic... for a good list of reviews.
Really, any machine from the last decade will be enough, so if you or someone you know has something lying around, go use that.
The two main points to keep in mind are power draw (older things are usually going to be worse here) and storage expandability options (you may not need much storage for your use case though). Worst case, you can plug in a USB external drive, but bear in mind that the USB connection might be a little flaky.
It won't give you 99.999% uptime, but for that stage in my life it was just stellar. I even had an open source project (Slackware fork) where I collaborated with someone else through that little machine.
Second-hand hardware is also a great way to get high-quality enterprise hardware. E.g. during the same time period I had a Dell workstation with two Xeon CPUs (not multi-core, my first SMP machine) and Rambus DRAM (very expensive, but the seller maxed it out).
I've looked into Wallabag, but perhaps there are others I don't know about?
Why did you go with Nextcloud instead of using something more barebones, for example a restic server?
As for Nextcloud vs a restic server, Nextcloud is heavier, but I do benefit from its extra features (like Calendar and Contact management) as well as use a couple of apps (Memories for photos is quite nice). Plus it's much more family friendly, which was a core requirement for my setup.
If you don't have a home lab, start one. Grab a 1L PC off of eBay. A ThinkCentre M720q or M920q with an i5 is a great place to start. It will cost you less than 200 bucks, and if you want to turn it into a NAS or an OPNsense box later, you can.
When it arrives toss Proxmox on it and get your toys from the community scripts section... it will let you get set up on 'easy mode'. Fair warning, having a home lab is an addiction, and will change how you look at development if you get into it deeply.
So I started buying junk on eBay and trying to connect it together and make it do things, and the more frustrated I got, the less able I was to think about literally anything else, and I'd spend all night poking around on Sourceforge or random phpBBs trying to get the damn things to compile or communicate or tftp boot or whatever I wanted them to do.
The only problem was eventually I got good enough that I actually _could_ keep the thing running and my wife and kid and I started putting good stuff on my computers, like movies and TV shows and music and pictures and it started to actually be a big deal when I blew something up. Like, it wasn't just that I felt like a failure, but that I felt like a failure AND my kid couldn't watch Avatar and that's literally all he wanted to watch.
So now I have two homelabs, one that keeps my family happy and one that's basically the Cato to my Clouseau, a sort of infrastructural nemesis that will just actually try to kill me. Y'know, for fulfillment.
Should I be using Terraform and Ansible?
I'm using Cursor to SSH in, and it constantly needs to run commands to get the "state" of the setup.
Basically I'm trying to do what I used to do on AWS: set up VMs on a private network talking to each other, with one gateway dedicated to the internet connection. But this is proving to be extremely difficult with the bash scripts generated by Cursor.
If anyone can help me continue my journey with self-hosting instead of relying on AWS, that would be great.
That is a pretty broad target. I would say start by setting up an OPNsense VM. From there you can do very little to start, just lock down your network so you can work in peace. But it can control your subnet traffic, and host your Tailscale, DHCP server, AdGuard Home, etc.
As somebody who was quite used to hosting my own servers, before I first set up my homelab I thought Proxmox would be the heart of it. Actually, OPNsense is the heart of the network; Proxmox is much more in the background.
I think Proxmox + OPNsense is great tech and you should not be adding in Terraform and Ansible, but I am not sure that using Cursor is helping you. You need a really good grasp of what is going on if your entire digital life is going to be controlled centrally. I would lean heavily on the Proxmox tutorials and forums, and even more on the OPNsense tutorials and forums. Using Cursor for less important things afterwards, or to clarify a fine point every once in a while, would make more sense.
Read the docs!
https://pve.proxmox.com/wiki/Network_Configuration#_choosing...
I think the networking experience for hosts is one of the worst things about Proxmox.
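For what it's worth, the setup that doc steers most people toward is a single Linux bridge on the management NIC. A minimal /etc/network/interfaces sketch (the interface name and addresses are placeholders, not anything from the article):

    # /etc/network/interfaces -- guests attach to vmbr0 and sit on the LAN
    auto lo
    iface lo inet loopback

    iface eno1 inet manual

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0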
Also, I found TrueNAS's interface a little more understandable. If Proxmox isn't jiving with you, you could give that a try
> This means keep one login per person, ideally with SSO, for as many services as I can
Truly S-tier target. Incredibly hard, incredibly awesome.
I've said for a long time that Linux & open source is kind of a paradox. It goes everywhere, it speaks every protocol. But only as a client, as an endpoint. The whole task of coordinating, of groupware-ing, of bringing networks together: that's all much harder, much less well defined.
Making the many systems work together, having directory infrastructure: that stuff is amazing. For years I assumed that someday I'd be running FreeIPA or some Windows-compatible directory service, but it sort of feels like maybe some OpenID-type world might finally be gelling into place.
I've been wondering whether a platform which connects techies to non-techies could help solve that [1], something like a systems integrator for individuals.
[1] https://needgap.com/problems/484-foss-are-not-accessible-to-...
On 2023-12-15 they published an update to OpenID Connect Core 1.0, called "errata set 2". Previously, it said that to verify an ID Token in a token response, the client needs to:
> * If the ID Token contains multiple audiences, the Client SHOULD verify that an azp Claim is present.
> * If an azp (authorized party) Claim is present, the Client SHOULD verify that its client_id is the Claim Value.
The new version is quite different. Now it says:
> * If the implementation is using extensions (which are beyond the scope of this specification) that result in the azp (authorized party) Claim being present, it SHOULD validate the azp value as specified by those extensions.
> * This validation MAY include that when an azp (authorized party) Claim is present, the Client SHOULD verify that its client_id is the Claim Value.
So core parts of the security of the ID Token are being changed in errata updates. What was the old purpose of azp? What is the new purpose of azp? Hard to tell. Did all the OIDC implementations in existence change to follow the new errata update (which didn't update the version number)? I doubt it.
https://openid.net/specs/openid-connect-core-1_0.html
https://web.archive.org/web/20231214085702/https://openid.ne...
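To make the difference concrete, here is a rough sketch (my reading of the two texts, not normative code) of what a client-side check looked like under the old wording versus what the errata now leaves optional; the SHOULDs are treated as hard checks for illustration:

    # `claims` is the decoded ID Token payload, `client_id` is this client's ID.

    def check_azp_old(claims: dict, client_id: str) -> bool:
        aud = claims.get("aud")
        audiences = aud if isinstance(aud, list) else [aud]
        if client_id not in audiences:
            return False
        # Old text: multiple audiences => an azp claim SHOULD be present
        if len(audiences) > 1 and "azp" not in claims:
            return False
        # Old text: if azp is present, it SHOULD equal our client_id
        if "azp" in claims and claims["azp"] != client_id:
            return False
        return True

    def check_azp_new(claims: dict, client_id: str) -> bool:
        aud = claims.get("aud")
        audiences = aud if isinstance(aud, list) else [aud]
        if client_id not in audiences:
            return False
        # Errata set 2: azp only matters if some extension put it there, and
        # even then comparing it to client_id is only a MAY, so skipping the
        # check entirely still "follows the spec".
        return True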
Or how about a more fundamental question: Why does the ID Token have a signature? What attack does that signature prevent? What use cases does the signature allow? The spec doesn't explain that.
I mean, both the old and new version (at least, the parts quoted upthread) are exclusively SHOULD and MAY with no MUST, so (assuming, for the SHOULDs, the implementer had what they felt was sufficiently good reason) literally any behavior is possible while following the spec.
And I agree with the feeling that open source is everywhere, up until a regular user picks up something. I think part of the paradox you mention is that every project is trying to work on their own thing, which is great, but also means there isn't a single entity pushing it all in one direction
But that doesn't mean we can't get to nice user experiences. Just in the self-hosting space, things have gotten way more usable in the last 5 years, both from a setup and usage perspective
I think "home labbing" fulfils much the same urge / need as the old guys (I hate to say it but very much mostly guys) met by creating hugely detailed scale model railways in their basement. I don't mean that in a particularly derogatory way, I just think some people have a deep need for pocket worlds they can control absolutely.
Each "stage" above is like incremental failure domains, unifi only keeps internet working, core vms add functionality (like unifi mgmt, rancher, etc), truenas is for "fun extras" etc. k8s lab has nothing I need to keep on it because distributed storage operators are still kind of explodey.
Like each part makes sense individually but when I look at the whole thing I start to question my mental health.
Imagine the simplest possible deployment you've cooked up.
Now imagine explaining to your mother how to maintain it after you're dead, when she needs to access the files on the service you set up.
Usually, self-hosting is not particularly hard. It's just conceptually way beyond what the average Joe is able to do. (Not because they're not smart enough, but simply because they never learned to and will not learn now, because they don't want to form that skill set. And I'm not hating on boomers; you can make the same argument with your hypothetical kids or spouse. The parents are just an easy placeholder, because you're biologically required to have them, which isn't the case for any other familial relationship.)
I assume most people know at least one person who would do this for them, in the event of their death?
- I have an NTFS-formatted external USB drive to which cron copies a snapshot of changed files daily, each day into a new folder (see the sketch after this list). Stuff like paperless, flat-file copies of Seafile libraries. The size of that stuff is small (<50GB), so duplication is cheap. In the event of death or dismemberment... that drive just needs to be plugged into another machine. There are also whole-library Seafile copies on our various laptops without the iterative changes. Sync breaks... keep using your laptop.
- I've been meaning to put a small PC/RPi at a friend's place/work with a similar hard drive.
- The email domain is renewed for a decade and is hosted on iCloud for ease of renewal. Although I am not impressed that it bounces emails when storage fills up with family members' photos, which happens regularly, so I may switch back to Migadu.
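A minimal sketch of that kind of nightly dated copy (the paths and mount point are placeholders, not the actual setup):

    # /etc/cron.d/usb-backup -- nightly copy of the important bits into a new
    # dated folder on the NTFS USB drive mounted at /mnt/usb
    30 2 * * * root rsync -a /srv/paperless /srv/seafile-libraries /mnt/usb/backup-$(date +\%F)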
The most important thing is to be able to get important data off of it and to have access to the credentials that facilitate that. You could set up something like Nextcloud to always sync important data onto other people's devices, to make part of that easier.
But I think another important aspect is making folks invested in the services. I don't expect my partner to care about or use most of them, but she does know as much as I do about using and automating Home Assistant (the little we've done). Things like that should keep working because of how core they can become to living our lives. It being a separate "appliance" and not a VM will also help manage that
But also that's a lot of hope and guessing. I think sitting down with whoever might be left with it and putting together a detailed plan is critical to any of that being successful
I'm more worried by home automation in my case ^^;
The chance of someone breaking into your house is sadly much more likely, and them choosing to take any computers they see is almost a certainty at that point.
Your drives are unencrypted. What's your next step if you come home tonight and find the house ransacked and the server gone?
I’ve been thinking of making a version of this that does a webhook but it doesn’t offer a huge amount of value over the email method.
While I haven't given all of my keys to my family, there's a clear route for them to get them, and written instructions how to do so. Along with an overview of the setup and a list of friends and colleagues they can turn to, this is enough for them to get access to everything and then decide if they want to carry on using it, or migrate the data somewhere else.
- Store your SSH public keys and host keys in LDAP (sketch after this list).
- Use real Solaris ZFS that works well or stick with mdraid10+XFS, and/or use Ceph. ZoL bit me by creating unmountable volumes and offering zero support when their stuff borked.
- Application-notified, quiesced backups to some other nearline box.
- Do not give all things internet access.
- Have a pair (or a few) bastion jumpboxes, preferably one of the BSDs like OpenBSD. WG and SSH+Yubikey as the only ways inside, both protected by SPA port knocking.
- Divvy up hardware with a type 1 hypervisor and run Kubernetes inside guests on those.
- Standardize as much as possible.
- Use configuration and infrastructure management tools checked into git. If it ain't automated, it's just a big ball of mud no one knows how to recreate.
- Have extra infrastructure capacity for testing and failure hot replacements.
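For the first point, one common way to do it is sshd's AuthorizedKeysCommand plus the openssh-lpk sshPublicKey attribute; a rough sketch, with the LDAP host and base DN made up:

    # /etc/ssh/sshd_config (excerpt)
    AuthorizedKeysCommand /usr/local/bin/ldap-ssh-keys
    AuthorizedKeysCommandUser nobody

    #!/bin/sh
    # /usr/local/bin/ldap-ssh-keys: sshd passes the username as $1; print that
    # user's sshPublicKey attributes from the directory
    ldapsearch -x -LLL -o ldif-wrap=no -H ldaps://ldap.example.lan \
        -b "ou=people,dc=example,dc=lan" "(uid=$1)" sshPublicKey \
        | sed -n 's/^sshPublicKey: //p'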
Call me security-paranoid, but "here are the details of my home lab"... WHY? If, god forbid, someone gets in, they could identify the targets in an instant...
I just don't like the lock-in that you get with Synology. Plus I do enjoy tinkering with these things, so I wanted to put together something that balances usability and complexity while minimizing that lock-in.
I want to have a block of gunk on the LAN, and to connect devices to the LAN and be able to seamlessly copy that block to them.
Bonus: any gunk I bring home gets added to the block.
First part works with Navidrome: I just connect over the LAN from my phone with Amperfy and check the box to cache the songs. Now my song gunk is synced to the phone before I leave home.
This obviously would fit a different mindset. Author has a setup optimized for maximum conceivable gunk, whereas mine would need to be limited to the maximum gunk you'd want to have on the smallest device. (But I do like that constraint.)
> My main storage setup is pretty simple. It's a ZFS pool with four 10TB hard drives in a RAIDZ2 data vdev with an additional 256GB SSD as a cache vdev. That means two hard drives can die without me losing that data. That gives me ~19TB of usable storage, which I'm currently using less than 10% of. Leaving plenty of room to grow.
I would question this when buying a new system and not having a bunch of disks laying around... a RAID-Z2 with four 10TB disks offers the same space as a RAID1 with two 20TB disks. Since you don't need the space NOW, you could even go RAID1 with two 10TB disks and grow it by replacing them with two 20TB disks as soon as you need more. This, in my opinion, would be more cost effective, since you only need to replace 2 disks instead of 4 to grow. This would take less time, and since prices per TB are probably getting lower over time, it could also save you a ton of money. I would also say that the ability to lose 2 disks won't save you from needing a backup somewhere...
Also agree, RAID isn't a replacement for a backup. I have all my important data on my desktop and laptop with plans for a dedicated backup server in the future. RAID does give you more breathing room if things go wrong, and I decided that was worth it
Two drives are easy to replace, easy to spare, consume less power and are quieter than 4+.
The only advantage I see in RAID5/6 is if you have a 25TB storage requirement within 3 years.
As another data point, my NAS runs 4x4TB drives. When I bought them new some 2-3 years ago, all at the same time, they were cheaper than buying the equivalent 2x8TB.
My situation was somewhat different, though, since I'm running raidz1. But I did consider running a mirror, specifically in order to ease upgrading the capacity. However, I didn't expect to fill them /that/ quickly and I was right: yesterday it was still less than 70% full.
Estimating storage growth is hard, but when you monitor it regularly, it saves you a lot of money.
- one single machine
- nginx proxy
- many services on the same machine; some are internal, some are supposed to be public, all accessible via the web!
- internal ones have a humongous password for HTTP basic auth that I store in an external password manager (Firefox's built-in one); a sketch of this split follows at the end of this comment
- public ones are either public or have Google OAuth
I coded all of them from scratch as that's the point of what I'm doing with homelabbing. You want images? browsers can read them. Videos? Browsers can play them.
The hard part is the backend for me. The frontend is very much "90s html".
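A rough sketch of what that internal/public split can look like in nginx (hostnames, ports, and cert paths are placeholders, not the poster's actual config):

    # Internal vhost: HTTP basic auth in front of a local backend.
    # Public vhosts look the same, minus the two auth_basic lines.
    server {
        listen 443 ssl;
        server_name notes.example.lan;
        ssl_certificate     /etc/nginx/certs/example.crt;  # placeholder paths
        ssl_certificate_key /etc/nginx/certs/example.key;

        auth_basic           "private";
        auth_basic_user_file /etc/nginx/htpasswd;  # generated with htpasswd(1)

        location / {
            proxy_pass http://127.0.0.1:8080;
        }
    }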
Recently I found out Gitea or Forgejo can act as an OAuth provider. And since these support LDAP, you can for example deploy a Samba AD and set it up as an authentication source for Gitea/Forgejo. If you enable the OAuth feature, you can connect stuff like Grafana and log in with your Samba AD credentials.
To me this is more convenient than running a dedicated auth service, considering Forgejo can also provide git, wiki, docker registry (also authenticated) and other functions. It's such an underrated piece of software and uses so few resources.
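For anyone wanting to try it, the Grafana side is just the generic OAuth block pointed at Forgejo's endpoints. A sketch with a made-up hostname; the client ID and secret come from an OAuth2 application you create in Forgejo's settings:

    # grafana.ini (excerpt)
    [auth.generic_oauth]
    enabled = true
    name = Forgejo
    client_id = <id from the Forgejo OAuth2 application>
    client_secret = <secret from the Forgejo OAuth2 application>
    scopes = openid profile email
    auth_url = https://git.example.lan/login/oauth/authorize
    token_url = https://git.example.lan/login/oauth/access_token
    api_url = https://git.example.lan/login/oauth/userinfo
    allow_sign_up = true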
Homelabbing is fun :')
At this rate, if I keep seeing good articles about NixOS, I might actually switch for real, haha!
Nobody uses the local Nextcloud because they just don't think they can rely on it; it doesn't always work from their perspective and is too finicky to use, because it needs an external app (Tailscale).
This can only be fixed when the app itself can trigger a VPN connection, and I don't think that is going to happen any time soon.
Also, how do you configure Cloudflare for a road warrior setup? How do you track ever changing dynamic IPs? As mentioned, all I need is a Wireguard client and I’m golden.
That's a fair point, but for my use case, I feel comfortable enough with CloudFlare given the trade-offs.
> You also need to trust that Cloudflare doesn't make mistakes, either.
I think the chances of CloudFlare making a mistake are much lower than me or any other individual Developer.
> Cloudflare for a road warrior setup? How do you track ever changing dynamic IPs?
I think you need to read the docs. All of that works without any extra config when using tunnels.
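Right: the tunnel keeps an outbound connection open from the box, so there is no inbound port or changing IP to track. A minimal sketch of the cloudflared ingress config (hostnames and the tunnel ID are placeholders):

    # /etc/cloudflared/config.yml -- cloudflared dials out to Cloudflare, so
    # the home IP can change freely
    tunnel: <tunnel-uuid>
    credentials-file: /etc/cloudflared/<tunnel-uuid>.json
    ingress:
      - hostname: wiki.example.com
        service: http://localhost:8080
      - service: http_status:404   # required catch-all rule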
mirdaki•8h ago
I wanted to share my blog post walking through how I finally built a setup that I can just be happy with and use. It goes over my goals, requirements, tech choices, layout, and some specific problems I've resolved.
Where I've landed of course isn't where everyone else will, but I hope it can serve as a good reference. I’ve really benefited from the content and software folks have freely shared, and hope I can continue that and help others.
redrove•7h ago
The reason I ask is I homelab “hardcore”; i.e. I have a 25U rack and I run a small Kubernetes cluster and ceph via Talos Linux.
Due to various reasons, including me running k8s in the lab for about 7 years now, I've been itching to change and consolidate and simplify, and every time I think about my requirements I somehow end up where you did: Nix and ZFS.
All those services and problems are very very familiar to me, feel free to ask me questions back btw.
MisterKent•6h ago
What does your persistent storage layer look like on Talos? How have you found its hardware stability over the long term?
redrove•6h ago
Well, for its own storage: it's an immutable OS that you can configure via a single YAML file, it automatically provisions appropriate partitions for you, or you can even install the ZFS extension and have it use ZFS (no zfs on root though).
For application/data storage there's a myriad of options to choose from[0]; after going back and forth a few times years ago with Longhorn and other solutions, I ended up at rook-ceph for PVCs and I've been using it for many years without any issues. If you don't have 10gig networking you can even do iSCSI from another host (or nvmeof via democratic-csi but that's quite esoteric).
>How have you found its hardware stability over the long term?
It's Linux so pretty good! No complaints and everything just works. If something is down it's always me misconfiguring or a hardware failure.
[0] https://www.talos.dev/v1.11/kubernetes-guides/configuration/...
mirdaki•5h ago
But once I fully understood how its features really make it easy for you to recover from mistakes, and how useful the package options available from nixpkgs are, I decided it was time to dive in and figure it out. Looking at other folks' Nix configs on GitHub (especially for specific services you're wanting to use) is incredibly helpful (mine is also linked in the post).
I certainly don't consider myself to be a Nix expert, but the nice thing is you can do most things by using other examples and modifying them till you feel good about it. Then over time you just get more familiar with it and grow your skill.
Oh man, having a 25U rack sounds really fun. I have a moderate-size cabinet where I keep my server, desktop, a UPS, a 10Gig switch, and my little fanless Home Assistant box. What's yours look like?
I should add it to the article, but one of my anti-requirements was anything in the realm of high availability. It's neat tech to play with, but I can deal with downtime for most things if the trade off is everything being much simpler. I've played a little bit with Kubernetes at work, but that is a whole ecosystem I've yet to tackle
redrove•4h ago
Those are my chief complaints as well, actually. I never quite got to the point where I grasped how all the bits fit together. I understand the DSL (though the errors are cryptic as you said) and the flakes seemed recommended by everyone yet felt like an addon that was forgotten about (you needed to turn them on through some experimental flag IIRC?).
I'll give it another shot some day, maybe it'll finally make sense.
>Oh man, having a 25U rack sounds really fun. I have a moderate-size cabinet where I keep my server, desktop, a UPS, a 10Gig switch, and my little fanless Home Assistant box. What's yours look like?
* 2 UPSes (one for networking one for compute + storage)
* a JBOD with about 400TB raw in ZFS RAID10
* a little intertech case with a supermicro board running TrueNAS (that connects to the JBOD)
* 3 to 6 NUCs depending on the usage, all running Talos, rook-ceph cluster on the NVMEs, all NUCs have a Sonnet Solo 10G Thunderbolt NIC
* 10 Gig unifi networking and a UDM Pro
* misc other stuff like a zima blade, a pikvm, shelves, fans, ISP modem, etc
I'm not necessarily thinking about downsizing but the NUCs have been acting up and I've gotten tired of replacing them or their drives so I thought I'd maybe build a new machine to rule them all in terms of compute and if I only want one host then k8s starts making less sense. Mini PCs are fine if you don't push them to the brim like I do.
I'm a professional k8s engineer I guess, so on the software side most of this comes naturally at this point.
raybb•5h ago
https://coolify.io/
colordrops•3h ago
My goal is to have a small, nearly zero-conf, Apple-device-like box that anyone can install by just plugging it into their modem and then going through a web-based installation. It's still very nascent, but I'm already running it at home. It is a hybrid router (think OPNsense/pfSense) + app server (Nextcloud, Synology, YunoHost, etc.). All config is handled through a single Nix module. It automatically configures dynamic DNS, Let's Encrypt TLS certs, and subdomains for each app. It's got built-in ad blocking and Headscale.
I'm working on SSO at the moment. I'll take a look at your work and maybe steal some ideas.
The project is currently self-hosted in my closet:
https://homefree.host
ultra2d•2h ago
I have dabbled before with FreeIPA and other VMs on a Debian host with ZFS. For simplicity, I switched to running Seafile with encrypted libraries on a VPS and backing that up to a local server via ZFS send/receive. That local server switches itself on every night, updates, syncs, and then goes back to sleep. For additional resiliency, I'm thinking of switching to ZFS on my Linux desktop (currently Fedora), fully encrypted except for Steam. Then sync that every hour or so to another drive in the same machine, and sync less frequently to a local server. Since the dataset is already encrypted, I can either sync to an external drive or some cloud service. Another reason to do it like this is that storing a full photo archive within Seafile on a VPS is too costly.
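A minimal sketch of that kind of replication, assuming ZFS native encryption on the source dataset (pool, dataset, and host names are placeholders): sending with --raw keeps the stream encrypted, so the receiving machine never needs the key unless you actually want to mount the data there.

    # Take today's snapshot, then send everything since the last replicated one.
    zfs snapshot tank/seafile@2025-01-01
    zfs send --raw -I tank/seafile@2024-12-31 tank/seafile@2025-01-01 \
        | ssh backup.local zfs receive -u backup/seafile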