Reliably exposing a homelab to the broader internet is a little tricky. Cloudflare Tunnels mostly does the trick, but it can only expose one port at a time, so the setup is somewhat annoying.
Some family members are behind CGNAT, and I'm not sure if their ISP has the option to move out from behind that, but since they don't self-host it's probably slightly more secure from outside probes. We're still able to privately share communications via my VPN hub to which they connect, which allows me to remotely troubleshoot minor issues.
I haven't looked into Cloudflare Tunnels, but I haven't felt the need to.
I run cloudflared on one machine, and it proxies one subdomain to one port and another to a Unix socket (it could just as well have been a second port, no problem).
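For reference, the multiple services are just ingress rules in the cloudflared config. Roughly like this (hostnames, socket path, and the tunnel ID are placeholders, not my real setup):

    # /etc/cloudflared/config.yml
    tunnel: <tunnel-uuid>
    credentials-file: /etc/cloudflared/<tunnel-uuid>.json
    ingress:
      # first matching hostname wins
      - hostname: app.example.com
        service: http://localhost:8080
      - hostname: sock.example.com
        service: unix:/run/myapp/myapp.sock
      # a catch-all rule is required at the end
      - service: http_status:404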
Also Proxmox was called out as the only choice when that is very much not the case. It is a good choice for sure, but there are others.
A 'power outage' incident doesn't seem to have been mitigated. My homelab has had evolving mitigations: I cut a hole in the side of a small UPS so I could connect it to a larger (car) battery for longer uptime, which got replaced by a dedicated inverter/charger/transfer-switch attached to a big-ass AGM caravan battery (which on a couple of occasions powered through two-to-three hour power outages), and has now been replaced with these recent LiFePo4 battery power station thingies.
Of course, it's only a homelab, there's nothing critically important that I'm hosting, but that's not the dang point, I want to beat most of "the things", and I don't like having to check that everything has rebooted properly after a minor power fluctuation (I have a few things that mount remote file stores and these mounts usually fail upon boot due to the speed at which certain devices boot up - and I've decided not to solve that yet).
Can you share more about this? I have an APC Back-UPS Pro 1500VA (BR1500G-GR) and it would be nice to know if this is possible with that one as well.
It was a crude mod. Take the cover off and remove the existing little security alarm battery, use tin snips to cut a hole in the side of the metal UPS cover (this was challenging, it was relatively thick metal, I'd recommend using an angle grinder in an appropriately safe environment far away from the internals of the UPS), and feed the battery cables out through the hole. I probably got some additional cables with appropriately sized terminations to effectively extend the short existing ones (since they were only designed to be used within the device). And then connect it up to a car battery.
Cover any exposed metal on the connectors with heat-shrink tubing or electrical tape. Be very careful with exposed metal around it anywhere, especially touching the RED POSITIVE pole of the battery. Get a battery box - I got one for the big-ass AGM battery.
Test it out on a laptop that's had its battery removed or disconnected, and that, just in case, you don't care too much about losing.
Get a battery charger that can revive a flat battery, and do a full refresh/renew charge on the car battery once a year or after it's had to push through a power outage that may have used more than a few percent of its capacity.
Personally, I think it's safer and less hassle to go for a LiFePO4 (LFP) power-station-style device that has UPS capabilities. LFP batteries have 3,000-ish cycle lifetimes, which could be nearly ten years with daily use.
Why not just drill a hole? Drill bits large enough to drill a hole for 120A cables exist.
> Get a battery charger that can revive a flat battery, and do a full refresh/renew charge on the car battery once a year or after it's had to push through a power outage that may have used more than a few percent of its capacity.
If you're going this route I'd recommend a marine battery. Car batteries don't handle deep cycles well, and, TBH, UPS chargers aren't designed for failed car batteries (nor marine batteries) and can possibly cause an explosion if the lead-acid battery has a few dead cells.
The real worry is that these are already a fire hazard, and if something goes wrong, insurance will blame the mod even if it wasn't at fault.
Maybe it’s because when I was a kid, I fancied myself an experimenter, and I had a wire ripped off a lamp, and touched the two ends together…
The discharging circuitry is fine, but the _charger_ might overheat, because a larger battery can draw more current while charging, for longer periods. I discovered that when I tried to attach a "lead-acid compatible" LFP battery to a UPS.
These days, it's just easier to buy a dedicated rack-mountable LFP battery originally meant for solar installations, an inverter/charger controller, and a rectifier. The rectifier output serves as the "solar panel" input for the battery. You get a double-conversion UPS with days-long holdover time for a fraction of the cost of a lead-acid UPS.
If you don't recognize the terms "sealed", "lead-acid", "battery", "capacity", or "voltage" then you shouldn't do this.
About the only advantage of it is that it's cheap (free if you find a UPS in the trash with an already dead battery), but those cheap UPSs make really crap quality power, and for some of them the only reason they don't overheat is because their stock battery is so small. It's a bit like how you can cook a whole turkey in the microwave, but you probably don't want to.
Do you have power outages often? Even if I have one, my services can come up automatically without doing anything, when the power is restored.
Speaking of... does anyone know how to speed up wifi router boot time? Stupid thing takes 5 minutes almost.
This is probably due to the access point having minimal hardware for the task, and its startup not being particularly well optimised, so "buy a better AP/router" is likely the most practical answer.
As an alternative, you could buy a small device (perhaps a recent Raspberry Pi model) with more oomph (or add this task to an existing machine in your homelab setup), give it a wireless NIC if it doesn't already have one, and run hostapd to turn it into an AP. That might start up a lot faster.
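If you go the hostapd route, a bare-bones config is roughly the following (SSID, passphrase and interface name are placeholders); you still need DHCP and bridging/NAT on top of it, and the NIC has to support AP mode:

    # /etc/hostapd/hostapd.conf -- minimal 2.4 GHz WPA2 access point
    interface=wlan0
    driver=nl80211
    ssid=homelab-ap
    hw_mode=g
    channel=6
    wpa=2
    wpa_key_mgmt=WPA-PSK
    wpa_passphrase=change-me-please
    rsn_pairwise=CCMP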
Maybe try using OpenWRT if your router hardware is supported
If your OS is using systemd, you can fix that pretty easily by adding an After=network-online.target (so the unit doesn't even try to start while there is no networking yet) and an ExecCondition shell script [1] that actually checks whether NFS/SMB on the target host is alive, as an override to the fs mounts.
Add a bunch of BindsTo overrides to the mounts and the services that need the data, and you have yourself a way to stop the services automatically when the filesystem goes away.
I've long been in the systemd hater camp, but honestly, not having to wrangle with once-a-minute cronjobs to check for issues is actually worth it.
[1] https://forum.manjaro.org/t/for-those-who-use-systemd-servic...
It doesn't conflict with anything you've said, just a very handy document.
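As a rough sketch of what such an override could look like for a service that needs the mounted data (unit names, mount point and script path are made up, not from the linked post):

    # /etc/systemd/system/mediaserver.service.d/wait-for-nas.conf
    [Unit]
    # don't even try before the network is up, and stop/start together with the mount
    After=network-online.target mnt-media.mount
    Wants=network-online.target
    BindsTo=mnt-media.mount

    [Service]
    # skip startup (without marking the unit failed) while the NAS doesn't answer;
    # the script just exits non-zero when the NFS/SMB host is unreachable
    ExecCondition=/usr/local/bin/check-nas-alive.sh

The mount unit itself gets a similar After=network-online.target drop-in, and you reload with systemctl daemon-reload afterwards.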
So even if your local node could transmit, none of the others could, and they can't buffer either.
To mitigate a power outage, you would need both power and a cellular connection, and that connection would only be good for 2-3 hours (cell tower backup batteries), and it would require something like a Cradlepoint.
What I don't want, though, is a power outage taking my server offline while I'm on holiday, leaving me unable to access my services at all.
My ISP-provided router supports WireGuard, so I can use that to connect to my KVM and send the Wake-on-LAN packets.
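The magic packet itself is trivial to send from any always-on box (it has to go out as a broadcast on the target's LAN segment, so over WireGuard you'd typically run this on the router or the KVM). A small Python sketch, with a made-up MAC and broadcast address:

    import socket

    def wake(mac: str, broadcast: str = "192.168.1.255", port: int = 9) -> None:
        # magic packet: 6 bytes of 0xFF followed by the target MAC repeated 16 times
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        packet = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(packet, (broadcast, port))

    wake("aa:bb:cc:dd:ee:ff")  # example MAC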
But if you can't explain the difference between voltage and current, or know what "short circuit" means, then this isn't something to poke at.
I don't know why nobody sells these as COTS yet.
I think that says it all. It's gone beyond practicality for me, and I'm OK with that. I'm also satisfied with the current setup; I don't need to spend more.
I have a couple of EcoFlow and Bluetti units, and a Segway LFP battery. They all work fine so far.
As an example, I use cloudflare tunnel to point to an nginx that reverse proxies all the services, but I could just as well point DNS to that nginx and it would still work. I had to rebuild the entire thing on my home server when I found that the cheap VPS I was using was super over-provisioned ($2/mo for 2 Ryzen 7950 cores? Of course it was) and I had this thing at home anyway, and this served me well for that use-case.
When I rebuilt it, I was able to get it running pretty quickly, and each piece could be done incrementally: I could run without the Cloudflare tunnel and then add it to the mix, and I could run without R2 and then switch file storage to R2 (because I used FUSE s3fs to mount R2), and so on.
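Mounting R2 with s3fs is just the usual S3-compatible incantation, something along these lines (bucket name, account ID and paths are placeholders):

    # ~/.passwd-s3fs contains "ACCESS_KEY_ID:SECRET_ACCESS_KEY", chmod 600
    s3fs my-bucket /mnt/r2 \
      -o url=https://<account-id>.r2.cloudflarestorage.com \
      -o passwd_file=${HOME}/.passwd-s3fs \
      -o use_path_request_style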
I also used to over-engineer my homelab, but I recently took a more simplistic approach (https://www.cyprien.io/posts/homelab/), even though it’s probably still over-engineered for most people.
I realized that I already do too much of this in my day job, so I don’t want to do it at home anymore.
You will end up paying much more for your services, along with spending a ton of time maintaining them (and if you don't, you will probably find yourself on the receiving end of a 0-day hack sometime).
In Northern/Western Europe, where power costs around €0.3/kWh on average, just the power consumption of a simple 4 bay NAS will cost you almost as much as buying Google Drive / OneDrive / iCloud / Dropbox / Jottacloud / Whatever.
A simple Synology 4 bay NAS like a DS923+ with 4 x 4TB Seagate Ironwolf drives will use between 150 kWh and 300 kWh per year (100% idle vs 100% active, so somewhere in between), which will cost you between €45 and €90 per year, and that's power alone. Factoring in the cost of the hardware will probably double that (over a 5 year period).
It's cheaper (and easier) to use public cloud, and then use something like Cryptomator (https://cryptomator.org/) to encrypt data before uploading it. That way you get the best of both worlds, privacy without any of the sysadm tasks.
Edit: I'll just add, as you grow older, you come to realize that time is a finite resource, and while money may seem like it is finite, you can always make more money.
Don't spend your time hunched over servers. Spend it doing things you love with people that matter to you. Eventually those people won't be there anymore, and the memories you make with those people will matter far more to you in 20 years, than the €20/month you paid for cloud services.
All this assuming that you even need that much storage, which most people definitely do not.
In any case, you should always make backups regardless of where your data is stored. At home, your biggest threat is loss of data, probably through hardware malfunction, house fires or similar.
In the cloud your biggest threat is not loss of data but loss of access to data. Different scenarios but identical outcomes.
Backup solves both scenarios, RAID doesn't solve any of them, but sadly, many people think "oh but I've got RAID6 so surely I cannot lose data".
Of course, syncing a NAS between yourself and a friend or family member's home may be the better solution over cloud options.
My personal use case involves photos and documents, all things I cannot easily recreate (photos less so). Those are what matter to me, and storing them in the cloud means I not only get redundancy within a single data center, but also geographical redundancy, as many cloud providers use erasure coding to make your data available across multiple data centers.
Everything else will be just fine on a single drive, even a USB drive, as everything that originated on the internet can most likely be found there again. This is especially true for media (purchased, or naval acquisition). Media is probably the most replicated data on the planet, possibly behind only the Bible and the IKEA catalogue.
So, back to the important data: I can easily fit an entire family of 4 into a single 2TB data plan. That costs me somewhere around €85-€100 per year, for 4 people, and it works no matter what I do. I no longer need to drag a laptop with me on vacation, and I can basically just say "fuck it" and go on vacation for 2 weeks.
If you need to commute to work daily and you're concerned about the cost, you don't really care whether you're comparing a city car, a sports car, or the bus; one tops out at 80 km/h and another can do 230 km/h, but if all you're interested in is the price, that difference doesn't matter.
Obviously as your storage needs increase, so will cloud costs, but unless you're a professional photographer, I'm guessing 2TB will be more than enough for most people.
Again, I'm not talking about people trying to run their own media server on pirated content and saving money that way. In my book that's comparable to saving money by robbing a bank. You're not saving anything, you're breaking the law, and 9 out of 10 times it's cheaper to steal someone else's bike than it is to take a taxi home.
I'm talking about actual storage for data you actually own, and possibly even data you have created yourself. Anything that came from the internet can be found on the internet again, whether purchased or naval acquisition.
"2 TB ought to be good for everyone" is hilarious. There are so many people I know who would fill a 512 GB phone in 1-2 years with photos and videos.
Maybe you do not have a use case or situation where larger storage is needed, but it is strange to assume everyone is in the same bucket.
I would that this were true. I guess it depends on what you mean by "the internet", but there's a reason the Internet Archive exists. Sure, you don't need to back up your recent Firefox installer or your Debian ISO but lots of important and valuable data can't be found on the internet anymore. There are very valid reasons that groups like Archiveteam [1] do what they do, not to mention recent headlines like individuals losing access to their entire cloud storage [2].
[1] https://wiki.archiveteam.org/index.php/Main_Page [2] https://www.theregister.com/2025/08/06/aws_wipes_ten_years/
Google One for 10 TB is €274.99/mo (at least in my country), so you'd recoup the entire NAS price and subscription cost within a few months, let alone years.
There just aren't compelling public cloud offerings for large sizes (my NAS has 30TB capacity and I'm using 18 right now), and even if you jump through the more complex hoops with something like S3, you still get billed more than it's worth. Public cloud is meant for public files; there's a lot of cost you're paying for things you don't need, like being fast to access from everywhere.
I, for one, don't want to have Google, etc. as a dependency[1], so I will pay some energy cost to do that.
1: see: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
VPS are very expensive for what you get. If you have the capital, doing it yourself saves you money very quickly. It's not rare to pay $50 for a semi-decent VPS, but for $2000 you would get an absolute beast that would last 10 years at the very least.
With Docker, maintenance is basically zero and unused services are stopped or restarted with 1 command.
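The "one command" part is basically the standard Compose workflow (service name is just an example):

    docker compose pull && docker compose up -d   # update images and recreate what changed
    docker compose stop jellyfin                  # park a service you don't need right now
    docker compose start jellyfin                 # bring it back later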
I've also self-hosted for decades, but it turns out I don't really need that much, at least not publicly accessible.
I basically just need mail, calendar, file storage and DNS ad-blocking. I can get mail/calendar/file storage with pretty much any cloud provider (and no, there is no privacy when it comes to mail, there is always another participant in the conversation), and for €18/year I can get something like NextDNS, Control D, or similar.
For reference, a Raspberry Pi 4 or 5 will use around 50 kWh per year, which (again, in Europe) translates to €15/year. For just €3 more per year I get redundancy and support.
I still run a bunch of stuff at home, but nothing is open to the public internet. Everything runs behind a WireGuard VPN, and I have zero redundant storage. My home storage is used for backing up cloud storage, as well as storing media and client backups. And yes, I also have another cloud that I back up to.
My total cloud bill is around €20-€25/month, with 8TB of storage, ad blocking DNS, mail/calendar/office apps and even a small VPS.
Not to mention that I love them.
On my homelab, I update everything every quarter and it takes about 1 hour, so 4 hours a year is pretty reasonable. Docker helps a lot with this.
And I’ve almost never run into trouble in years, so I have very few unexpected maintenance tasks.
EDIT: I am referring to a homelab that is only accessible for private purposes through a VPN.
If you only access your homelab over VPN or similar, then by all means, update whenever you feel like it, but if you expose your services to the internet, you want to be damned sure there are no vulnerabilities in them.
The internet of today is not like it was 20 years ago. Today you're constantly being hammered by bots that scan every single IPv4 address for open ports, and when they find something they record it in a database, along with information on what's running on that port (provided that information is available).
When (not if) a vulnerability for a given service is discovered, an attacker doesn't need to "hunt & peck" for vulnerable hosts, they already have that information in a database, and all they need to do is start shooting at their list of hosts.
You can use something like shodan.io to see what a would-be attacker might see (you can check your own IP with "ip:xxx.xxx.xxx.xx").
Try entering something like Synology, Proxmox, Truenas, Unraid, Jellyfin, Plex, Emby, or any of the other popular home services.
RSS feeds FTW
I got this setup automatically with Renovate: https://github.com/shepherdjerred/homelab/blob/main/src/cdk8...
It’s also an excuse for me to stay in most summer days.
As for electric heating, that is true in 1:1 resistive heating scenarios, but I assume you guys are also using heat pumps these days, and while you still get the heat "for free", it will not be anywhere near as efficient as your heat pump.
Yes, it's probably peanuts in the grand scheme of things; I know our air-to-water heat pump in Denmark uses around 4500-5500 kWh per year, so adding another 100 kWh probably won't mean much.
Even at the high end estimate the homelab is giving you several times the storage for the same cost.
Very few people I know have use for that much storage. Yes, you can download the entire Netflix catalogue, and that will of course require more storage, and no, you probably shouldn't put it in the cloud (or even back it up, or use redundant storage for it).
Setting up your own homelab to be your own Netflix, but using pirated content, is not really a use case I would consider. I'm aware people are doing it, and I still think it's stupid. They're saving money by "stealing" (or at least breaking laws), which is hardly a lifehack.
My wife is a professional photographer, and while we do archive most of her RAW files somewhere else, pretty much everything HEIC, JPEG or any other compressed format lives in our main cloud.
We have 2.2TB in total for “direct storage”, and we’re currently using around 1.5TB, and that’s including myself and our kids.
My personal photo library has just short of 90,000 photos, and about 5,000 videos. My wife’s library is roughly twice that. I have no idea how many photos the kids have, but they each take up around 200GB for photos.
And then we have backups, which actually take up about 1TB per person, mostly because that's the space I've allocated for each, so history just grows until it's filled. Photos ideally won't change much. We back up originals along with XMP metadata for edits, so the photos stay the same, and changes are described in easily compressed text files. Backups of course also have deduplication enabled.
My friend, in her mid-20s, uses nearly 3 TB of Apple cloud space for photos and videos, mostly of her kids and dog.
I don't even film much, but I'm using about a terabyte.
There’s always a “right tool for the job”. Sometimes it’s the cloud. Sometimes it isn’t. The article is for people who found the cloud isn’t a good fit and need something at home.
A lot of people have large collections of music or movies. Or want to keep full control over some data no matter the cost. Or need it to work without internet. There are many solid reasons to avoid the cloud and use your own solution.
You are arguing that your original assertion isn't wrong, so people's stated needs must be wrong. Because you have different needs, others must be doing it wrong. And this undermines everything else you say.
Yes.
> (excluding things you've downloaded from the internet)
Why on earth would I do that? My storage includes things I downloaded from the internet that are not there anymore/hard to find/now paywalled. If you were thinking the only thing to download from the internet is pirated media - I haven't included that in my >2TB assessment.
Who are you to tell people how to spend their time? Let people have hobbies ffs
It isn’t unreasonable to want some alone time.
Once your user count goes beyond 1, you suddenly have an SLA, as people are dependent on your services. Like it or not, you are now the primary support staff of your local cloud business.
The more users you get, the more time you will need to spend to fix problems.
At which point does it go from a hobby to a second job?
The thing is, it's worth it to learn. Do you know the basics of how to set up a completely redundant environment? There's no conceptual difference between setting one up at home by using consumer equipment and setting it up in a data center. You can get pretty capable equipment (Mikrotik) for less. The enterprise stuff has more configuration options, but it's doubtful that you'll use most of them.
Setting up backup WANs, redundant routing, DNS, power, etc. is fun. Setting up redundant load balancers, backend services, databases, etc. is also fun. It's not hard to do, it's just hard to get it all right. There are probably a zillion configuration parameters you can mess with, and only a few sets actually work. Unfortunately, the sets that work in your home won't be the ones that will work in production, but you could possibly run load tests etc to simulate a real environment (though simulating multiple clients from multiple endpoints is harder than you think).
And of course, getting production equipment is hard. Nobody has 2 F5s lying around. And you really need at least 4 F5s, because you have redundant locations. That's a lot of cashola. And in most environments you wouldn't want some random person messing around with the production (or test) F5s. It's the same with NAS, VM servers, docker registries, etc.
I suspect getting the whole end-to-end setup isn't something people experience anymore, because small companies have (or at least should have) moved to the cloud by now.
However, it also depends on how you use that data. In my case, I'm a Sunday photographer, so I tend to wrangle multiple GB of data at a time. I usually edit my photos locally, but I sometimes want to revisit older stuff. I can download it, but it's a PITA and s_l_o_w. Google Drive File Stream is terrible for this; you never know if the files are uploaded or not. OneDrive isn't any better. I haven't tried Dropbox.
Hetzner has some storage box which exposes SMB but doesn't seem to enforce encryption nor IP filtering, so I'm not very comfortable with that.
Also, my internet connection pretends I have 5 Gb down, 0.5 up. The down part is usually as expected (my machine only has a 1 Gb nic), but upload is sometimes very slow. Running a local NAS is much faster. It's ZFS, so backups are trivial to send to encrypted offsite storage.
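For example, offsite replication can be a cron job that pipes incremental snapshots over SSH; a sketch assuming native ZFS encryption and made-up dataset/host names:

    # take a new snapshot, then send only the delta since the previous one,
    # raw (still encrypted) so the remote box never sees plaintext
    zfs snapshot tank/photos@2025-06-01
    zfs send --raw -i tank/photos@2025-05-01 tank/photos@2025-06-01 | \
      ssh backup-host zfs receive -u backup/photos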
It also doesn't need to run 24/7, which helps with power usage (0.22 €/kWh here).
> I'll just add, as you grow older, you come to realize that time is a finite resource, and while money may seem like it is finite, you can always make more money.
Indeed. Waiting around for files to transfer gets old quick. I have better things to do with my time. My NAS needs a whopping five minutes of my time every now and again when a new kernel comes out.
don't spend your time cooking food, pay for others to prepare it for you.
don't spend your time maintaining a house, rent and let someone else do the maintenance.
just lease a car and get a new one automatically every 3 years.
honestly, everyone has their own setpoint for things. and there are degrees of solutions for every point you make.
I think most people would benefit from being just a little bit self-sovereign.
Personally, "majority of people" could use one low power fanless server with 1tb for the few things most people need continuously online.
And separately a server you turn on occasionally with lots of storage, like maybe
https://www.amazon.com/dp/B09TV1XPDD
I'm reminded of jwz's backups info www.jwz.org/doc/backups.html
"RAID is a goddamn waste of your time and money"
My homelab is my hobby. I maintain it for my pleasure and to learn new skills. We have an infra nerds club with a few colleagues and we're having a lot of fun comparing our approaches!
To some, spending time hunched over servers is doing things they love.
I mean, each and every thing you said about maintaining a home lab you can also say about maintaining cloud infrastructure.
There was a time when having hobbies was normal. It seems nowadays some people mistake hobbies for work after hours? Where is that hacker mindset?
Every homelab I have come across is a hobby project and time sink that is more like a backyard garden or classic car.
Agreed, but it doesn't have to take time from your family. I'm on a small team that self-hosts internal services to lower costs/risks. It takes very little time to maintain, and maintenance windows happen on our terms. Our uptime this year is better than "Github Actions", the latency is incredible, and we've had no known security issues.
There are two keys to doing this successfully: (1) don't deploy anything you don't understand (so it won't take you long to fix), and (2) even then, aggressively avoid complexity (so it doesn't break in the first place.)
For example, despite significant network expertise, we stuck to a basic COTS router and a simple IPv4 subnet for our servers. And the services we run are typically self-contained golang binaries that we can deploy with bash onto baremetal. No docker, kvm, ansible, or k8s.
This DIY setup saves us considerably more than it costs. Not for everyone, but with proper scoping, many readers of hacker news could pull this off without losing time with their loved ones.
I ran a home lab for a number of years. This was a fairly extensive setup - 4 rack-mount servers, UPS, Ethernet switch, etc., with LTO backups. It did streaming, email and file storage for the whole family as well as my own experiments.
One morning I woke up to a dead node. The DMZ service node. I found this out because my wife had no internet. It was running the NAT and email too. Some swapping of power supplies later and I found the whole thing was a complete brick. Board gone. It's 07:45 and my wife can't check her schedule and I'm still trying to get 3 kids out of the door.
At that point I realised I'd fucked up by running a home lab. I didn't have the time or redundancy to have anyone rely on me.
I dug the ISP's provided WiFi router out, plugged it in and configured it quickly and got her laptop and phone working on it. Her email was down but she could check calendar etc (on icloud). By the end of the day I'd moved all the family email off to fastmail and fixed everything to talk to the ISP router properly. I spent the next year cleaning up the shit that was on those servers and found out that between us we only had about 300 gig of data worth keeping which was distributed out to individual macbooks and everyone is responsible for backing their own stuff up (time machine makes this easy). Eventually email was moved to icloud as well when domains came along.
I burned 7TB of crap, sold all the kit and never ran a home lab again. Then I realised I didn't have to pay for the energy, the hardware or expend the time running it. There are no total outages and no problems if there's a power failure. The backups are simple, cheap and reliable. I don't even have a NAS now - I just bought everyone some Samsung T7 shield disks.
I have a huge weight off my shoulders and more free time and money. I didn't learn anything I wouldn't have learned at work anyway.
I need to update it and patch it, hoping nothing goes wrong in the process. If something breaks I'm the only one that can repair it, and I really don't want to hear my wife screaming at me at 7am when I wake up.
Eventually I came to your same conclusion, but I still run a hybrid setup that allows me to keep the router (for now), and a NAS for backup (3-2-1) and some local services. I run a dedicated server from Hetzner for "always on" services, so that the hardware, power redundancy and operational toil are offloaded. I gave up long ago on email: any hosting service will be way better than me doing it - I know I can do it, but is it worth my sanity? Nope.
I wrote about why I don't (want to) self-host services for others: https://ergaster.org/posts/2023/08/09-i-dont-want-to-host-se...
Having an uptime SLA for your "hobby" is a huge pain in the ass and absolutely sucks the fun out of it.
For me it was the constant requests for new media or midnight complaints about jellyfin being down.
If you want to learn infra ops just get a job in the field and get paid for it.
But yeah; things with SLAs are probably better off not self-hosted unless you really enjoy midnight fixes.
Works well enough for what I need.
Sounds like a lot, but I was almost paying the same before - 220€ for power at home, 110€ for a dedicated Hetzner server, 95€ for a secondary internet connection (so as not to interfere with the main uplink used for home office by my partner and me).
Not having to deal with the extra heat, noise and used up space at home anymore has been worth it as well.
I'd have colo'ed or used dedicated as it's definitely better than their VMs, but they don't have that in their US datacenters.
I am pretty happy with my current setup; I have significantly less downtime (a few minutes a month) than when I was on Hetzner, but this is mostly due to my need for more RAM at times.
I also used this as an excuse to get 56G mellanox fiber switch and get poe cameras etc in a full homelab manner, so it's been fun, on top of being cheaper. Noise is not a concern, I got a sound-proof server rack that's pretty nice. It takes up space, but i have kids, so my garage is near full at times anyways :)
At some point you'll need to upgrade hardware and software, and you get to do the exercise over again. There will always be lessons learned, and it gets better each time. It's still work.
The encryption question is interesting. I don't have disk encryption turned on, because I want the computer to recover from power failure. If power turns off then on, the server would be offline until I decrypt it.
How does your "Wake on LAN" work with the encryption?
You could use an IP KVM, or you could install Dropbear SSH server into the initramfs.
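On Debian/Ubuntu-family systems the Dropbear route is roughly this (package and file locations shift a bit between releases, so treat it as a sketch):

    apt install dropbear-initramfs
    # key that is allowed to unlock the disk remotely
    echo 'ssh-ed25519 AAAA... you@laptop' >> /etc/dropbear/initramfs/authorized_keys
    update-initramfs -u
    # after a power cycle, from another machine:
    ssh root@<server-ip>    # drops you into the initramfs shell
    cryptroot-unlock        # asks for the LUKS passphrase, then boot continues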
I've heard that it might be difficult to get one in the US though.
I keep putting it off since it is on a UPS and power outages aren't that frequent. Accessibility isn't too bad since it's under the TV stand.
I know we're in the AI hype cycle but I bet you meant LVM there >:-)
There's no reason for ECC to have significantly higher power consumption. It's just an additional memory chip per stick and a tiny bit of additional logic on the CPU side to calculate ECC.
If power consumption is the target, ECC is not a problem. I know firsthand that even old Xeon D servers can hit 25W full-system idle. On the AMD side, a 4850G has 8 cores and can hit sub-25W full-system idle as well.
I can totally see why someone who doesn't need expandability would choose the cheap mini PC.
A decade later, I like NUCs and Pis and the like because they're tiny, low-power, and easy to hide. Then again, I don't have nearly as much time and drive for offhand projects as I get older, so who knows what a younger me would have decided with the contemporary landscape of hardware available to us today.
There are tasks that benefit from speed, but the most important thing is good idle performance. I don't want the noise, heat or electricity costs.
I'm reluctant to put a dedicated GPU into mine, because it would almost double the idle power consumption for something I would rarely use.
https://pve.proxmox.com/wiki/Proxmox_Cluster_File_System_(pm...
- Lightning protection
- 2 2U UPSes with 2 extra 2U battery packs each, temperature and humidity monitoring, and remote management
- 1x vSphere 7u3w Enterprise Plus + vCenter, 512 GiB ECC RAM, 16 TiB of RAID10 SSD, 96 thread EPYC server with 4 10 Gb optical NICs and 4x SAS3 ports
- 4U JBOD external NAS 330 TiB usable shared mostly over Samba with Time Machine support
- 2x Ryzen boxes with 128 GiB of ECC RAM, 100 Gb links to each other, 4 TiB RAID1 SSD, also used for distributed builds (also vSphere)
- Additional non-ECC 96 GiB tiny ITX Ryzen Windows lab machine
- Misc. non-ECC 128 GiB micro ITX Ryzen for additional distributed build capacity, currently Fedora w/ Podman and Docker
- Deciso OPNsense (Business license) router with 10 Gb optical ports, WireGuard, NTP-DHCP-DNS
- PoE 4x RPi 5 + SSD Ceph cluster
- Ubiquiti U7 Pro XGS APs
- Eufy security cameras with Home base
- PKI (TLS CA), TOTP 2FA ssh, YK gpg/ssh agent, RustDesk
- All boxes except lab, Fedora, and RPis are lights-out manageable and so don't need a KVM
To Do: NFS 4.2, LDAP, Krb5, TACACS+, SAML/OpenID (authelia), SNMP, Nagios or Grafana/Prometheus, K8s
Cost: $150/month in electricity
Those little thin clients aren't gonna be fast doing "big" things, but serving up a few DNS packets or whatever to your local network is easy work and pretty useful.
(clarification: that's euro cent, so 0.0635€ etc)
It has an i5-6500, 32 GB RAM (16 + 2x8 DIMMs), 2 SATA SSDs and a 2x10Gb Connect-X3. It runs 24/7 hosting OpnSense and HomeAssistant on top of KVM (Arch Linux Hardened – didn't do anything specific to lower the power draw). Sometimes other stuff, but not right now.
I haven't measured it with this specific nic, but before it had a 4x1Gb i350. With all ports up, all VMs running but not doing much, some power meter I got off Amazon said it pulled a little over 14W. The peak was around 40 when booting up.
Electricity costs 0.22 €/kWh here. The machine itself cost me 0 (they were going to throw it out at work), 35 for the nic and maybe 50 for the RAM. It would take multiple years to break even by buying one of these small machines. My plan is to wait out until they start having 10 Gb nics and this machine won't be able to keep up anymore.
https://www.delltechnologies.com/asset/pl-pl/products/thin-c...
Have you tried it with 32gb? If so, was this 2x 16gb or 1x 32gb?
But not all those minis are the same. G4 (intel 8th gen) and G5 (intel 9th gen) HPs are horrendous. The fan makes an extremely aggravating noise, and I haven't found a way to fix it. Bonus points for both the fan and heatsink having custom mounts, so even if you wanted to have an ugly but quiet machine by slapping a standard cooler, you couldn't.
G6 versions (intel 10th gen) seem to have fixed this, and they're mostly inaudible on a desk, unless you're compiling something for half an hour.
With an N100, you get a better, more upgradable system for around the same price and the same power usage. On top of that, you also get an x64 system that isn't limited by ARM quirks. I made the switch to N100s over a year ago and have had no issues with them so far.
No idea what happened, but Raspberry Pis have been super expensive for the last couple of years, which is why I decided to just go with used Intel NUCs instead. They cost around 80-150 EUR and they use more electricity, but they are quite good bang for the buck, and some variants also have 3x HDMI, Gbit/s Ethernet, or M.2 slots you can use to have a SATA RAID in them.
At home, I'm using a 5900H-based mini PC I bought a few years ago and a Synology NAS.