Good start though!
Edit: and the problem with micro clusters like this is always the IPv4 costs.
It lasted about 3 years, until the colocation company went bankrupt and was bought by another company, which returned the hardware. I'm surprised a technical failure didn't kill it.
Even if they can sustain that, how do the heat and energy loads hold up in that cheap building?
Web hosts can start at $10 (or free + internet) and GPU hosts can start at $4,000 USD.
At peak, a “cluster node” could be $10,000 and a GPU node could be $80,000.
The question you have to ask yourself is: what are your requirements?
What I really want is an IP KVM that connects to a Mac mini using a single Thunderbolt port for everything: power, video, keyboard, and mouse.
The higher-specced Mac minis (more memory, or the Pro chip) are worse bang for the buck.
EDIT: According to some site, the M4 is 2x faster and 3x more expensive, while you can later add memory to a Ryzen system but not to an M4.
I came to a similar conclusion: TinyMiniMicro 1L PCs are in many ways a better option than Raspberry Pis. Or any mini PC with an Intel N-series CPU.
I rooted around on the block for a bit and found several phishing sites; it was a mess.
The problem is that the more serious colocation providers don't really want you if you just want 1U. And if they allow it, it's definitely not at a good price.
The SPARC was slower and had less memory than the first Raspberry Pi ;) yes, that was a while ago. The Mac mini was obviously later.
I think 99% of problems people have are related to one of those three things (same with most embedded devices, but people tend not to throw a cheap used phone charger and the SD card that came with an old cheap drone on more specialized devices).
It's quite common; a stepping stone between using rented hardware and having your own data center.
At work we have ~10 of these passively cooled TopTon N100 boxes with their 5x Intel i226-V 2.5GbE interfaces lying around for emergency router setups. They are great for a lot of things.
But be careful: starting with the N150 you will need active cooling.
I have an N305 with the CPU thermally bonded to its small aluminum case, with a quiet 80 mm Noctua fan screwed into the fins of the case. The manufacturer on AliExpress said the fan is optional, but it can get to ~85°F in the room where the computer is, so I want to be careful. At idle, the CPU reports 5-10°F above the room's temperature.
It has 10 TB spread across 3 SSDs and 2 x 10 TB spinning drives attached. It's a Time Machine target for a handful of Macs and a Borg Backup target for several machines, including some across the internet using Tailscale. It's also running Home Assistant with AppDaemon (with dozens of little apps), Frigate (object detection for 3 Ethernet-connected cameras using a Google Coral TPU over USB), Paperless-ngx (15 GB of PDFs), LibrePhotos (1.2 TB of photos), Syncthing, Tiny Tiny RSS, a UniFi Controller, distinct PostgreSQL instances for various of those, and more. I count 21 Docker containers running right now, and not everything is containerized.
The spinning drives are powered down with a smart plug for all but 1-2 hours at night for backups. With those off, the thing sips power... 10-15 W most of the time with occasional spikes up to ~30 W when LibrePhotos is doing facial recognition or Paperless-ngx is updating its ML models. It never feels slow. I've been running one or more servers at home for 30+ years, and this single machine handles everything so much better than any combination of machines I've had.
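Those wattage figures translate into a surprisingly small power bill. A quick back-of-the-envelope sketch; the $0.15/kWh rate is my assumption, not from the comment:

```python
# Rough annual electricity cost for a low-power home server.
RATE_USD_PER_KWH = 0.15  # assumed rate; varies widely by region

def annual_cost(avg_watts: float, rate: float = RATE_USD_PER_KWH) -> float:
    """Annual cost in USD for a device drawing avg_watts continuously."""
    kwh_per_year = avg_watts * 24 * 365 / 1000
    return kwh_per_year * rate

print(f"12 W baseline: ${annual_cost(12):.2f}/yr")  # → $15.77/yr
print(f"30 W peaks:    ${annual_cost(30):.2f}/yr")  # → $39.42/yr
```

Even if the box sat at its ~30 W peak all year, it would cost less than most VPS plans.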
64GB SODIMMs are now available and there are multiple reports of them working fine with the N305 [0]. It is highly likely that they will work fine with the N100 as well.
0: https://www.reddit.com/r/homelab/comments/1m8fgec/intel_n305...
DDR2 (667 MHz) > DDR3 (1.6 GHz) > DDR4 (3.2 GHz), etc., in longevity and CAS latency.
<100GB SSDs from 2010-2015 outlast all later >100GB SSDs.
SD cards can last 12 years before failing. I know because I ran two 32 GB SanDisk cards into the ground; one is still up, sort of...
Hah! I wish! Though then I might be out of a job
We used them for a while and there are some photos here https://www.screenly.io/blog/2023/05/25/updated-qc-rig/
It was a small Apple machine running Debian. ISTR it was an Apple TV (1st gen), but it might have been a Mac Mini.
Pis can be powered from the header pins, so you could skip the USB adapters and route power directly from the relays to the power pins.
I'd also be tempted to add a way to access the serial consoles and the power-button-equivalent pins of the Pis for a LOM equivalent. It might be doable with a Pico or two.
Nowadays with Pi 5s you could also, of course, hook an M.2 board up to the PCIe lane and skip the M.2 enclosures.
I'd personally prefer a recessed push-button power switch too; the switch you use would make me nervous that something would drop on it and turn the system off.
The nice thing about relay power is that you don't need a power button. In my case I actually had a little Arduino running a USB stack that could toggle GPIOs to power-cycle the Pis if they wedged: https://github.com/mmastrac/pi-power-vusb (I forgot that I even added default power states and power-on sequencing there)
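The power-on sequencing logic can be prototyped without any hardware. A minimal sketch, assuming staggered starts to avoid the inrush current of every board powering up at once; `toggle_power` is a hypothetical stand-in for a real relay/GPIO driver, not from pi-power-vusb:

```python
from typing import Callable, List, Tuple

def power_on_sequence(num_nodes: int, stagger_s: float = 2.0,
                      toggle_power: Callable[[int, bool], None] = lambda n, on: None
                      ) -> List[Tuple[int, float]]:
    """Compute a staggered power-on schedule and fire toggle_power per node.

    Staggering spreads out inrush current so the PSU isn't hit all at once.
    toggle_power(node, state) is a placeholder for the real relay driver.
    """
    schedule = [(node, node * stagger_s) for node in range(num_nodes)]
    for node, _delay in schedule:
        # On real hardware you would time.sleep(stagger_s) between these calls.
        toggle_power(node, True)
    return schedule

# Example: four Pis, 1.5 s apart.
events = []
print(power_on_sequence(4, stagger_s=1.5,
                        toggle_power=lambda n, on: events.append(n)))
# → [(0, 0.0), (1, 1.5), (2, 3.0), (3, 4.5)]
```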
Serial is definitely a nice-to-have. One USB-serial adapter per Pi, with one overall controller Pi that aggregates it all.
I would add that TFTP boot for each Pi is also really convenient. This is pretty easy to set up. Dedicate one Pi to manage the cluster power, serial and TFTP and you have a pretty robust setup.
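For Pi 3/4-era boards, that netboot service can be run from the controller Pi with dnsmasq in proxy-DHCP mode; a minimal sketch, assuming the existing router keeps handing out IP addresses (the subnet and paths are illustrative):

```
# /etc/dnsmasq.conf on the controller Pi (subnet and paths illustrative)
port=0                             # disable DNS; serve DHCP/TFTP only
dhcp-range=192.168.1.255,proxy     # proxy mode: the router still assigns IPs
log-dhcp
enable-tftp
tftp-root=/srv/tftp                # one subdirectory per Pi serial number
pxe-service=0,"Raspberry Pi Boot"
```

Proxy mode means you don't have to replace your existing DHCP server, which keeps the cluster setup self-contained.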
And self-host on home fiber.
Saves space and cools silently.
Mixing old Pi 2s and 4s for different use cases.
The Raspberry Pi 5 and RK3588 run too hot.
Not in the picture: a Mean Well 50 W passive PSU.
The only issue is that one of the PoE HAT fans is catching on something (though nothing I can see), so on occasion it needs persuasion to be quiet.
One project I keep telling myself I'll eventually do is to make a cluster board with 32 Octavo SoMs (each with 2 Ethernet ports, CPU, GPU, RAM, and some flash) and a network switch (or two). And 32 activity LEDs on the side, so a set of 16 boards will look like a Connection Machine module.
For an $800 setup and $30-50/mo in colocation fees, I think this is much worse value overall than a lot of hosting providers; e.g., Hetzner provides servers that start at ~$36/mo with around 64GB RAM and 1TB storage.
Even if someone absolutely wanted to configure and provide their own hardware to colocate, you could probably put together a much nicer server for less by scavenging parts off eBay.
Fun yes, but practical no.
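The break-even math is easy to check. A quick sketch using the figures above (I take $40/mo as the midpoint of the quoted $30-50 colo range; actual Hetzner plans vary):

```python
# Cumulative cost: $800 colo hardware + $40/mo fees vs. a ~$36/mo rented server.
def colo_cost(months: int, hw: float = 800, fee: float = 40) -> float:
    return hw + fee * months

def rented_cost(months: int, fee: float = 36) -> float:
    return fee * months

for m in (12, 36):
    print(m, colo_cost(m), rented_cost(m))
# Because the colo's monthly fee alone already matches or exceeds the rented
# server's, the $800 up-front hardware cost is never recovered.
```

Colocation only starts to make financial sense when the hardware you own is substantially cheaper to run per month than the equivalent rental.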
Could work well at home, however.