I know of the FriendlyElec CM3588; are there others?
If you're running on consumer NVMe drives, then mirrors are probably a better idea than RAID-Z, though. Write amplification can easily shred consumer drives.
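Whether amplification actually matters depends on the drive's rated endurance. A hedged back-of-envelope sketch (the TBW rating, daily write volume, and amplification factors below are assumptions for illustration, not measurements):

```python
def years_until_tbw_exhausted(tbw_terabytes, daily_logical_gb, write_amp):
    """Back-of-envelope drive endurance: rated TBW divided by the
    physical write rate (logical writes times amplification factor)."""
    daily_physical_tb = daily_logical_gb * write_amp / 1000
    return tbw_terabytes / daily_physical_tb / 365

# Hypothetical consumer 2 TB drive rated for 1200 TBW:
print(round(years_until_tbw_exhausted(1200, 50, 1.0), 1))   # ~65.8 years
print(round(years_until_tbw_exhausted(1200, 50, 10.0), 1))  # ~6.6 years
```

The absolute numbers are less interesting than the ratio: a 10x amplification factor eats the endurance budget 10x faster, which is why sync-write-heavy workloads on consumer drives deserve caution.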
https://www.phoronix.com/news/Intel-IGEN6-IBECC-Driver
Not every new CPU has it: for example, the Intel N95, N97, N100, N200, i3-N300, and i3-N305 all have it, but the N150 doesn't!
It's kind of disappointing that, of the low-power NAS devices reviewed here, the only one with IBECC support had a limited BIOS that was most likely missing this option. The ODROID H4 series, CWWK NAS products, AOOSTAR, and various N100 ITX motherboards all support it.
They're on 24/7 and run monthly scrubs, as well as monthly checksum verification of my backup images, and I've not noticed any issues so far.
I had some correctable errors which got fixed after changing a SATA cable a few times, and some from a disk that developed a small run of bad sectors after 7 years of 24/7 operation.
That said, you've got ECC, so you should be able to monitor corrected memory errors.
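On Linux, corrected memory errors are exposed through the EDAC subsystem. A small sketch that reads the standard per-memory-controller sysfs counters (the default path is the kernel's; passing it as a parameter just makes the function testable):

```python
from pathlib import Path

def corrected_error_counts(edac_root="/sys/devices/system/edac/mc"):
    """Read corrected (ce_count) and uncorrected (ue_count) error
    counters exposed by the Linux EDAC subsystem, one pair per
    memory controller."""
    counts = {}
    for mc in sorted(Path(edac_root).glob("mc*")):
        ce = mc / "ce_count"
        ue = mc / "ue_count"
        if ce.is_file() and ue.is_file():
            counts[mc.name] = (int(ce.read_text()), int(ue.read_text()))
    return counts

# Typically {'mc0': (0, 0)} on a healthy machine; a steadily growing
# ce_count means ECC is actively correcting bit flips and that DIMM
# deserves attention before the errors become uncorrectable.
```

Tools like `rasdaemon` or `edac-util` wrap the same counters with logging and decoding.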
Matt Ahrens himself (one of the creators of ZFS) has said there's nothing special about ZFS in this regard:
There's nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem. If you use UFS, EXT, NTFS, btrfs, etc without ECC RAM, you are just as much at risk as if you used ZFS without ECC RAM. Actually, ZFS can mitigate this risk to some degree if you enable the unsupported ZFS_DEBUG_MODIFY flag (zfs_flags=0x10). This will checksum the data while at rest in memory, and verify it before writing to disk, thus reducing the window of vulnerability from a memory error.
I would simply say: if you love your data, use ECC RAM. Additionally, use a filesystem that checksums your data, such as ZFS.
https://arstechnica.com/civis/viewtopic.php?f=2&t=1235679&p=...
Sun (and now Oracle) officially recommended ECC because ZFS was intended to be an enterprise product running on 24/7 servers, where it makes sense that anything cached in RAM for long periods is protected by ECC.
In that sense it was a "must-have", as business-critical functions require that guarantee.
Now that you can use ZFS on a number of operating systems and many different architectures, even a Raspberry Pi, the business-critical-only use case is not as prevalent.
ZFS doesn't intrinsically require ECC, but it does trust that memory functions correctly, which you have the best chance of achieving by using ECC.
Really hoping we see 25/40GBase-T start to show up, so lower market segments like this can do 10Gbit. Hopefully we see some embedded Ryzens (or other more PCIe-willing contenders) in this space at a value-oriented price. But I'm not holding my breath.
Until there is something in this class with PCIe 4.0, I think we're close to maxing out the IO of these devices.
I only came across the existence of this CPU a few months ago; it is nearly in the same price class as an N100, but has a full Alder Lake P-core in addition. It's a shame it seems to only be available in six-port routers; then again, that is probably a pretty optimal application for it.
I want smaller, cooler, quieter, but isn't the key attribute of SSDs their speed? A RAID array of SSDs can surely achieve vastly better than 2.5Gbps.
TXT/DRTM can enable AEM (Anti Evil Maid) with Qubes, SystemGuard with Windows IoT and hopefully future support from other operating systems. It would be a valuable feature addition to Proxmox, FreeNAS and OPNsense.
Some (many?) N150 devices from Topton (China) ship without Bootguard fused, which _may_ enable coreboot to be ported to those platforms. Hopefully ODROID (Korea) will ship N150 devices. Then we could have fanless N150 devices with coreboot and DRTM for less-insecure [2] routers and storage.
[1] Gracemont (E-core): https://chipsandcheese.com/p/gracemont-revenge-of-the-atom-c... | https://youtu.be/agUwkj1qTCs (Intel Austin architect, 2021)
[2] "Xfinity using WiFi signals in your house to detect motion", 400 comments, https://news.ycombinator.com/item?id=44426726#44427986
This seems useful. But it seems quite different from his previous (80TB) NAS.
What is the idle power draw of an SSD anyway? I guess they usually have a volatile ram cache of some sort built in (is that right?) so it must not be zero…
- Warm storage between mobile/tablet and cold NAS
- Sidecar server for functions disabled on other OSes
- Personal context cache for LLMs and agents
Small/portable low-power SSD-based NASs have been commercialized since 2016 or so. Some people call them "NASbooks", although I don't think that term ever gained critical MAS (little joke there).
Examples: https://www.qnap.com/en/product/tbs-464, https://www.qnap.com/en/product/tbs-h574tx, https://www.asustor.com/en/product?p_id=80
Not really seeing that in these minis. Either the devices under test haven't been optimized for low power, or their Linux installs have non-optimal configs for low power. My NUC 12 draws less than 4W, measured at the wall, when operating without an attached display and with Wi-Fi but no wired network link. All three of the boxes in the review use at least twice as much power at idle.
One curiosity for @geerlingguy, does the Beelink work over USB-C PD? I doubt it, but would like to know for sure.
I just want a backup (with history) of the data-SSD. The backup can be a single drive + perhaps remote storage
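A "backup with history" on a single drive can be as simple as hard-linked snapshots, in the style of `rsync --link-dest`. A minimal illustrative sketch (real tools like rsync, restic, or Borg handle the many edge cases this ignores):

```python
import os
import shutil
import time
from pathlib import Path

def snapshot_backup(source, backup_root):
    """Each run creates a new timestamped snapshot directory,
    hard-linking files unchanged since the previous snapshot so
    keeping history costs almost no extra space."""
    source, backup_root = Path(source), Path(backup_root)
    backup_root.mkdir(parents=True, exist_ok=True)
    snaps = sorted(backup_root.iterdir())
    prev = snaps[-1] if snaps else None
    dest = backup_root / time.strftime("%Y-%m-%d-%H%M%S")
    for src in source.rglob("*"):
        out = dest / src.relative_to(source)
        if src.is_dir():
            out.mkdir(parents=True, exist_ok=True)
            continue
        out.parent.mkdir(parents=True, exist_ok=True)
        old = prev / src.relative_to(source) if prev else None
        if old and old.is_file() and old.stat().st_size == src.stat().st_size \
                and old.stat().st_mtime == src.stat().st_mtime:
            os.link(old, out)       # unchanged file: hard link, no extra space
        else:
            shutil.copy2(src, out)  # new or changed file: real copy
    return dest
```

Every snapshot directory is a complete, browsable copy, so restoring any version is just a file copy; remote storage can then sync the snapshot tree.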
You can install a third-party OS on it.
Why buy a tiny, m.2 only mini-NAS if your need is better met by a vanilla 2-bay NAS?
My first experience with these cheap mini PCs was with a Beelink, and while it was very positive, it still makes me question the longevity of the hardware. For a NAS, that's important to me.
The entire cabinet uses under 1 kWh/day, costing me under $40/year here, compared to my previous Synology and home-made NAS which used 300-500 W, costing $300+/year. Sure, I paid about $1500 in total when I bought the QNAP and the NVMe drives, but the electricity savings alone made the expense worth it, let alone the performance, features, etc.
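The arithmetic here is easy to sanity-check; the electricity rate is an assumption (the comment only gives the totals), but roughly $0.11/kWh makes both figures line up:

```python
def yearly_cost(watts, price_per_kwh):
    """Continuous draw -> annual electricity cost."""
    return watts / 1000 * 24 * 365 * price_per_kwh

# Assumed rate of $0.11/kWh:
print(round(yearly_cost(40, 0.11)))    # ~$39/yr for a ~1 kWh/day cabinet
print(round(yearly_cost(400, 0.11)))   # ~$385/yr for a 400 W NAS
```

At European electricity prices (often $0.30/kWh or more) the savings from dropping a few hundred watts of idle draw are correspondingly larger.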
SSD = Solid State Drive
So you're moving from solid state to solid state?
Just something to be aware of.
You give up so much by using an all-in-one mini device...
No upgrades, no ECC, harder cooling, less I/O.
I have had a Proxmox server with a used Fujitsu D3417 and 64 GB ECC for roughly 5 years now, paid 350 bucks for the whole thing, and upgraded the storage once from 1 TB to 2 TB. It draws 12-14 W in normal day use and has 10 Docker containers and 1 Windows VM running.
So I would prefer a mATX board with ECC, IPMI, 4x NVMe, and 2.5GbE over these toy boxes...
However, Jeff's content is awesome as always.
No IPMI and not very many NVMe slots. So I think you're right that a good mATX board could be better.
https://www.aliexpress.com/item/1005006369887180.html
Not totally upgradable, but at least pretty low cost and modern, with an optional SATA + NVMe combination for Proxmox. Shovel in an enterprise SATA drive and a consumer 8TB WD SN850X and this should work pretty well. Even Optane is supported.
IPMI could be replaced with a NanoKVM or JetKVM...
Running it with encrypted ZFS volumes, and even with a 5-bay 3.5-inch HDD dock attached via USB.
My use case is a backup server for my Macs and cold storage for movies.
6x 2TB drives will give me a 9TB RAID-5 for $809 ($100 each for the drives, $209 for the NAS).
Very quiet so I can have it in my living room plugged into my TV. < 10W power.
I have no room for a big noisy server.
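The capacity figure checks out once you account for parity and the decimal-vs-binary unit gap (drives are sold in decimal terabytes, while the OS usually reports binary tebibytes):

```python
def raid5_usable_bytes(n_drives, drive_tb):
    """RAID-5 keeps one drive's worth of parity; drives are sold
    in decimal terabytes (10^12 bytes)."""
    return (n_drives - 1) * drive_tb * 1000**4

usable = raid5_usable_bytes(6, 2)
print(usable / 1000**4)             # 10.0 decimal TB raw
print(round(usable / 1024**4, 1))   # ~9.1 TiB, as the OS would report it
```

So "6x 2TB gives 9TB" is simply 10 decimal TB of usable space displayed in binary units, before filesystem overhead.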
Helps a ton with response times on any NAS that's primarily spinning rust, especially if dealing with a decent amount of small files.
(I assume the same applies to M.2, but have not confirmed.)
If this isn’t running 24/7, I’m not sure I would trust it with my most precious data.
I was thinking of replacing it with an Asustor FLASHSTOR 12, a much more compact form factor that fits up to 12 NVMe drives. I will miss TrueNAS though, but it would be so much smaller.
*Well, they allowed it on all CPUs, but after Zen 3 they saw how much money Intel was making and joined in. Now you must get a "PRO" CPU to get ECC support, even on mobile (but good luck finding ECC SO-DIMMs).
MarkSweep•3h ago
FLASHSTOR 6 Gen2 (FS6806X) $1000 - https://www.asustor.com/en/product?p_id=90
LOCKERSTOR 4 Gen3 (AS6804T) $1300 - https://www.asustor.com/en/product?p_id=86
dontlaugh•3h ago
At some point though, SSDs will beat hard drives on total price (including electricity). I’d like a small and efficient ECC option for then.
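The crossover point is straightforward to model. A hedged sketch of total cost of ownership; the prices, idle wattages, and electricity rate below are assumptions for illustration:

```python
def total_cost(price, watts, years, price_per_kwh=0.30):
    """Purchase price plus electricity for continuous operation
    over the device's service life."""
    return price + watts / 1000 * 24 * 365 * years * price_per_kwh

# Hypothetical 4 TB drives over 5 years at an assumed $0.30/kWh:
hdd = total_cost(price=100, watts=6, years=5)   # spinning, ~6 W average
ssd = total_cost(price=250, watts=1, years=5)   # NVMe, ~1 W average
print(round(hdd), round(ssd))
```

Under these assumptions the HDD still wins on total cost, but the gap is mostly the purchase price; as SSD $/TB keeps falling, the electricity term tilts the comparison toward SSDs, especially where power is expensive.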
qwertox•3h ago
https://www.minisforum.com/pages/n5_pro
https://store.minisforum.com/en-de/products/minisforum-n5-n5...
96GB DDR5 SO-DIMM costs around 200€ to 280€ in Germany. https://geizhals.de/?cat=ramddr3&xf=15903_DDR5~15903_SO-DIMM...
I wonder if that 128GB kit would work, as the CPU supports up to 256GB
https://www.amd.com/en/products/processors/laptop/ryzen-pro/...
I can't force the page to show USD prices.
wyager•2h ago
Either way, on my most recent NAS build, I didn't bother with a server-grade motherboard, figuring that the standard consumer DDR5 ECC was probably good enough.
layer8•2h ago