If a file has a hundred thousand blocks, you could tack on a thousand blocks of error correction for the cost of making it just 1% larger. If the file is a seldom/never-written archive, it's essentially free beyond the space it takes up.
The kind of massive data archives that you want to minimize storage costs of tend to be read-mostly affairs.
It won't save you from a disk failure, but I see bad blocks much more often than whole-disk failures these days... and RAID 5/6 have rather high costs while still being quite vulnerable to the possibility of an aligned fault on multiple disks.
Of course you could use par or similar tools, but that lacks nice transparent FS integration, and in particular doesn't benefit from checksums already implemented in (some) filesystems (since you only need half the error-correction data to recover from known-position errors, and/or can use erasure-only codes).
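For the standalone version of that idea, par2 is the usual tool; a rough sketch (filenames and the 1% redundancy figure are just illustrative):

    # add ~1% recovery data to a large archive
    par2 create -r1 big-archive.par2 big-archive.tar
    # later: check the archive and repair bad blocks if any turned up
    par2 verify big-archive.par2
    par2 repair big-archive.par2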
My point is, it doesn't much matter what your FS does, so long as you have 3 or more of them.
Any sane person uses a FS/system with dedup in it, so you can have 7+4+12 snapshots of 5TB of data taking only 7TB of space, etc.
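If that FS is ZFS, the moving parts are just snapshots plus (optionally) dedup; a rough sketch with made-up pool/dataset names:

    # cheap point-in-time copies; only changed blocks consume space
    zfs snapshot tank/data@2024-06-01
    zfs list -t snapshot tank/data
    # optional block-level dedup on the pool (RAM hungry)
    zfs set dedup=on tank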
You want snapshots. For example, Manjaro Linux (Arch-based) uses BTRFS, which is capable of snapshots, so before every update it makes a snapshot; if the update fails, you can just select the working state in GRUB and go back...
AlmaLinux uses BTRFS too, but I'm not sure if they have this functionality as well.
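On the BTRFS side this is just subvolume snapshots under the hood (Manjaro typically wires it up with Timeshift/grub-btrfs); roughly, with made-up paths:

    # read-only snapshot of the root subvolume before an update
    btrfs subvolume snapshot -r / /.snapshots/pre-update
    # list what you have; roll back by booting a snapshot entry from GRUB
    btrfs subvolume list /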
ZFS, bcache, BTRFS, checksums
dm-integrity inside the Linux kernel can provide checksums for essentially any FS: just " lvcreate --type raidN --raidintegrity y " and you have checksums + RAID in Linux.
filesystem scrubbing.
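For reference, a rough sketch of that LVM setup (VG/LV names and sizes are placeholders):

    # RAID1 LV with dm-integrity checksums on each mirror leg
    lvcreate --type raid1 -m 1 --raidintegrity y -L 500G -n data myvg
    mkfs.ext4 /dev/myvg/data
    # "scrub": re-read the mirror and check for mismatches
    lvchange --syncaction check myvg/data
    lvs -o+raid_sync_action,raid_mismatch_count myvg/data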
or every Linux distro and BSD has ZFS available,
or every Linux distro has LVM RAID available,
or BTRFS has RAID 1, 0, 10 (see the example below),
or Windows has its own software RAID, just open the Storage Spaces / Disk Management console,
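For instance, the two-disk BTRFS RAID1 mentioned above is just (device names are placeholders):

    # mirror both data and metadata across two disks
    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
    mount /dev/sdb /mnt/pool
    # periodic integrity check
    btrfs scrub start /mnt/pool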
so the whole unraid / nonraid thing is just a nonsensical waste of effort for everyone.
Why would I invest time and effort into a technology maintained by a small team, when I can have technology supervised and maintained by the Linux kernel devs? Makes no sense.
And the things I mentioned here have been around longer than unraid/nonraid has existed, so it was nonsensical from the start.
miffe•6h ago
eddythompson80•5h ago
- It lets you throw in JBODs (of ANY size) and create a "RAID" over them.
- The biggest drive(s) must be the parity drive(s).
- N parity = surviving N drive failures.
- You can expand your storage pool 1 drive at a time. You need to recalculate parity for the full array.
The actual data is spread across drives. If a drive fails, you rebuild it from the parity. This is another implementation (using MergerFS + SnapRAID) https://perfectmediaserver.com/02-tech-stack/snapraid/
It's a very simple model to think of compared to something like ZFS. You can add/remove capacity AND protection as you go.
Its performance is significantly lower than ZFS's, of course.
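For the SnapRAID half of that linked setup, the whole model is a config file listing data disks and parity files plus a periodic sync; a rough sketch with placeholder paths and disk names:

    # /etc/snapraid.conf
    parity /mnt/parity1/snapraid.parity
    content /var/snapraid.content
    content /mnt/disk1/.snapraid.content
    data d1 /mnt/disk1/
    data d2 /mnt/disk2/
    data d3 /mnt/disk3/

    # then, on a schedule:
    snapraid sync        # recompute parity after writes
    snapraid scrub       # verify data against parity in the background
    snapraid fix -d d2   # rebuild a failed data disk onto its replacement

MergerFS then pools /mnt/disk* into one mount point on top of this.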
somat•5h ago
phoronixrly•5h ago
hammyhavoc•5h ago
The cache pool is recommended to be mirrored for this reason (not many people see why I find this to be amusing).
phoronixrly•5h ago
> Increased perceived write speed: You will want a drive that is as fast as possible. For the fastest possible speed, you'll want an SSD
Great, now I have an SSD that is treated as a consumable and will die and need to be replaced. Oh, and btw, you are going to need two of them if you don't want to accidentally lose your data.
The alternative? Have the cache on a pair of spinning-rust drives, which will again be overloaded, are expected to fail earlier and need replacing, and have the added benefit of being slow... But at least you won't have to go through a full rebuild after a cache drive failure.
Man, I am not sold on the cost savings of this approach at all... Let alone the complexity and moving parts that can fail...
dawnerd•4h ago
hammyhavoc•5h ago
Yes, Unraid can crash and burn in quite a lot of different ways. Ask me how I know! It's why I'm all-in on ZFS now.
eddythompson80•4h ago
That's just your drive-failure tolerance. It's the same risk/capacity trade-off as RAIDZ1, but with less performance and more flexibility in expanding. Which is exactly what I said.
If 1 drive failure isn't acceptable for you, you wouldn't use RAIDZ1 and wouldn't use 1 parity drive.
You can use 2 parity drives for RAIDZ2-like protection.
You can use 3 drives for RAIDZ3-like protection.
You can use 4 parity drives, or 10. Add and remove as much parity/capacity as you want; you can't do that easily with RAID/RAIDZ.
You manage your own risk/reward ratio.
phoronixrly•4h ago
As hammyhavoc below noted, you can work around this by having a cache and 'by deferring the inevitable parity calculation until a later time (3:40 am server time, by default)'.
Which seems like a hell of a bodge -- both risky and expensive. Now the unevenly loaded drive is the cache one, and it's also not parity-protected. So you need mirroring for it if you don't want to lose your data, and the cache drives are still expected to fail before a drive in an evenly load-balanced array, so you're going to have to keep buying new ones?
Oh, and btw, you are still at risk of bit flips and garbage data because the cache is not checksum-protected.
eddythompson80•4h ago
On unraid/snapraid you need to spin up 2 drives (one of them is always the parity).
On ZFS, you are always spinning up multiple drives too. Sure, the "parity" isn't always on the same drives, or at least it's up to ZFS to figure that out.
Nonetheless, this is all not really likely to have a significant impact. Spinning disks' failure rates don't exactly correlate with their utilization[1][2]. Between the SSD cache, ZFS scrubs, and general usage, I don't think the parity drives are necessarily more at risk. This is anecdotal, but when I ran an unRAID box for a few years myself, I only had 1 failure and it was a non-parity drive.
[1] Google study from 2007 on hard drive failure rates: https://static.googleusercontent.com/media/research.google.c...
[2] "Utilization" in the paper is defined as:
Dylan16807•4h ago
Good, it can be the canary.
> thus you are going to need to recalculate parity for the array more often, which is a risky and taxing operation for all drives in the array
This is not worth worrying about.
First off, if the risk is linear, then the increased chance of a parity-drive failure is offset by a decreased chance of failure on the other drives, and I don't think you'll have more rebuilds.
And even if you do get more rebuilds, it's significantly less than one per year, and one extra full-drive read per year is a negligible amount of load. If you're worried about it all hitting at once then A) you should be scrubbing more often and B) throttle the rebuild.
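On stock Linux md, for comparison, throttling a resync is just a sysctl (Unraid's md driver exposes similar tunables in Disk Settings, IIRC):

    # cap resync/rebuild speed at ~50 MB/s (value is in KB/s)
    sysctl -w dev.raid.speed_limit_max=50000
    sysctl -w dev.raid.speed_limit_min=1000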
nodja•4h ago
dawnerd•4h ago
wongarsu•4h ago
I'd still recommend that anyone have two parity drives (which Unraid does support).
riddley•2h ago
hammyhavoc•5h ago
But if the file system is corrupt then you're hosed and end up with a `lost+found`. It sounds great until it fails, and then you realize why ZFS with replication makes sense. Unraid doesn't do automatic repairs from replicated ZFS datasets yet either even if you use individual ZFS disks within your Unraid array.
Whatarethese•4h ago
xcrunner529•3h ago
eddythompson80•2h ago
bane•2h ago
I started mine with a spare NUC and some portable USB drives, and it's grown into a beast with over 100TB spread across a high-performance SSD-backed ZFS pool and an unRAID array, 24 cores, running about 20 containers and a few VMs without breaking a sweat and, so far (knock on wood), zero data loss.
All at a couple hundred dollars every so often over the years.
One performance trick it supports is letting you overlay fast SSD storage over the array, which is periodically moved onto the slower underlying disks. It's transparent, so when you write to the array you can easily get several hundred MB/sec, and the data automatically gets moved onto warm storage later. I have two fast SSDs RAIDed there and easily saturate the network link when writing.
The server basically maintains itself; I only go in every so often and bump the Docker containers at this point. But I also know that I can add another disk to it in about 10 minutes and a couple hundred bucks.
benjiro•10m ago
The ability to sleep all / individual HDDs:
* only keep awake the drives that you actually read data from
* only keep awake the drive that you write data to + the n parity drives
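Unraid handles the spin-down itself, but for reference it's the same thing you'd do by hand on plain Linux (device name is a placeholder):

    # spin a drive down after 30 minutes idle (-S 241 = 30 min)
    hdparm -S 241 /dev/sdX
    # or force standby immediately
    hdparm -y /dev/sdX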
For home users, that is a TON of energy saving. And no, your "poor" HDDs are not going to suffer from spinning up a few times per day.
You can spin an HDD up/down 10x per day for 100 years before you come even close to the manufacturers' (lowest) HDD limits. Let alone if you have 4+ drives and a bit of data spreading, or combine it with Unraid's NVMe/SSD caching layer.
So unlike mdraid or ZFS, where it's an all-or-nothing situation, unraid/snapraid gives you a ton of energy savings.
And I understand the US folks here do not care when they pay maybe 6 to 12 cents/kWh, but the rest of the world has electricity prices in the 30 to 50 cents/kWh range, and it stacks up very fast when you are using < 1 watt vs 5-7 watts per HDD, 24/7...
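Back-of-the-envelope, taking ~6 W and €0.40/kWh from the ranges above:

    6 W x 24 h x 365 days ≈ 53 kWh/year
    53 kWh x €0.40 ≈ €21/year per always-spinning drive (vs ~€3.50/year at <1 W)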
wongarsu•5h ago
unRaid takes multiple partitions, dedicates one or two of them to parity, and hands the other partitions through. You can handle those normally, putting different file systems on different partitions in the same array and treating them as completely separate file systems that happen to be protected by the same parity drives.
This enables you to easily mix drives of different sizes (as long as the parity drives are at least as large as the largest data partition), add, remove or upgrade drives with relative ease, and means that every read operation only goes to one drive, and writes to that drive plus the parity drives. Depending on how you organize your files you can have drives that are basically never on, while in an md array every drive is used for basically every read or write.
The disadvantages are that you lose out on the performance advantages of a RAID, and that the RAID only really protects against losing entire disks. You can't easily repair single blocks the way a ZFS RAID could. Also, you have a number of file systems you have to balance (which unRaid helps you with, but I don't know how much of that is in this module).
phoronixrly•5h ago
hammyhavoc•5h ago
phoronixrly•5h ago
hammyhavoc•5h ago
The magic of ZFS repairs isn't in RAID itself, IMO, it's in being able to take your cold replicated dataset, e.g., from LTO, an external disk, remote server etc, and repair any issues without needing to resilver, stress the whole array, interrupt access, or hurt performance.
RAID can correct issues, yes, but ZFS as a filesystem can repair itself from redundant datasets. Likewise, you can mount the snapshots like Apple Time Machine and get back specific versions of individual files.
I wish HN didn't limit comment depth as these are great questions and this is heavily under-discussed, but it's arguably the best reason to run ZFS, IMO.
Another way of putting this: you don't need a RAID array; you can use individual ZFS disks and still replicate and repair them. There are no limits on how many replicas or mediums you use, either. It's quite amazing for self-healing problems with your datasets.
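A rough sketch of that replicate-then-heal flow (pool/dataset/host names are made up):

    # snapshot and replicate a dataset to another pool or box
    zfs snapshot tank/media@2024-06-01
    zfs send tank/media@2024-06-01 | ssh backupbox zfs receive backup/media
    # later, a scrub flags corruption and names the affected files
    zpool scrub tank
    zpool status -v tank
    # restore just those files from the replica (copy them back, or send the snapshot back)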
aspenmayer•3h ago
I think it’s actually a flamewar detector that you may be hitting. In any case, next time try selecting the timestamp of the comment which you wish to reply to; this works when the reply button is missing and the comment isn’t [dead] or [flagged][dead] iirc.
tomhow•2h ago
wongarsu•5h ago
Now a fairly common scenario is to use unRaid with ZFS as the file system for each partition, giving you Y independent ZFS file systems. In that case, in theory, the information to repair blocks exists: a ZFS scrub will tell you which blocks are bad, and you could repair those from parity. And an unRaid parity check will do the same for the parity drives. But there is no system to repair single blocks; you either have to dig in and do it yourself or just resilver the whole disk.
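The detection half of that looks roughly like this per data disk (pool names are whatever the single-disk pools are called; the parity side is covered by unRaid's own parity check):

    # check one single-disk zfs pool for bad blocks
    zpool scrub disk3
    zpool status -v disk3   # lists any files with permanent errors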