oofbey•2h ago
It’s funny which things have changed and which haven’t. That server, which was super impressive for 2007, had 4 cores and 16GB of RAM, which is a reasonable small laptop today. The 24TB of disk would still be pretty big, though.
bombcar•2h ago
24 TB is available on a single drive now, though.
temp0826•1h ago
I haven't kept up with spinning-rust drives, so I had to take a look. Seagate has a couple of 30 TB models now, crazy. A lot of eggs in one basket... and through the same old 6Gbit SATA interface these must be a nightmare to format or otherwise deal with. Impressive nonetheless.
wongarsu•1h ago
If you have one drive it feels like that, but not if you throw 6+2 drives into a RAID6/raidz2. Sure, a full format can take 3 days (at 100 megabytes/second sustained speed), but it's not like you are watching it. The real pain is finding viable backup options that don't cost an arm and a leg.
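Rough back-of-the-envelope (assuming a 24TB drive at ~100MB/s sustained, sketched in Python just for illustration):

  # time to write one 24 TB drive end to end at ~100 MB/s sustained
  capacity_bytes = 24e12      # 24 TB, decimal, the way drives are sold
  throughput = 100e6          # 100 MB/s
  days = capacity_bytes / throughput / 86400
  print(f"{days:.1f} days per drive")   # ~2.8 days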
nubinetwork•1h ago
If your drives are only managing 100MB/s then something is wrong; SATA 3 should be at least 500MB/s.
wongarsu•26m ago
SATA 3 can move 500MB/s, but high-capacity drives typically can't. They are all below 300MB/s sustained even when shiny new. Look for example at the performance numbers quoted in these data sheets [1][2][3][4], all between 248 MiB/s and 272 MiB/s.
Now that's still a lot faster than 100MB/s. But I have a lot of recertified drives, and while some of them hit the advertised numbers, some have settled at 100MB/s. You could argue something is wrong with them, but they are in a RAID and I don't need them to be fast. That's what the SSD cache is for.
1: Page 3 https://www.seagate.com/content/dam/seagate/en/content-fragm...
2: Page 2 https://www.seagate.com/content/dam/seagate/en/content-fragm...
3: Page 2 https://www.westerndigital.com/content/dam/doc-library/en_us...
4: Page 7 https://www.seagate.com/content/dam/seagate/assets/products/...
godelski•1h ago
Btw, you can get refurbished ones for relatively cheap too, ~$350 [0]. I wouldn't put that in an enterprise backup server, but it's a pretty good deal for home storage if you're implementing RAID and backups.
[0] https://www.ebay.com/itm/306235160058
hugmynutus•1h ago
If anything the opposite has occurred. HDD scaling has largely flattened. From 1986 to 2014, HDD size increased by 10x every 5.3 years [1]; if that scaling had kept going we should have 100TB+ drives by now. I say this not as a complaint, but there have been direct implications for ZFS.
All this data is stuck behind an interface whose speed is (realistically, once a file system & kernel are involved) hard limited to 200MiB/s-300MiB/s. Recovery times skyrocket, because you simply cannot rebuild parity or copy data any faster. The whole reason stuff like dRAID [2] was created is so that larger pools can recover in less than a day, by doing sequential rebuilds with hot-spare capacity distributed 1/N across every drive ahead of time.
---
1. Not the most reliable source, but it is a Friday afternoon: https://old.reddit.com/r/DataHoarder/comments/spoek4/hdd_cap...
2. https://openzfs.github.io/openzfs-docs/Basic%20Concepts/dRAI... for the concept; for motivations & implementation details see https://www.youtube.com/watch?v=xPU3rIHyCTs
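As a rough, idealized sketch of the rebuild problem (drive speed assumed from the data sheets above; real resilvers have plenty of overhead this ignores):

  # one failed 30 TB drive: traditional single spare vs. a dRAID-style
  # rebuild whose reads/writes are spread across n drives
  capacity = 30e12               # bytes
  per_drive = 250 * 2**20        # ~250 MiB/s sustained per drive
  hours = capacity / per_drive / 3600
  print(f"single spare: ~{hours:.0f} h")      # ~32 h
  for n in (8, 24, 48):
      print(f"spread over {n} drives: ~{hours / n:.1f} h")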
godelski•1h ago
Not quite that level, but you can get 8TB NVMes. You'll pay $500 a pop though...[0]. Weirdly that's the cheapest NewEgg lists for anything above 8TB and even SSDs are more expensive. It's a Gen4 PCIe M.2, but a SATA SSD is more? It gets better one bracket down, but it's still surprising to me that the cheapest 4TB SSD is just $20 cheaper than the cheapest NVMe [1] (a little more and you're getting recognizable names too!)
It kinda sucks that things have flatlined a bit, but it's still cool that a lot of this has become way cheaper. I think NVMes at these prices and sizes really make caching a reasonable thing to do for consumer-grade storage.
[0] https://www.newegg.com/western-digital-8tb-black/p/N82E16820...
[1] https://www.newegg.com/p/pl?N=100011693%20600551612&Order=1
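To put a toy number on why the cache helps (illustrative speeds, not from any specific drive or benchmark):

  # effective read throughput with an NVMe cache in front of spinning disks
  hdd = 250e6        # ~250 MB/s sustained from the HDD pool
  nvme = 5e9         # ~5 GB/s from a Gen4 NVMe cache device
  for hit_rate in (0.5, 0.8, 0.95):
      # time-weighted (harmonic) mix, since each byte is served by one tier
      effective = 1 / (hit_rate / nvme + (1 - hit_rate) / hdd)
      print(f"{hit_rate:.0%} hits -> ~{effective / 1e6:.0f} MB/s")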
wtallis•9m ago
> Weirdly that's the cheapest NewEgg lists for anything above 8TB and even SSDs are more expensive.
Please don't perpetuate the weird misconception that "SSD" refers specifically to SATA SSDs and that NVMe SSDs aren't SSDs.
ghshephard•1h ago
Nowadays most cloud customers jam their racks full of absolutely vanilla (brand is mostly irrelevant) 128 vCore / 64 physical core, 512 GByte servers for ~$18k, with 100 Gbit NICs. That Sun 4500, maxed out (with 10 Gbit NICs), sold for $70k ($110k in 2025 dollars).
What's (still) super impressive is the 48 drives. Looking around, the common "storage" nodes in racks these days seem to be 24x 24TB CMR HDDs + 2x 7.68 TB NVMe SSDs (and a 960 GB boot disk). I don't know if anyone really uses 48-drive systems commonly (outside edge cases like Backblaze and friends).
davekeck•1h ago
Surprised that folks were still using RealPlayer in 2011
zenmac•14m ago
Can someone here on HN with more in-depth knowledge about ZFS comment on why it is superior to ext4, for example, for file storage? Does each dir handle more children, for example?
Last time I read about it here on HN, ZFS still seemed to have edge-case bugs. Has it matured now? Why don't distros such as Debian just ship ZFS as the default instead of ext4?
ksk23•8m ago
I can only imagine the ZFS license is not free enough for Debian (not a rant).
CTDOCodebases•1h ago
https://www.seagate.com/au/en/innovation/multi-actuator-hard...
https://www.youtube.com/watch?v=5eUyerocA_g
CTDOCodebases•1h ago
There are 122TB SSD drives now, though.
UltraSane•34m ago
nick__m•50m ago