Synology Lost the Plot with Hard Drive Locking Move (servethehome.com) https://news.ycombinator.com/item?id=43734706
They must've had a massive brainfart in the management at that company.
Because I don't want to support them.
You're telling me that Synology is giving out Apple levels of support in exchange for vendor lock-in? That sounds like the sort of thing I recommend to others because it won't be my problem.
Go ask a "car guy" who has a civic or something that is LS swapped what car you should get. He's not going to recommend anything he is going to buy... he's gonna tell you to go get a bog standard Toyota so it isnt his problem. Meanwhile he has the fun, project car that does cool things but he's always fiddling with.
Synology isn't for you any more... They want to be Toyota or Apple or something, not for nerds!
This is still a very "nerdy" take on the market. Though correct, I don't think you're seeing the other segments that are out there:
The "I want more storage im sick of paying rent every month" crowed is growing.
The designer/editor/youtuber who doesn't want to be their own IT department is growing.
The recent HDD drama is death for Synology's consumer appeal, but I imagine they'll carve out a mid-market/small-business segment for themselves.
The thing is, the place they're moving to is a little dangerous. SOHO and SMB setups using 4-12 HDDs to serve a couple dozen people are a very small niche. Plus you can add professional photographers and videographers on top.
Then what? The upmarket is very, very crowded. Will they OEM their wares to big players as entry level devices?
And probably in that niche too, once people realize how cheap used hardware really is.
Get 6 boxes, daisy chain them as 2x3, and connect them to a powerful-ish NUC box. Install TrueNAS on it. Use the SATA port for the OS, leave the NVMe slot alone, add a good 2-4 TB SSD.
Set the SSD as a cache for that 30-disk RAID-Z2 or RAID-Z3 pool. You can have a kick-ass enthusiast-level NAS box which will beat many Synology boxes with a big clue bat...
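For reference, the ZFS side of that is just a couple of commands. A sketch with placeholder pool and device names, showing a single six-disk RAID-Z2 vdev where the real 30-disk build would use several:

    # placeholder device names; a real 30-disk build would use several vdevs
    zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
    # add the big SATA SSD as an L2ARC read cache for the pool
    zpool add tank cache /dev/sdg
    zpool status tank   # verify layout and health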
1) Established players are all overpriced and focus on value extraction, not customer service
2) By actually helping your customers and providing good solutions at an affordable price, you can quickly grow to be a big player in the space
3) Now that we are a big player, we could be making big bucks by squeezing the customers who can't easily switch away
4) Established players are all overpriced and focus on value extraction, not customer service
Long story short, I'll be buying an ASUSTOR AS6804T, and if I don't like the software, I'll just install TrueNAS on it. It's not only officially supported, they have a full length video showing the process. They don't provide tech support, but eh.
Icing on the cake? The eMMC storing the original firmware sits on its own USB port, so you can disable that port, which both disables the stock firmware and protects it from being overwritten.
If you want to return to the original firmware, enable the port, remove the TrueNAS SSD, and voilà!
However, I need to back up a lot of things and ensure that they don't bitrot: a decade-old photography archive, meticulously ripped CD libraries, a full cloud storage backup, etc. Plus I don't want to dig through disks to get a single file which I don't want to put on somebody else's computer (i.e. cloud storage).
This needs a two-tiered solution: a flash-based hot-data area for the running daemons and a spinning array for backups. Both in RAID (to be able to scrub and repair bitrot).
The problem is, I'm a sysadmin. I see & use big storage systems and know what they are capable of. I want the personally useful subset of this at home. Plus I want to make it accessible to other people at home, so their files will be safe, too.
This means at least TrueNAS and 4-6 disks to begin with.
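The scrub half of that is easy to automate. A sketch with hypothetical pool names:

    # /etc/cron.d/zfs-scrub -- monthly scrubs catch and repair bitrot
    # while the data still has redundancy to repair from
    0 3 1 * * root /usr/sbin/zpool scrub hotpool
    0 3 2 * * root /usr/sbin/zpool scrub coldpool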
It looks like Deadbolt also hit QNAP and Terramaster.
Sort of. Accessible via Asustor's own software which they'd been promoting to users, which I'm pretty sure had some kind of hole punching / bridge node setup so that you could use it even if you were blocking all inbound connections to your NAS. Obviously if you completely disconnect it from the internet in both directions then you're safe (but also can't get updates etc.)
Sure, the long-term reputation is severely damaged, but why would decision makers care? Product owners' interests are not aligned with the interests of the company itself. Squeeze the customer, get your minuscule growth, call it "unlocking value", get your bonus, slap it onto your resume and move on to the next company. Repeat until retirement.
America has thousands of food brands but they're all owned by about 6 companies.
Serving the needs of customers (in practice, the quality of the product) sits low on the list of importance. Sales strategy, marketing, PR, organizational culture, company values, ..., basically the self-serving measures all come before that.
Better to have a heart, care more about your customers, don't put profits first, but still make enough to keep the lights on.
I think that would make everyone happier anyways.
Leaving products and commerce coupled is not considered good practice anymore. It's recommended in some places that you outsource so extremely that your outsourced labor renders services to other outsourced labor. And that's not considered insane.
I learned a lot in the process, but most important is that the special sauce NAS makers purport is usually years behind current versions.
The NAS finally bit the dust last year because of a design defect associated with Bay Trail systems that’s not limited to QNAP.
I would not be surprised to find out that Synology is seeing a smaller market year over year and becoming desperate to find new revenue per person who is shopping for a NAS today.
I’m in the latter group but Synology has locked themselves out of the market with this choice.
Uploading terabytes of content to the consumer cloud just isn’t practical, financially.
With the size of data we're dealing with, loading everything from cloud all the time would slow analyses down to a crawl. The Synology is networked with 10G Ethernet to most of our workstations.
The other is to fuck engineering. Sell what we currently have, for as long as we can, as expensively as we can, and do not spend on engineering. That is only taking away the money! We can put on some AI glitter to dazzle, but that's it. No one knows what AI is in this narrow field anyway; we can position ourselves as revolutionary inventors for anything weird or new. Some will eat up this s*t for sure. Short term is paramount!
But! That doesn't matter, most users are never going to be able to do that themselves, and DMCA protections potentially prevent anyone sharing knowledge of how to do so without putting themselves at risk. The truth is that vendors can, under US law, threaten anyone who tells someone how to make the device they bought work properly with federal offences. Buy something else instead.
(Edit: I have a very particular set of skills. Having put some time into making this work with tools I could put together myself and failing, I found that my Synology had a tool that did it perfectly and refused to do so for the number of cameras I had. I fixed that.)
Panicked, built a full-ass Fractal 804 case + Unraid setup to replace it.
Was looking around for That Guy who mails around a Synology box so I could get my data out and stumbled on a forum post(!) that said the external PSU just fails subtly sometimes. It gives enough power to start booting and then fails.
Bought a 3rd party PSU from Amazon and the Synology boots up.
Now the 918+ lives as an off site backup at my parents' house =)
And they clearly knew how to fix it at this point as the support in other countries DID fix people’s devices. Luckily, the Internet did its thing and I was able to solder in the missing resistor myself.
But that was the moment when I decided that the next device won't be a Synology again.
It is an easy fix (I had to do it too) but I agree Synology's poor support makes this the last of their products I'll use.
I have my NAS on a shelf in a mini-ITX case, but it only fits two 3.5" HDDs internally (as well as an SSD, but full-size HDDs are what matter for bulk data storage, the more the better).
Also, it takes a normal full-size ATX PSU because I was fed up with a previous case that only had room for its own custom PSU, which kept failing under load. But I note there are now standardised small sizes like TFX12V and LFX12V; are there any efficient and reliable PSUs in these form factors?
[1]: https://www.printables.com/model/866109-200mm-fan-front-for-...
Go to your favorite computer parts retailer website. Go to the Computer Cases category. Filter by desired number of 3.5" bays. Pick from the lot.
Regarding mainboards - models from CWWK with lots of SATA ports have been trendy lately, but there are reports of problems. The other options are either using some obscure Supermicro mainboards with lots of ports or using an HBA for expansion.
I want to mention a possible middle ground here: UGreen NAS Storage. All but the smallest model come with the OS on a separate M.2 drive. If you disable the watchdog in the BIOS, you can use the models like a normal server. This would give you:
* 3x M.2 slots
* 4, 6 or 8 SATA bays
* N100 (4 bay), Pentium Gold 8505 (4 bay), i5-1235u (6 & 8 bay)
The M.2 slots are connected rather slowly, but are good enough for OS/app drives.
For example, my plan for the next NAS would be the 4-bay N100 variant with TrueNAS: one M.2 SSD for boot, two M.2 SSDs mirrored for apps/server duties, and the 4 drives in RAID-Z1.
It requires a bit of tinkering, but the idea of taking a 1L-format computer and turning it into a multi-disk NAS is quite attractive.
However, once my DS415+ dies, I’m currently more inclined to go with a TerraMaster F4-423 NAS and replace their OS with something else. I’ve read that this TerraMaster model is basically an Intel NUC with a SATA card. And their OS is on a flash drive plugged into an internal USB port - so, very easy to change/replace.
I’ve also read that UGREEN devices should be easy to replace the OS on. So, that’s another option I keep in mind.
It has 8 hot-swappable SAS bays (also SATA compatible) and I run a Ryzen 9 3900X in 65W eco mode on an AsRock Rack X470 board which has another 8-12 SATA ports (can't remember the exact number, not used because I use an HBA for the bays), so connectivity for storage is high. There's 2 spaces for SATA SSDs on top of the drive bays and you could fit more in various spots if you tried, and 2 NVMe slots on the motherboard.
Also got a single-slot nvidia GPU in there and a 4-port Gb NIC to supplement the 3 existing Gb ports on the board itself (one is dual-purpose for IPMI), some models of the AsRock rack have dual 10G ports too.
It runs most of the time at around 90W which I think is exceptionally low for the performance available, and can go to about double that when the GPU is in use, still very reasonable.
Should save a lot on power and have plenty of muscle for anything you throw at them if you're willing to gamble on the hardware quality.
I'm happy to see it—looks great, it's priced insanely well, and I can see myself switching from Synology in the future.
In other news, I've been a fan of LucidLink[2] for a while, which you can use to avoid needing a NAS for video editing workflows, and a very slick competitor finally came onto the scene[3]. LucidLink totally works, but their software is frustratingly idiosyncratic.
These services offer some kind of chunked file streaming magic that lets you progressively download pieces of video files as you need them.
I was somewhat surprised to discover, however, that there doesn't appear to be an open source project that provides this functionality.
Anybody know of anything? And I wonder if anyone's looked into it and knows how it works?
[1] https://store.ui.com/us/en/products/unas-pro
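Re: how the chunked streaming works: no idea what LucidLink does internally, but the obvious building block is plain HTTP range requests behind a FUSE-style filesystem. A toy Python sketch of just the fetch side (URL and chunk size are hypothetical):

    import requests

    CHUNK = 8 * 1024 * 1024  # fetch video in 8 MiB pieces

    def read_chunk(url: str, offset: int, length: int) -> bytes:
        """Download only bytes [offset, offset+length) of a remote file."""
        headers = {"Range": f"bytes={offset}-{offset + length - 1}"}
        resp = requests.get(url, headers=headers)
        resp.raise_for_status()  # a range-aware server answers 206 Partial Content
        return resp.content

    # Scrubbing to a timestamp only needs the chunks covering that byte
    # range, not the whole multi-gigabyte file.
    data = read_chunk("https://example.com/footage.mov", 3 * CHUNK, CHUNK)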
If I want to work on one of these old projects, I have to download it locally so 4K editing works.
Meanwhile my old projects back when I used different software are impossible to open.
I have spent days setting up all this junk, HATE the Synology UI, and regret it all.
What’s the better solution? Just a bunch of RAIDs that I connect to with USB??
If you're local to your equipment (and can afford it), 10G local network with UNAS Pro. Search YouTube for "unas pro video editing" and there are various people discussing their setups. In this setup, your connection to the NAS is fast enough that file transfer speeds aren't such a problem, and the NAS software is nicer to deal with.
And, I know less about it, but you might want to investigate: https://www.blackmagicdesign.com/products/blackmagiccloudbac...
Finally, check https://www.reddit.com/r/editors/. Lots of good threads there.
I'm more shocked by the state of samba in macOS (without additional software). Having to go to the network and manually reconnect to the NAS share every time I come back home is ... poor.
To get my mini to power up, connect the SMB share, and then start some containers, I made a horrific Automator app which runs a script that just tries, sleeps, then tries again until my containers can boot and access their data. It's disgusting. But it works.
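Presumably something in this shape (a sketch; share and path names are made up):

    #!/bin/bash
    # keep retrying until the SMB share is mounted, then start the stack
    until mount | grep -q "/Volumes/nas"; do
        osascript -e 'mount volume "smb://nas.local/share"' 2>/dev/null
        sleep 15
    done
    docker compose -f "$HOME/stack/docker-compose.yml" up -d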
I have bought a used DS920+ with 20GB of RAM - still a perfect combo for transcoding and Docker. However, since I started discovering the world of self-hosted apps, Synology has no unique selling point anymore. Their apps have stalled in innovation, and with this drama I would go for some dedicated Linux hardware with Docker and that's it. Most of the data fits on a simple 2-drive NAS today anyway.
When I outgrow my DS920+, I'm probably gonna build a custom Unraid machine to replace it. Most of my needs from Synology are being able to run Docker containers and mix-and-match drives.
The weird quirks of Synology Docker are painful, e.g. containers that won't stop or won't start. It's not easy to get into the containers (docker exec), and recreating them is tricky compared to copying and pasting a docker-compose.yml.
Personally, I mainly use the CLI to manage my Compose files even on Synology DSM.
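A minimal sketch of that workflow (example image and paths; /volume1 is where DSM keeps user data):

    # docker-compose.yml
    services:
      web:
        image: nginx:alpine
        ports:
          - "8080:80"
        volumes:
          - /volume1/docker/web:/usr/share/nginx/html:ro
        restart: unless-stopped

Recreating a misbehaving container is then just `docker compose up -d --force-recreate` over SSH, instead of clicking through the container UI.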
Thank you.
On products you can buy TODAY, you find:
- Their Btrfs filesystem is a fork of a very old branch and doesn't have modern patches
- A custom, non standard, self built, ACL system for the filesystem
- Kernel 4.4
- PHP 7.4 (requirement for their Hyperbackup app)
- smbd 4.15
- PostgreSQL 11.11
- sshd 8.2p1
- Redis 6.2.8
- ...
They claim it's OK because they've backported all security fixes to their versions. I don't believe them. The (theoretical) huge effort needed for doing that would allow them to grow a way better product. And it's not only about security, but about features (well, some are security features too). We're missing new kernel features (network hardware offload, security, WireGuard...), filesystem features (btrfs features, performance and error patches...), file server features (new features and compatibility, such as Parallel NFS or Multichannel CIFS/SMB), and so on...
I think they got stuck on 4.4 because of their btrfs fork, and now they're too deep in their own hole.
Also, their backend is a mess. A bunch of different apps developed in different ways that mostly don't talk to each other. They sometimes overlap with each other and have very essential features that don't work and that they don't plan to fix. Meanwhile, they're busy releasing AI features for the "Office" app.
Edit note: For myself and some business stuff, I have a bunch of TrueNAS deployments, from a small Jonsbo box for my home, to a +16 disk rack server. This was for a client that wanted to migrate from another Synology they had on loan, and I didn't want to push a server on them, as they're a bit far away from me, and I wanted it to be serviceable by anyone. I regret it.
Edit: what they deploy on their NAS is an old version of a testing implementation of the KMIP protocol. PyKMIP: https://github.com/OpenKMIP/PyKMIP
As for full disk encryption, you can select where to store the key, which may be on the NAS itself (rendering FDE more or less useless) or on a USB key or similar.
As a KMIP server you use:
- Another Synology NAS with DSM >= 7.2
- A KMIP compatible key server
Except for the demo implementation that Synology uses (PyKMIP), all the KMIP-compatible servers I've found have licenses costing tens of thousands a year. So if anybody has any suggestions for a PyKMIP substitute...
0: https://kb.synology.com/en-global/DSM/tutorial/Which_models_support_encrypted_volumes
The DSM itself lives in an unencrypted partition or volume. Applications with data in encrypted volumes will be inaccessible until the volumes are unlocked.
As usual, there is an easy workaround. You can run a KMIP server in a docker container and set up an external keystore. Once synology allows you to proceed with volume encryption, you can discard the KMIP server if you want and use the recovery keys.
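For reference, PyKMIP ships a standalone server you can run in a container. A sketch of its config, using the option names from the PyKMIP docs (certificate paths are examples; double-check against the version you deploy):

    # /etc/pykmip/server.conf
    [server]
    hostname=0.0.0.0
    port=5696
    certificate_path=/etc/pykmip/certs/server.crt
    key_path=/etc/pykmip/certs/server.key
    ca_path=/etc/pykmip/certs/ca.crt
    auth_suite=TLS1.2

Then point DSM's KMIP settings at that host and port.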
I went down the rabbit hole and implemented a KMIP client and server that pass the tests from OASIS.
Sidenote: please, somebody nuke the OASIS from orbit. To be sure.
Not to defend Synology, but popping a drive out of the NAS so that it won't be noticed (or noticed much later) is a much easier way to steal data than carrying off the whole NAS. I assume they're guarding against the kind of scenario where an employee steals drives rather than ski-masked thieves breaching the office and making off with the NAS.
The primary value of disk/volume encryption is actually for scenarios like end-of-life replacement, RMA, failure and disposal - even if someone later reconstructs the disk sectors, the bits remain unreadable. This is one layer of defense in depth, not a substitute for physical security.
Synology also supports KMIP, which I see addressing two situations:
1. Data center key governance and media mobility - Multiple hosts (including spares) can use KMIP for centralized key management, improving the mobility of drives within the data center and reducing the operational cost of moving drives between machines. When decommissioning hardware, keys can be revoked directly in KMIP with an audit trail.
2. Edge/branch sites with weaker physical controls - By using KMIP, keys are kept in the more secure data center rather than on the edge device itself. The edge hardware stores no keys, so if an entire machine is stolen, it cannot be unlocked, preserving confidentiality.
I had an issue where mine would randomly start writing to disk like crazy and maxing CPU usage, to the point that I was bothered by the noise. I'd stop all containers, leave it as close to idle as I could manage, and it was still spiking.
There was no way I could learn what was causing it.
I would like to assume it was a disk maintenance process or something, but for all I know it could be mining bitcoin and I’d be none the wiser. It went on for some weeks then stopped.
Mine is in the basement for this reason. When it’s still and quiet after midnight I can still hear it grinding away. God I hate the sound.
May or may not be what you encountered, but I had a customer caught by this and found out the hard way that you can't stop it. My issue is not the processing, it's the throttling; it's so crazy how the entire NAS gets taken down for like ten minutes (and that was on a racked Xeon model), with no Samba, no NFS, nothing answering anymore.
And yes, the lack of throttling is an issue, since you can't even reach the administration panel. When it's bad, even SSH struggles.
FWIW the new Ugreen NASes run Debian. I don't know a ton about it, but it'd be great if they could stay a little more up to date. This Synology story with ancient forks & weird encryption sounds truly bogus.
I will say that the Ugreen NAS seems to offer more performance for fewer watts, so it's definitely something I will keep an eye on in the future if it pops up on eBay.
> This Synology story with ancient forks & weird encryption sounds truly bogus.
It's not. My Synology is running Linux kernel v4, and I opted to use their "SHR" RAID configuration and can confirm that it's some weird BTRFS variant that is likely deadlocked due to the kernel.
The encrypted volumes I've made also look very much like the eCryptfs files I've been seeing on other setups.
I'm currently in the process of mainlining it to kernel v6 to reap the better power and idle/hibernation rewards, as well as just using a standard ext4 FS with updates.
SHR is mostly MD-RAID and LVM, and works with ext4 too.
Over time their advantage has eroded as upstream has caught up, to the point that it looks ridiculously out of date today.
But don't you love it when companies invent their own security instead of using battle-tested open-source systems?
It's confusing me after the preceding displeasure wrt Synology
They already had one Synology device, they don't have any IT employees on site, and I'd need to take a flight to go to their offices, so I thought that using another Synology device would be better for maintenance. They (and I) were also worried about the noise: it's a small office, they needed at least 8*3.5" drives, and most of the decent solutions I found for 8 or more drives were big and noisy. The Jonsbo N5 appeared a bit later; it looks like a good candidate today.
Now I found that all their applications are half done, they don't upgrade or fix them regularly, security-wise is a mess, and everything on the backend is super old...
My DS918+ has multichannel SMB and possibly also parallel NFS. It only works if you have multiple NICs connected.
Other than that, I completely agree. Their tech stack is horribly outdated, and while I understand their reasoning for not upgrading, there's a limit to how long you can do that. Their reasoning is that they know the software that's currently running, warts and all, and can better guarantee stability across millions of devices with fewer moving parts.
I've a Ryzen Embedded system with lots of RAM as my NAS box and a small Intel N-series based system as my Plex server that pulls media off the NAS box.
(You might want to upgrade your transcoding box to a newer generation processor that supports, say, AV1 encoding.)
And FWIW my Ryzen Embedded system isn't especially low-power by design, it was just the most accessible way of getting ECC memory for me.
Can do the same with various GPUs, but Quick Sync tends to be the lowest-power and most well-supported at the software level.
This breaks both the 'store key locally' and the KMIP setup.
And for their file-based encryption you cannot change the password. You need to create a new folder with a new password and copy all files over.
- one device died, was EOL at that point, and newer ones can no longer read the disks
- stupid limits on array size. Depending on your setup, adding disks can mean "copy everything off, delete the arrays, and then create new ones". Also, want one 200TB array with your disks? Depending on the model you'll have to do multiple arrays instead, with somewhat to way lower capacity
- syncing a share to another instance is broken, with pretty much no useful debug information. Already the setup is stupid (it doesn't let you select which array it goes on on the target machine), and then it seems to change the access permissions of the sync user on the target box (i.e., you can do one sync, after that you'll need to reset the access permissions). I wanted to avoid writing my own sync script, but it seems I'll have to in the end
- stupid disk compatibility warnings (which currently you can disable if you have SSH access)
- WireGuard only via third-party addons. It's 2025; I didn't even check beforehand whether those things can do WireGuard - it didn't occur to me that a device sold nowadays might not be able to do that
While debugging I also noticed that pretty much every software component is from the stone age.
There are NO low power NAS boards. I'm talking about something with an ARM CPU, no video, no audio, lots of memory (or SODIMM slot) and 10+ SATA ports.
Sure, anyone can buy a self-powered USB3 hub and add 7 external HDDs to a Raspberry Pi, but that level of performance is really, really low, not to mention the USB random disconnects. And no, port replicators aren't much better.
[1] https://lowendbox.com/blog/are-you-recyling-old-hardware-for...
I want less than 10W idle for the whole system, maybe except HDDs, but even those will be in sleep much of the time. x86 boards are mostly ATX-powered and I don't think any ATX power source can go that low and still be efficient (not draw 20W while powering a 10W system).
And yes, mobile phone CPUs are good enough. I'm using a Turris Omnia now and Marvell 385 is OK, except I have to use an external DAS for hard drives which eats 10 times more than the Omnia with all drives sleeping.
If only the Chinese didn't try to make good-for-everything-best-at-nothing ARM boards with lots of video outs, audio, a discrete NIC and soldered WiFi...
1 HDD consumes around 5-7W idle, so with 8 HDDs you get to 40-60W on HDDs alone (all idle); adding 6W for an N100 seems like an insignificant fraction. The moment you actually use any HDD, the consumption per HDD shoots up to 8-10W, whereas the N100 shoots up to 14W, so you end up with 64-80W from HDDs and 14W from the N100. Why would you squeeze the component that is the least important (the CPU) while retaining lots of SATA HDDs, given that's your priority? Optimizing the wrong thing? If you wanted to lower power, the easiest way would be to replace the HDDs with 16TB SATA SSDs, each consuming 0.08-2W. Then the CPU might be a bottleneck.
For my typical usage, the hard drives are probably more than 80% in sleep mode. If I had more SATA ports, I could probably add a frequent-access cache on a SSD and then they would be 99% sleeping.
The drives I have, ST2000DL003, consume 0.5W in sleep, according to the spec sheet. So all 8 of them would consume ~4W.
You don't need a NAS for 16TB, you just need a RasPi with a 16TB USB HDD connected to it and a second one for backups that you keep mostly offline.
But you're right. In a few years it will become advantageous to switch to a couple of larger HDDs. I could probably do it right now, but I don't yet trust these new drives as much as I trust the old ones, especially since the refurbished scandal.
With a mirror of 2 disks, if one disk dies, you can still read; if two disks die, you're toast. And you lose 1 of 2 = 50% of capacity on redundancy.
A quite different balance.
If you look at some industrial boards (e.g. from ASRock) they're DC-powered. I haven't actually measured the power draw on mine - I'll try to remember to do so next time I power cycle it.
An extra 10W being consumed around the clock would cost 24 h/d * 365.25 d/yr * 0.01 kW * 0.50 $/kWh = $43.83 per year. Indeed, saving 10W would save enough in 10 years to buy a whole new NAS! (Sans the disks.)
To save some clicks:
https://nas.ugreen.com/
https://www.minisforum.com/pages/n5_pro
What do you guys think about security concerns around Minisforum and UGREEN being Chinese companies?
https://www.nbcnews.com/tech/security/china-used-three-priva...
- your NAS should not be allowed to talk to the outside world
- you should wipe any pre-installed software and install your own OS
- Max 48GB*2 DDR5 ECC
- 8 core PRO 8845HS
- 25W with nothing, doing nothing, realistically 50W
- 25G combined network
- 5 M.2 (3x2 and 2x1 lane) and 6 HDDs
- Oculink
https://aoostar.com/blogs/news/the-aoostar-wtr-max11bay-is-a...
Well that's certainly a claim.
It took me a week of fighting to get it to reliably power up, connect to network shares and then start some containers.
How could this be hard?
If it can't run linux, it's not going to make a good storage server on the software side of things.
I used a Fractal Node Case that has 6 drive bays. Installed TrueNAS Scale on an SSD. Swapping drives is a pain as I have to take the computer apart. But that is infrequent. So it is fine.
10W is... nothing. There are only very specific cases where it's worth picking hardware based on this constraint, and unless you're on a solar-powered sailboat or something similarly niche, you probably shouldn't be prioritizing this.
In my region, 10W comes out to about 0.90 USD/month, or roughly 3 pennies a day.
Over the entire lifetime of the device (5 years assumed) it's less than 50 USD in power costs.
I'll take basically any other quality of life improvement instead...
10W constant over 10 years would cost me 275 euro. My hard drives (7 ST2000DL003 and one ST2000DL001) are 10-15 years old now. They're all different batches, none failed yet, so I expect it to last at least 5 more years, possibly a lot more.
The current NAS setup I have (router + DAS + 2 USB) is around 50W. Over 20 years it will cost me ~2800 euro in today's prices. So you see, the electricity is a very significant portion of the TCO. In fact, it's more than half, because I bought everything second-hand.
I feel like you are making your setup more complicated than is even worth and searching for a weird solution when all you need is a RasPi with a big 16+TB USB HDD connected to it.
€2.30 per month. Which is almost nothing.
> Over 20 years it will cost me ~2800 euro in today's prices.
That is €11 a month over 20 years. So the extra 10 watts is still next to nothing, even taking into account your exaggerated time frames, which I am dubious are realistic.
I am certainly not going to worry about €11 a month.
3k over 20 years is literally nothing.
It's lost change in the couch compared to almost any other financial decision you could make.
That is extra 10 watts, is less than £2 a month in the UK. Drives are about 5 watts idle and I have 6 of them.
The NAS costs me about £20/month. Which isn't too crazy IMO. The UK has some of the most expensive energy prices in Europe.
I will probably be upgrading the board to something better in a few years and see if I can put in a GPU for some AI bits and pieces.
Just to add a datapoint: I could also get a 25Gbps connection in Switzerland. Actually, checking that, I'm realizing that I could upgrade without paying more (maybe just a setup fee, less than 50USD).
Plus, we're most likely talking about Gigabit networking here, so unless your workload consists of very parallel random access, this is going to be the limiting factor anyway.
Can ZFS do this today?
The closest thing available now would probably be a Radxa ROCK 5 ITX+, a motherboard with a Rockchip SoC and two M.2 slots, into which you could put their six-port SATA cards. No idea what that whole setup will draw, though.
EDIT: I have to complain about the article you linked. It's certainly true that one should account for power consumption, not just purchase cost, but some crucial mistakes make the article more harmful on the whole.
The author cites 84 W power consumption for an i5-4690, and 10 W for a J4125 CPU, but those figures are the TDP. For all we know, those CPUs could idle at around the same wattage, and from my experience they likely do.
Having done some measuring myself, I'd say the largest source of power draw in an idle NAS will be the PSU, motherboard, and disks. With any remotely recent Intel CPU, it idles so efficiently as to be negligible in a PC.
I have a Synology DS920+ 4-bay that averaged 20W total including 2 spinning drives with sleep disabled. I agonized about going with the closed product, and in many ways regret it. But at the time there was nothing I could find that came close, even without the drives. And that's before factoring my time administering the DIY option, or that it would be bigger and less ruggedized.
I went as far as finding the German low power motherboard forum spreadsheet and contemplating trying to source some of the years old SKUs there. You've gotta believe us when we say that before the n100s arrived, it was a wasteland for low power options.
In many ways it still is, although these n100 boards with many SATA are a sea change. Once you set out to define requirements, you pretty quickly get to every ZFS explainer saying that you are a fool to even attempt using less than 32 GB of ECC memory...
The Intel N100 etc. series of machines seems popular with builders, even if the RAM restrictions drive me nuts.
I think the major issue is actually cases; there are tons of small cheap AMD machines from manufacturers like BeeLink that trounce most NUC setups for performance, but like the NUCs, as soon as disk enclosures are involved the price shoots away.
There are definitely low-power ARM boards with PCIe lanes. Typically it's NVMe, but you can adapt that to 4x PCIe 3.0, which is a lot of bandwidth for HDDs. Not sure why you need a lot of memory for a NAS, though, but they do have boards with 32GB of memory.
What's wrong with this?
https://www.amazon.com/Radxa-5B-Connector-Computer-32GB/dp/B...
And connect a card like this to the NVMe PCIe which you can connect 8 SATA HDDs to with SATA breakout cables.
https://www.ebay.com/itm/155007176276
If you need more than 8 HDDs you can get a SAS2 expander to connect to the SAS2 card and then you could easily connect 24 HDDs with a 6 port SAS2 expander and breakout cables.
Or if you put this small board and card into a server case that has a SAS2 backplane with expander built in, then you can just connect all the disks that way.
Another option, not ARM, but still low power and neat.
https://www.lattepanda.com/lattepanda-sigma
This has Thunderbolt 4 which you can connect to a PCIe slot like this:
https://www.dfrobot.com/product-2832.html
They have a lot of neat stuff, you can get the tiny LattePanda Mu, and dock it in this:
6.1 Electrical Characteristics
The maximum power requirements for the LSI SAS 9200-8i HBA under normal operation are as follows:
PCIe 12.0 V = 0.74 A
Power
— Nominal = 7.92 W
— Worst Case = 13.20 W
Seems like it uses just a little more than 1 large capacity HDD.
I can't believe people are worrying about something less than 10 watts.
10 watts in constant use for a whole year is like $12 at the average electricity cost in the US.
I don't even let my HDDs sleep; the constant spin-up and spin-down and the temperature cycling associated with it put way more wear and tear on them and would cause them to fail quicker, and replacing them is way more expensive.
I have 20 HDDs connected to one of these SAS2 controllers in my home server.
I calculated that over the entire lifetime of my current system, the energy will cost me more than the system itself, all 10 HDDs included, and it's only 50 W or so.
The need for low power NAS boards is an entirely different matter. And so is advice on how to get rich.
For 10+ SATA ports you might as well get an x86 motherboard, as it's going to draw lots of power anyway.
Unless you plan to power down most of the drives most of the time
I do. Read my other comments.
To run 10+ SATA drives you'll either need to take a lot of care that they're not spinning up at the same time or being used in parallel, or you'll need to dimension your power supply to cope. A beefy power supply will have a higher idle draw, making the goal of getting the whole system to idle down to 5W pretty hard.
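If you do want most drives asleep, the standby timeout is a single hdparm flag per drive (device name is an example; per the hdparm man page, values 1-240 are multiples of 5 seconds):

    hdparm -S 240 /dev/sdb   # spin down after 20 minutes idle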
Why? There is no evidence that ARM is the only power efficient CPU. i5, i3 and n100 are all power efficient.
> no video, no audio
Why? Disable onboard video if you care that much.
> lots of memory (or SODIMM slot) and 10+ SATA ports
This eats power, conflicting with the rest of your requests.
> Sure, anyone can buy a self-powered USB3 hub and add 7 external HDDs to a raspbery, but that level of performance is really really low, not to mention the USB random disconnects. And no, port replicators aren't much better.
No, that's not what you do for a power efficient NAS. You build an i3, i5 or n100, turn off all unneeded peripherals, and configure bios as needed to your level of desired power consumption. under 10W is achievable.
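For chasing that number down, powertop is the usual tool on Linux:

    powertop              # shows C-state residency and the worst wakeup offenders
    powertop --auto-tune  # applies its suggested runtime power-management settings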
They are, but the motherboard is not, or at least not as much as an ARM board.
> Why? Disable onboard video if you care that much.
And it would boot... how? AFAIK, no UEFI system is capable of booting headless and very very few BIOS systems were.
> No, that's not what you do for a power efficient NAS. You build an i3, i5 or n100, turn off all unneeded peripherals, and configure bios as needed to your level of desired power consumption. under 10W is achievable.
I very much doubt that. N100 maybe, just maybe, could go lower than 20W if the power source is very very efficient, but I haven't seen any system with 10+ SATA ports. The commonly suggested solution here, to add a server SAS/SATA controller, would double or triple the idle power.
The N100 on its own can go quite low - my N305 system idles around 5W. (But laptop, so zero SATA ports.)
The particularly jarring thing in this article is the SMB concurrency limits. Those effectively gate your scalability in terms of storage. Even more than forcing their own drives to be used, the concurrent user limit is a clear enterprise upsell: charge people to get a higher limit. The byproduct, of course, is that elaborate home lab connections or setups will also be hit by this.
UI isn't without their own faults, but allowing their UniFi OS to run on grey boxes has improved my opinion of them further.
https://taoofmac.com/space/blog/2024/12/26/2330 (includes all the steps I took to run Proxmox on it as well as an overview of their standard feature set and BIOS)
If you've got a QNAP, you can install Debian 10 on some of them <https://www.cyrius.com/debian/>
If you've got a Synology, it has been done on some older devices as well <https://wiki.debian.org/InstallingDebianOn/Synology>
So all is not necessarily lost, and I have one of each so will need to do some experiments!
A lot of the alternatives being proposed are not so easy to maintain. A full general purpose OS install doesn't really take care of itself. And I don't have (and don't want) a 19-inch rack at home. Ever.
So what's the set-up-and-forget-until-it-gets-kicked-over option?
I came to Synology after years of managing regular Linux (Debian) servers and then Unraid.
Synology was the most expensive thing I’ve used but I also _never_ think about it. The same could not be said for previous setups.
I want a stupid-easy NAS: plug-and-play, hot-swappable bays. I'm not interested in having to shut down a tower and open it up to swap/add drives.
I have two 12-bay Synologys and I haven't found an equivalent product yet (open to options).
How often do you actually do this? In 15 years of running my own NAS boxes, I so far had to do it once. I, of course, choose slow, middle of the range disks.
It's partly the annoyance of being in a cramped space filled with drives, but also the downtime. Family and friends use self-hosted software that runs on my main server and uses the NAS(es). Shutting down a NAS means shutting down the software. Yes, I can let them know ahead of time and "schedule downtime", but I dislike needing to do that unless I have to. It just makes the system feel less stable for them, and I want to provide a great experience.
With hotswap bays externally accessible I don’t have to stop anything. I’d actually be fine with UnRaid (I still run it as my “app” server) if there was a case with 12 or more 3.5” hotswap bays.
I looked into large JBOD servers but between fan noise and rack mounted servers often being in a whole other class (one I have much less experience in compared to desktop tower builds) I’ve never been able to convince myself to get one.
My Synology 12-bays are quiet and easy to work with so even at ~$1.5K they were a steal in terms of maintenance and upkeep (for me).
It's a great backup for all your important files.
There are also 3D-printable cases where you buy a SATA backplane and screw it in.
It doesn't solve your software problem (though maybe TrueNAS might work?).
But it's true that you could probably leave a desktop on "NAS duty" for years unattended without anything really major happening, especially if it's only accessible on a local network.
That’s not always for good reasons, though.
I want a small reliable box that I just put in the corner and I can forget about for months at a time, as long as it provides me the services I configured it for. I access my NAS UI maybe once every 3 months.
I know exactly how to roll my own NAS (and I'm already rolling my own router), but I just don't want to deal with operating it.
Synology still scores very high on this single metric.
And security could be an issue, but it's not like Synology is any better there with their old as dirt dependencies.
Snark aside, TrueNAS is probably your best bet. Maybe Unraid? Still, with all of these, it's not like they require constant attention to operate.
There's no real auto-update for Arch, and it wasn't designed for it (IIRC according to their forums/wiki), so I have to login every now and then and run pacman (I'll admit I haven't invested more than a couple hours searching).
The `kea` package (DHCP server) got updates this summer, 3 weeks apart, that both broke configuration file parsing, and I had to discover the next day on reboot. "But you should have read the changelog!", "But you should have tested the config files on updates!", "But you should have restarted the service!" ...: no, no I don't normally test every config file or restart every service after an upgrade, and I don't normally read Changelogs of all software installed on my machines. I shouldn't have to do that in 2025.
`zfs` is out of kernel; I dabbled with that for a while until I understood that if you want your machine to come back up 100% of the time on reboot, you'd better stick with in-kernel filesystems, no matter how much worse their feature set is.
Sometimes stuff will just break on package upgrades without notice or warning, and you're left to pick up the pieces, normally in a hurry because your partner is screaming at you.
Compare that with Synology, where I have never, ever needed to login to click "run updates" or put it in "maintenance mode" to fix it. It updates itself, it boots at the set time, it brings up all services, runs periodic scrubbing, informs me via mail, shuts down at the specified time. It has a 2x-SSD mirror for cache, and I don't need to care about the disk layout and configuration, and the cache configuration, .....
It's literally a set-and-forget auto-upgrading box that I can just use instead of maintain.
I understand that Synology is not in good shape anymore, see my other top-comment :)
And your experience of Arch with automatic updates is very different from their semi-suggestion of Arch with no updates.
As for ZFS, I don't use it for root, in part so that even when I mess with things I know it'll boot fine. But that only applies to root.
Things that maintain themselves are amazing and I want more of them in my life. Anything that requires shell commands is out out out. That is for younger people.
Everything else, I agree and Synology has delivered enough (such a lifetime of 10+ years with full updates) that I am not really happy to try my chances with something else unless the hardware dies.
So, no, my experience is that unattended Linux is not really suitable. Your experience may vary.
What I want is something that's more like an appliance than a project. You do not look at your toaster every day and wonder if it needs to perform updates. Or your washing machine. (If you do, please, seek help.) These are appliances. I want an appliance that serves me bits, lots of them, and lets me use multiple machines without caring too much which one has which file. My Synology has, to date, been excellent at that role.
You don't understand, in that post I'm talking about purposefully not updating. At all. Updates can't go wrong if they don't happen. If it refuses to update then you win.
> What I want is something that's more like an appliance
Yeah, you get it! Set it up and then leave it alone.
And don't expose it to the internet.
Yes, it is a pain versus having a NAS, but at least I don't have to deal with this kind of stuff.
But the problem is when you need to recover and have 20 Blu-ray Discs with important data scattered about, it takes days.
Or when there is a specific piece of data you want/need and only have a vague idea of where it is/was in history. Maybe if those ultra capacity discs took hold but it looks like the era of optical is ending
Search isn't helpful if the stuff wasn't properly indexed.
Synology indexes file contents similar to Spotlight on the Mac.
Hence why multiple copies.
I mean, I have one for handling an HDD with a busted power circuit that caused system resets at regular intervals (likely brush sparks from a power steering motor went back up through USB and killed it). It's almost wrong that there isn't a pre-made solution for this.
Starters: Fractal Define - mid tower - 8 official 3.5" bays, with plenty of open space for more.
Jonsbo cases are the most NAS-like.
OS: Easy button: FreeNAS. Maybe the newer TrueNAS Core rework. As long as you don't need the latest and greatest in features - and, at this point, can live with probably a bunch of unfixed security issues.
Otherwise it's TrueNAS Scale - just avoid the Docker/VM system. It's a complete cluster.
I dearly wish Cockpit Project was up to par for this.
I was looking for alternatives, but anything else didn't come close to Syno Photos+Drive+Surveillance+Active Backup package you get with the NAS.
There are alternatives to each, sure, but they mostly need massively more powerful hardware to run a pile of Docker containers, and they end up being alpha quality.
Debian and unattended-upgrades might need a tweak if you want it to actually reboot by itself, but I think the option is there.
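The option is indeed there, in /etc/apt/apt.conf.d/50unattended-upgrades:

    // reboot automatically when an update requires it, at a quiet hour
    Unattended-Upgrade::Automatic-Reboot "true";
    Unattended-Upgrade::Automatic-Reboot-Time "04:00";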
A Debian stable mostly does except on upgrades, and that's rare and painless enough. Even with a Synology you still need to make sure you have proper monitoring in case the hard drives start failing.
The WebUI is responsive. It can be a bit brickish around the edges, requiring you to dive into the logs if something doesn't work; in my case it turned out to be bad RAM on my host preventing KVM from booting. Once it's up and working it sails.
GPU-PassThru in a Windows VM is proving incredibly smooth especially with using Moonshine on FreeBSD.
The Docker ecosystem is a nice addition and the community seems fair. Being able to throw in all my old SSD drives (granted, the basic license only allows six) is nifty too - saves them from gathering dust.
It being based off Slackware is pleasing. It is closed source but so is Synology and for $100 for a fully unlocked feature-rich NAS/OS - totally.
Comments in this thread are making me think twice. So what's good? Ugreen? I'd appreciate recommendations.
10G ports, latest AMD, hopefully FreeBSD works okay on it…
Seriously, it takes an hour to set up your own NAS, and you can mix any drives, set up any encryption you want, a seedbox, etc. I totally understand convenience, but this is not an email server you're setting up here, it's just a NAS.
But let's assume you don't have a clue and have to follow some tutorial and do some reading and it takes you 2 hours. That's amortised across a decade. Especially now when easy distro upgrades are basically unattended so you can use the same setup for a decade and stay up to date.
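For scale: once the array exists, the "NAS" part proper can be as little as one Samba share (names below are examples):

    # /etc/samba/smb.conf
    [data]
        path = /srv/data
        valid users = alice
        read only = no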
I have watched the software evolve from "quite good" to "very good" to "lets reimplement everything ourselves and close it off as much as possible".
It's sad because back in the day, at least for me, the brand was the perfect UX in many regards: small form factor and low power, price-accessible 4/5 bay NASes, a couple CPU tiers, upgradable hardware, regular software updates and a huge collection of software features.
For me they were the go-to choice for NAS because of the good web UI, the ease of setup and the reliable operation that covered 99% of the prosumer use cases. They would just chug along forever, auto-updating themselves, never skipping a beat. Whenever I wanted to do special things with it via SSH I could, but the environment has become increasingly hostile, to the point where I need to spend hours wondering how the heck the thing operates without bursting into flames.
I'm hoping that by the time I need to change my DS920 another good company like they were will have emerged, because building your own solution comes with operational maintenance and I want the thing to Just Work®.
One of the things that sold me on the Ugreen was that it is basically just a garden-variety N100 box, upgradeable RAM, supports SATA and M.2, etc.
According to this installing your own OS doesn't invalidate the warranty so if I decide their software is lousy I can install Debian https://wiki.debian.org/InstallingDebianOn/Ugreen
One has no Linux or Windows video drivers (Intel's fault — no transcoding), and caught fire (not Intel's fault).
The other was one of the ones where the clock signal is basically a doomsday countdown timer. I had to swap it out for a warranty board for some other reason.
So, there’s no way I’d consider an N100. Other options?
I have a DS1823 for what it's worth, but I also have a home-built NAS from ten years ago and a Ugreen running NixOS. I explicitly use the stock Synology for things that just need to work.
Their custom software has its quirks (eg scp doesn’t work unless you apply the -O flag, for “security” reasons), and the quirks change sometimes after updates.
(What was amusing was that I kept finding it powered off, and spent quite a while trying to find why it could be shutting down. It turned out that, because I kept it on the floor under my desk, the Roomba would occasionally bump into it and hit the power button on the front)
I went to TrueNAS and have been extremely happy and never looked back.
There’s really not a lot of reasons to use Synology anymore (only thing I miss was the sync solution they had. It was indeed better than Syncthing and the likes).
But you don’t really need ECC for ZFS. That’s a kind of myth that has been debunked several times. Sure, it’s always good, but ZFS works just fine without it.
(I'm on my phone so it's difficult to find a good source for the claim, but if you search a bit, you'll find some good blog posts talking about it.)
https://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-y...
I've put some work into customizing my Synology NAS(s), and they're doing a lot for me right now, so it will be painful to transition, but I'm going to bite the bullet and do it anyway, because they've become irredeemable.
Used to do the same thing at EMC back in the 90s-early 2000s with Symmetrix. The cover story was that the EMC-branded drives, which were marked up like 1000x, had "special firmware".
The reality was that it was to prevent procurement guys from putting a BOM spreadsheet in front of the sales rep and saying "why are you charging me $2M for $100k of parts I can buy at Fry's?"
the replacement will NOT be from Synology
Once it's running, it's just a breeze.
The Unifi UNAS Pro looks pretty cool (and my entire rack is Unifi) but I want more control over the OS and would prefer to run TrueNAS: https://store.ui.com/us/en/products/unas-pro
I am shopping to replace a DS918+ that has been in service for almost 8 years. Honestly it's been rock solid without a single hiccup - even upgraded the entire pool disk-by-disk a few years back without problems. But I was already losing interest in Synology's OS and after their recent shift to locking down drive options that is the nail in the coffin so I wanna build my own.
(I'm just going to run a stock Linux distro rather than anything particularly fancy, personally. Good enough for me. Maybe this is finally an excuse to run copyparty somewhere.)
For use as a home server, it has a ZFS RAID 1 where I replaced the drives once, and I guess I should again soon if I care about the warranty. I did have a challenging upgrade because Ubuntu made a /boot partition that was too small to reliably update the kernel, but I had a spare SSD sitting around which let me just install the OS on the new SSD and swap it in without any risk. It runs the Ubiquiti controller, Plex, and a few other services. It is not a powerful server by any means, but it is massively overpowered for what it does, which is better than being underpowered. I messed around with appliances in the 2000s and they inevitably left a bad taste in my mouth. They had 10% of the power of a real computer but cost 70% as much, or they used an i3 and you couldn't install a GPU - penny wise and pound foolish.
I found it almost impossible to find a case which has a fair number of bays (at least 6), is meant for a human environment (i.e. is not ugly or meant for a rack) and doesn't cost a lot.
So I am now working on hanging an IKEA Skådis pegboard on the wall and attaching everything to it. The big panel has just the right space for everything (10 disks, mobo, extra SATA controller, PSU), it is ridiculously cheap, I had fun designing the parts and 3D printing them (though there are many available for free on the web), and it gives easy access to the disks and can be a topic of discussion when friends come over. Not sure it will work properly though :-) (issues may arise with vibrations, temperatures, etc.).
As an alternative to making it completely visible, I thought about doing the opposite: hiding it inside regular furniture. You can easily find metal frames meant for an "open chassis" to properly arrange the motherboard, disks, etc., so it is doable. The easiest would be using a regular cupboard, but I would love to use an ottoman and be able to sit on top of it, a la Cray-1. I "only" need to solve the issues of hiding the cables and giving it proper ventilation.
As a dedicated personal backup for my family it's been perfect. The latest vendor lock in has me reconsidering how I'll upgrade when the time comes. Until this post I was considering a $1000 unit and transferring my SSD drives before buying more storage.
Sounds like it's time to build a proper storage+computer rack.
Synology DSM has all kinds of limitations and is slow. Your encryption cannot be like this, you cannot store data in your NVMes, the number of versions must be limited to that, you cannot search snapshots, Hyperbackup explorer provides just one button: Recover (compare that with restic or Borg!), encryption in HB is opaque, ABB is limited to specific kernel versions, information out of SMART is limited, cannot upgrade drives’ firmware, search options are too limited and search is bad, OpenVPN is old and has compatibility issues with newer systems, … I can go on and on!
You cannot compare that with a backup system with ZFS, restic/borg, syncthing and rclone, in the same class!
But the drive lock-in tipped me over the edge when I needed to expand space. I'm getting burned out with all of these corporate products gradually taking away your control (Synology, Sonos, Harmony, etc.).
Even though it takes more time and attention, I ended up building my own NAS with ZFS on Debian using the old desktops I use for VMs. I did enjoy the learning experience, but I can't say it was that reasonable a use of time.
That totally depends on if you enjoyed yourself and maybe learned something or not. Totally up to you!
But I will never buy a NAS or SAN from a company that uses custom firmware on drives. Some dipshit beancounter will come along, "save" some money by not renewing the service contract, and then I'm off hunting for replacement drives on eBay.
You can set the free disk space threshold percentage, but it will accept no number lower than 5%. This is obviously a holdover from old times, 20 years ago, when volumes were 1-10GB, not terabytes.
I asked them to change the threshold from percent to GB multiple times over a few years - not going to happen.
I asked them to allow changing the threshold to a value lower than 5% - not interested.
Not only can you not know when a disk drive is about to run out of space, but excessive notifications while there's still a lot of free space desensitise you, so when an actual disk failure happens you would ignore the notification.
- low power cpu (arm?)
- uses latest kernel and flavor of Linux distributions
- makes own modular hardware in smallish form factor
- basic support package with slower SLAs to more professional SLAs for enterprises, I would happily pay per month to prevent enshitification if I had guaranteed support and even more so if it bundled some zero trust encrypted cloud storage.
2. Agent-based full-machine backup. I can install the agent on a machine (especially a Windows machine) and it just backs up on my schedule and works, plus it is easy to convert the resulting backup image to a VM to load on the Synology or my Proxmox cluster. There are lots of moderately to very expensive proprietary solutions that do this, but I am unaware of any decent open-source options.
Are you guys aware of anything that solves these two use cases well and is open source?
I currently have a QNAP TS-451D2. I use it mainly with a MacBook Pro. Something in QNAP's Samba implementation makes it glacially slow in that configuration. While it still does AFP (and then becomes somewhat decent to use), it's only a question of time before Apple chops that protocol.
With QNAP having proven to be substandard and Synology going evil, what other options for a mid-range, local NAS for the tech guy who doesn't want to have another thing to tinker with do exist? I'm thinking 'appliance', not 'project'. Ideally, I want to just set it up once and then forget about it.
There’s no need to proactively check in on anything if you’ve set up email alerts. It’s pretty straightforward to give the NAS permission to send you emails in case a drive dies on you rather than failing silently.
Docker containers are just a nice bonus. You don’t need to use them if you don’t want to, but it is awfully convenient to run things like media encoders, torrent clients, download managers, etc. directly on your storage.
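With smartmontools, the email-alert part is about one line of config (the address is an example):

    # /etc/smartd.conf -- monitor all drives, email on trouble,
    # send one test mail at daemon startup to prove delivery works
    DEVICESCAN -a -m admin@example.com -M test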
Do you need just disks in a RAID? Look at it once a month to make sure nothing stupid has happened and go on with your life. Do you want to run a bunch of services (arr stack, Home Assistant, full-on home lab type stuff)? Then yes, it may require some more "work" depending on what you're running and how deep down the rabbit hole you want to go.
The Jonsbo cases are pretty compact and QNAP/Synology-ish.
As for Unraid: You pay for it, so you're the customer and can expect some kind of support. It's also pretty damn stable and supports casual "I'll just add this drive to get more space" usage compared to ZFS stuff.
>It's also pretty damn stable
Not my experience. shfs crashes like crazy, tuning some things might alleviate it but it still fails. From the dozens of workarounds recommended, the only one that seems to help (for me and some others, not for everyone) is to disable NFS, which kinda defeats the point of a NAS for me.
Also while memtest is needed to rule out a memory issue, I found some tendency to disregard these issues as hardware related... if it's only shfs crashing and not the kernel nor any other app, chances are it's an shfs issue.
Currently I think they pin it on a libfuse bug.
https://forums.unraid.net/bug-reports/stable-releases/683-sh...
https://forums.unraid.net/topic/189449-shares-keep-disappear...
https://forums.unraid.net/topic/137653-share-disappeared-aga...
https://forums.unraid.net/topic/161179-unraid-unstable-freez...
https://forums.unraid.net/topic/151605-mnt-user-is-gone/
https://support.apple.com/en-us/102064
https://support.apple.com/en-us/101442
https://gist.github.com/jbfriedrich/49b186473486ac72c4fe194a...
It was years ago, but for whatever reason SMB was slow on my Mac even when connecting to Linux boxes. I mapped my user ID to the Synology user and switched to straight NFS instead; performance-wise it was night and day.
I get more reliable speeds and connections from an Ubuntu VM that’s running on the same Mac than I do from the Mac. How can this happen?
I fixed it by removing the virtual network switch that gets installed if you use the container services.