https://github.com/NeccoNeko/UNVR-diy-os/blob/main/IMAGES.md
We need more Thunderbolt/USB4-to-JBOD 40Gbps storage enclosure options, for use with Ryzen mini PC or Lenovo Tiny.
This is much larger I think (and it bugs me that it’s not an even number of drives, and the offset of the drives is unpleasant) but it’s rackable so that’s a plus.
The only thing I’d want to know is sound (and I’m sure I can find a YouTube video).
I’ve been looking for an excuse to go all-in on a Ubiquiti setup… Thanks for mentioning this, I wasn’t aware Ubiquiti had a NAS product.
Highly recommend Ubiquiti. I have it all throughout my house.
I only wish they did a 2-bay or 4-bay version. Or better yet, something like a Time Capsule: 2-bay + Ubnt Express.
I currently plug a USB3 4-bay disk enclosure into my homelab server for this, but the cabling is messy and it doesn’t support 20TB drives. I could upgrade to a newer enclosure, but I’d rather have a “real” rack mount system with drive bays.
Their weird situation with network speeds faster than 1Gb is irritating, but slowly improving.
I don't need 100% of my bytes to be instantly available to me on my network. The most important stuff is already available. I can wait a day for arbitrary media to thaw out for use. Local caching and pre-loading of read-only blobs is an extremely obvious path for smoothing over remote storage.
Other advantages should be obvious. There are no limits to the scale of storage and unless you are a top 1% hoarder, the cost will almost certainly be more than amortized by the capex you would have otherwise spent on all that hardware.
This is not the perspective of actors working on longer timescales. For a number of agencies, preserving some encrypted data is beneficial, because it may become possible to recover it in N years, whether through classical cryptanalytic improvements, bugs found in key generators, or advances in quantum computing.
Very few people here will be that interesting, but... worth keeping in mind.
20TB, which you can keep in a cute little 2-bay NAS, will cost you $4k USD/year on the S3 Infrequent Access tier in APAC (where I am). So the "payback time" of local hardware is just 6 months vs S3 IA. That's before you pay for any data transfers.
For my use-case I'm OK with un-hedged risk and dollars staying in my pocket.
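The arithmetic behind that $4k figure, assuming roughly 1.6 cents/GB-month for S3 Infrequent Access in an APAC region (an assumed price for illustration; check your own region's pricing), sketches out as:

```shell
# Rough annual S3 IA cost for 20 TB at an assumed ~1.6 cents/GB-month.
tb=20
annual=$(awk -v tb="$tb" 'BEGIN { printf "%.0f", tb * 1024 * 1.6 / 100 * 12 }')
echo "~\$${annual}/year"   # roughly $4k/year, matching the comment above
```

Two modern 20TB drives and a 2-bay enclosure come in well under that, hence the ~6-month payback claim.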
This is the same product.
> 20TB
I think we might be pushing the 1% case here.
Just because we can shove 20TB of data into a cute little NAS does not mean we should.
For me, knowledge that the data will definitely be there is way more important than having "free" access to a large pool of bytes.
I'm the last person I know who buys DVDs, and they're two-thirds of the reason I need more space. The last third is photography. 45.7 megapixels x 20 FPS adds up quickly.
S3's cost is extreme when you're talking in the tens of terabytes range. I don't have the upstream to seed the backup, and if I'm going outside of my internal network it's too slow to use as primary storage. Just the NAS on gigabit ethernet is barely adequate to the task.
Until Amazon inexplicably deletes your AWS account because your Amazon.com account had an expired credit card and was trying and failing to renew a subscription.
Ask me how I know
Confusingly "Glacier" is both its own product, which stores data in "vaults", and a family of storage tiers on Amazon S3, which stores data in "buckets". I think Glacier the product is deprecated though, since accessing the Glacier dashboard immediately recommends using Glacier the S3 storage tiers instead.
There are no recurring costs to this setup except electricity. I don't think S3 can beat that.
Okay, I'm curious now. When you were talking about "a bunch of local disks", what size disk did you have in mind?
Right now the best price per TB is found on disks in the 14-24TB range.
Bandwidth to get all of that back down to your system is much pricier, depending on how much you use that data.
You can’t be serious.
Most modern companies, especially software companies, choose not to fix relatively small but critical problems, yet they actively employ sometimes hundreds of customer-support yes-people whose job seems to be defusing customer complaints. Nothing is ever fixed anymore.
But 5% free is very low. You may want to use every single byte you feel you paid for, but allocation algorithms really break down when free space gets so low. Remember that there's not just a solid chunk of that 5% sitting around at the end of the space. That's added up over all the holes across the volume. At 20-25% free, you should already be looking at whether to get more disks and/or deciding what stuff you don't actually need to store on this volume. So a hard alarm at 5% is not unreasonable, though there should also be a way to set a soft alarm before then.
But 5% of a 5 TB volume is 250 GB - that's the size of my whole system disk! Probably not so understandable to the lay person.
Some filesystems do stake out a reservation but I don't think any claim one as large as 5% (not counting the effect of fixed-size reservations on very small volumes). Maybe they ought to, as a way of managing expectations better.
For people who used computers when the disks were a lot smaller, or who primarily deal in files much much smaller than the volumes they're stored on, the absolute size of a percentage reservation can seem quite large. And, in certain cases, for certain workloads, the absolute size may actually be more important than the relative size.
But most file systems are designed for general use and, across a variety of different workloads, spare capacity and the impact of (not) keeping it open is more about relative than absolute sizes. Besides fragmentation, there's also bookkeeping issues, like adding one more file to a directory cascading into a complete rearrangement of the internal data structures.
I don't think this is correct. At least btrfs works with slabs in the 1 GB range IIRC.
One of my current filesystems is upwards of 20 TB. Reserving 5% of that would mean reserving 1 TB. I'll likely double it in the near future, at which point it would mean reserving 2 TB. At least for my use case those numbers are completely absurd.
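The scaling complaint is easy to check: a fixed percentage reservation grows linearly with the volume, so the same 5% that was negligible on small disks becomes terabytes on big ones:

```shell
# 5% reservation for a few volume sizes, in TB
for size_tb in 5 20 40; do
  awk -v s="$size_tb" 'BEGIN { printf "%g TB volume -> %g TB reserved\n", s, s * 0.05 }'
done
```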
Change the word to "seek" and it may make more sense.
And "unexpected" failure paths like that are often poorly tested in apps.
Basically, the recommendation was to always have 5% free space, so this isn't just Synology saying this.
https://archive.kernel.org/oldwiki/btrfs.wiki.kernel.org/ind...
It tries to mitigate this by reserving some space for metadata, to be used in emergencies, but apparently it's possible to exhaust it and get your filesystem into a read-only state.
There was some talk about increasing the reservations to prevent this, but I can't recall whether changes were made.
This is the same kind of issue that Linux root filesystems had - a % based limitation made sense when disks were small, but now they don't make a lot of sense anymore when they restrict usage of hundreds of GB (which are not actually needed by the filesystem to operate).
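On ext-family filesystems this percentage is tunable after the fact. A sketch, demonstrated on a scratch image file (assumes e2fsprogs is installed; on a real system you would point tune2fs at the block device):

```shell
# ext4 reserves 5% of blocks for root by default; on a large data volume
# you can shrink that. Demo on a throwaway image rather than a real disk.
truncate -s 64M /tmp/demo.img
mkfs.ext4 -q -F /tmp/demo.img
tune2fs -m 1 /tmp/demo.img          # reserve 1% instead of the default 5%
tune2fs -l /tmp/demo.img | grep -i 'reserved block count'
```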
I’ve used (and still use) UnRaid before but switched to Synology for my data a while back due to both the plug-and-play nature of their systems (it’s been rock solid for me) and easily accessible hard drive trays.
I’ve built 3 UnRaid servers and while I like the software, hardware was always an issue for me. I’d love a 12-bay-style Synology hardware device that I could install whatever I wanted on. I’m just not interested in having to halfway deconstruct a tower to get at 1 hard drive. Hotswap bays are all I want to deal with now.
I have an Unraid install on a USB stick somewhere in my rack, but over time it started feeling limited, and when they began changing their license structure I decided it was time to switch, though I run it on a Dell R720xd instead of one of their builds (my only complaint is the fan noise - I think the R730 and up are better in this regard).
Proxmox was also on my short list for hypervisors if you don't want TrueNAS.
I have found workarounds for the read-only root file system. But they aren't great. I have installed Gentoo with a prefix inside the home directory, which provides me with a working compiler and I can install and update packages. This sort of works.
For running services, I installed jailmaker, which starts an LXC Debian container with docker-compose. But I am not so happy about that, because I would rather have an atomic system there. I couldn't figure out how to install Fedora CoreOS inside an LXC container, or whether that is even possible. Maybe NixOS would be another option.
But, as I said, for those services I would rather just run them in Proxmox and only use the TrueNAS for the NAS/ZFS management. That provides more flexibility and better system utilization.
The deprecation caused me to move to something more neutral and stay away from all 'native' apps of TrueNAS; I migrated to ordinary docker-compose, because that seems to be the most approachable.
I was also looking into running a Talos k8s cluster, but that didn't seem to be as approachable to me and a bit overkill for a single-node setup.
It isn't really the case. TrueNAS wants you to look at it as an appliance so they make it work that way out of the box.
On the previous release, they had only commented out the apt repos but you could write to the root filesystem.
On the latest release, they went a little further and did lock the root filesystem by default but using a single command (`sudo /usr/local/libexec/disable-rootfs-protection`), root becomes writable and the commented out apt repos are restored. It just works.
If it were now, I'd probably look deeper into Asus, QNap or a DIY TrueNAS.
I say "mostly" happy because I almost returned it. The USB connection between mini PC and Terramaster would be fine for a few days and then during intense operations like parity checks would disconnect and look like parity error/disk failure, except the disks were fine. Eventually realised the DAS uses power from the USB port as well as the adapter plug and the mini PC wasn't supplying enough power. Since attaching a powered USB hub it's been perfect.
Explanation of symptoms and solution, in case anyone is considering one or has the same problem: https://forum.terra-master.com/en/viewtopic.php?t=5830
It works well, but USB connection could be faster, and it bogs down when doing writes with soft-raid. I've been thinking about going for a DAS solution connected directly via SAS, instead. Still musing about what enclosure to use, though.
I have yet to come across something like Hyper Vault for backup and Drive for storage that works (mostly) seamlessly. I would be happy to self-host, but the No Work Needed (tm) products of Synology are just great.
Sad to see them taking this road.
It was simple, it just worked, and I didn't have to think about it.
* TB SDDS - a multi-type phenomenon of Drobo units suddenly failing. There were three 'types' of SDDS that a colleague and I discovered - "Type A" power management IC failures, "Type B" unexplainable lockups and catatonia, and "Type C" failed batteries. Type B units' SOCs have power and clock going in and nothing coming out.
Assuming I want 4 drives and something that can transcode multiple files in real time.
Managing it is fine. It expects you to understand ZFS more than a turnkey RAID 5 + btrfs setup does, but it has an OK UI and seems born out of the fact that ZFS people want that customization, not out of forcing you to fend for yourself. I read a 15-minute explainer, built a pool, and didn't have to think about it at all, other than replacing a failed drive last month. And all that took was a quick google to make sure I was doing it right, which is exactly how I replaced drives in a standalone NAS.
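For reference, the drive-replacement step mentioned above is only a couple of commands on ZFS. A sketch, with a hypothetical pool name and example device IDs:

```shell
# Identify the failed device in the pool (pool name "tank" is made up)
zpool status tank
# Swap the failed disk for the new one (device IDs here are examples)
zpool replace tank ata-WDC_WD80_OLD ata-WDC_WD80_NEW
# Watch the resilver progress
zpool status -v tank
```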
How do you expand the storage to 100+ TB?
The web gui is nice to use, what does Windows/Linux or the Mac offer? I’d prefer a Mac option, but I don’t believe there is a good one.
Why not? If you are not going to do transcoding, Plex should sit fine on a NAS.
It's hard to actually buy a computer that bad these days, although Synology still makes it easy lol.
All you need is one of those sticks that plug into the hdmi of your tv and can run vlc. I use a chromecast. Vlc can play mkvs directly off samba shares. Done.
I can transcode just fine on a 1821 (though I use a separate machine to do it for other reasons). The unit seems fine; what am I missing out on?
Let's look at your NAS for an example (it's actually worse for mine cause I always loved the Slims lmao).
The DS1821 has no GPU transcoding because it uses the AMD Ryzen 1500B; what you've observed is that this processor is just powerful enough to brute-force some transcoding, but it has no hardware transcoding:
https://www.synology.com/en-global/products/DS1821+#specs
The DS1821's successor was recently revealed to be the DS1825. So if your NAS died in a few months, that would be your obvious choice for a replacement. They decided to continue using the AMD Ryzen 1500B, which is now seven years old. TBD whether they bother releasing it this year; in a few months they could call it the DS1826 instead.
https://nascompares.com/2025/03/13/synology-ds525-ds1525-ds4...
Meanwhile DSM still lacks support for NVMe volumes, in fact there are still no NVMe-only models at all, you can't even install DSM to NVMe, they cut support for USB drives, they cut support for HEVC, and they almost never update docker and some of the other tools.
The hardware's going nowhere and the software's going backwards.
I guess I’ve avoided these issues by accepting that solid state storage in a NAS is still too expensive.
My ultimate would be the Synology with an Apple silicon heart. One can dream.
That said, the person you're replying to is right. Synology has mostly stopped supporting their apps, they've removed features in cost-cutting moves (media codecs), the hardware is now hopelessly outdated, and both the kernel and Docker are completely out of date.
It feels like any technical leadership completely disappeared and now only bean counters who don't understand the product or their target market are making decisions.
Then there is no need for transcoding at all.
What I do, and will continue to do, is use a USB-C disk box (Level 1 also recently reviewed some they quite liked, despite the usual fears around USB) and whatever PC I have lying around. 5 years strong running ZFS over USB 3 with a 4770K, regularly serving 4 Plex streams at once without complaints and no failures (I mean, the usual disks wearing out, but nothing caused by USB).
So if a 4770 can transcode 2x 1080p and direct-play 2x more, any old anything with hardware transcoding these days should be just fine.
If it was a ZFS NAS, I could ZFS send to another system.
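For anyone unfamiliar, the `zfs send` migration being referred to looks roughly like this (pool, dataset, and host names here are made up):

```shell
# Snapshot, then stream the dataset to another ZFS box over SSH
zfs snapshot tank/data@migrate
zfs send tank/data@migrate | ssh newbox zfs receive -F backup/data

# Later, send only what changed since the first snapshot (incremental)
zfs snapshot tank/data@migrate2
zfs send -i @migrate tank/data@migrate2 | ssh newbox zfs receive backup/data
```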
I want to get the historical data out to an open portable system.
I always just rsync/scp things, and it is less problematic and faster in my weekend tests.
Is there something with 6-8 drive slots on which I could install whatever OS I want? Ideally with a small form factor. I don't want to have a giant desktop again for my NAS purposes.
All these NAS manufacturers are spending time developing their own OS, when TrueNAS is well established.
However...
Long long ago I worked for a major NAS vendor. We had customers with huge NAS farms [1] and extremely valuable data. We were, I imagine, very exposed from a reputation or even legal standpoint. Drive testing and certification was A Very Big Deal. Our test suites frequently found fatal firmware bugs, and we had to very closely track the fw versions in customer installations. From a purely technical viewpoint there's no way we wanted customers to bring their own drives.
[1] Some monster servers had triple-digit GBs of storage, or even a TB! (#getoffmylawn)
In a way this is a valid point, but it also feels a bit silly. Do people really make use of devices like this and then try to overnight a drive when something fails? You're building an array-- you're designing for failure-- but then you don't plan on it? You should have spare drives on hand. Replenishing those spares is rarely an emergency situation.
They build the array to support a drive failure but as home power users without unlimited funds don’t have a hot spare or store room they can run to. It’s completely reasonable to order a spare on failure unless it’s mission critical data needing 24/7 uptime.
They completely planned for it. They’ve planned for if there is a failure they can get a new drive within 24 hours which for home power users is generally enough, especially when likely get a warning before complete failure.
I've never heard of anyone doing that for a home nas. I have one and I don't keep spare drives purely because it's hard to justify the expense.
I budget $300 USD each, for 2 or 3 drives. That has always been the sweet spot: get the largest enterprise model at exactly that price.
That was 2TB 10 years ago, 10TB 5 years ago.
So 5 years ago I rebuilt storage on those 10TB drives but only using 2TB volumes (it could have been 5, but I was still keeping the last-gen size as the data hadn't grown); now my old drives are spares/monthly off-machine copies. I used one while getting a warranty replacement for a failed new 10TB one, btw.
Now I can get 20TB drives for that price; I will probably still only increase the volumes to 10TB at most and keep two spares.
I'm not made of money. I just don't want to make excuses over some $90 bit of junk. So I have spare wifi, headset, ATX PSU, input devices, and a low-cost "lab" PSU to replace any dead wallwart. That last one was a life saver: the SMPS for my ISP's "business class" router died one day, so I cut and stripped the wires, set the volts+amps, and powered it that way for a few days while they shipped a replacement.
I did end up with a spare before at the 3 year mark but the bathtub curve of failure has held true and now that so-called spare is 6 years old, unused, too small of a drive, and so never planned to be used in any way.
The conventional wisdom is that you should not store drives for long periods without spinning them up, so what does it mean to have spares unless you are spinning them up once a month and expecting them to last any longer once actually used?
Also worth noting that I don't think I experienced hard fails, it's often the unrecoverable error count shooting up in more than one event, which tells me it's time to replace. So I don't wait for the array to be degraded.
But I guess that's an important point, monitor your drives. Synology will do that for you, but you should monitor all your other drives. I have a script that uploads all the smart data off all my drives across all my machines to a central location, to keep an eye on SSD wear levels, SSD bytes written (sometimes you have surprises), free disk space and smart errors.
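A stripped-down sketch of such a script, assuming smartmontools (>= 7.0 for JSON output) and a hypothetical central collector endpoint:

```shell
#!/bin/sh
# Ship SMART data for every SATA disk to a central collector.
# COLLECTOR is a made-up endpoint; replace with your own ingestion URL.
COLLECTOR="https://example.internal/smart"
for dev in /dev/sd?; do
  [ -e "$dev" ] || continue
  smartctl -j -a "$dev" | curl -s -X POST -H 'Content-Type: application/json' \
    --data-binary @- "$COLLECTOR/$(hostname)/$(basename "$dev")"
done
```

Run it from cron on each machine and alert on rising reallocated/uncorrectable counts server-side.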
Within my NAS, I have 2 different pools. One is for important data: two hard disks in SHR-1, replicated to an offsite NAS. The other pool is for less important data (movies, etc.): SHR-1 with 5 hard disks, 75TB total capacity, none of the hard disks from the same batch or production date. Not having the data immediately is not a problem. Losing that data would suck, but I'd rebuild, so I'm fine not having a spare drive on hand.
When you need to replace a drive, it’s better to purchase one new: it will have been manufactured recently and won't have been sitting around for very long.
For the same hardware cost I got a random mATX box that can hold 2.5x more hard drives, a much much beefier CPU, 10x the RAM, and an NVMe. And yeah, it took an hour to set up TrueNAS in a Docker image, but w/e.
Same exact hard drives working perfectly fine in fedora. If it weren't for hard drive locking I'd have stuck with the Synology box out of laziness.
Last but not least, they seem to have Docker support which was restricted to more powerful Synology models, and it's a nice bonus for self-hosting nowadays.
I think self-built is the best bang for buck you're going to get and not have any annoying limitations.
There's plenty of motherboards with integrated CPUs (N100 same as cheaper Ugreen ones) for roughly 100. Buy a decent PSU and get an affordable case. For my configuration with a separate AMD CPU I'm looking at right around 400 Euros but I get total control.
And as far as software is concerned, setting up a modern OS like TrueNAS I find about the same difficulty as an integrated one from Ugreen.
For a NAS, I don't think I'd need more than 1-2 lanes for any single device. That sounds fine.
Ended up buying a Terramaster DAS instead and connected it with USB to my NUC.
Also considered a NAS enclosure with an N110 mini-ITX board, which would allow you to upgrade it in the future.
Shoutout to openmediavault. Just yesterday I installed it on my DXP8800 and now it works like a charm. But to install another OS you have to deactivate the watchdog timer in the BIOS, otherwise it resets the NAS every three minutes. Press CTRL + F12 to get into the BIOS and look for something like "watchdog" and disable it.
I'm likely to go down the BYO NAS path going forward. Just a stupid customer punishing policy. A real slap in the face.
I bought an N100-based device from AliExpress that supports two drives for my backup server. It's a cracker and runs Debian wonderfully. Very smooth and responsive, runs quietly, and transfers data fairly quickly.
The killer feature for me is the app ecosystem. I have a very old 8-bay Synology NAS and have it setup in just a few clicks to backup my dropbox, my MS365 accounts, my Google business accounts, do redundant backup to external drive, backup important folders to cloud, and it was also doing automated torrent downloads of TV series.
These apps, and more (like family photos, video server, etc), make the NAS a true hub for everything data-related, not just for storing local files.
I can understand Synology going this way, it puts more money in their pocket, and as a customer in professional environment, I'm ok to pay a premium for their approved drives if it gives me an additional level of warranty and (perceived) safety.
But enforcing this across models used by home or SOHO users is dumb and will affect the goodwill of so many like me, who both used to buy Synology for home and were also recommending/purchasing the brand at work.
This is a tech product, don't destroy your tech fanbase.
I would rather Synology kept a list of drives to avoid based on user experience, and offered their Synology-specific drives with a generous warranty for pro environments. Hell, I would be OK with sharing stats about drive performance so they could build a useful database for all.
The way they reduce the performance of their systems to penalise non-Synology-rebranded drives is basically a slap in the face of their customers. Make it a setting and let the user choose to use the NAS they bought to its full capabilities.
It doesn’t address the mandatory nature of the drives; at most, Dell and HP have put their own part numbers on drives.
The number of times I’ve broken things on QNAP systems doing what should be normal functionality, only to find out it’s because of some dumb implementation detail is over a dozen. Synology, maybe 1-2.
Roughly the same number of systems/time in use too.
Some QNAP devices can be coaxed into running Debian.
Debian offers flexibility and control, at the cost of time and effort. PhotoSync mobile apps will reliably sync mobile devices with NAS over standard protocols, including SSH/SFTP. A few mobile apps do work with self-hosted WebDAV and CalDAV. XPenology attempts to support Synology apps on standard Linux, without excluding standard Debian packages.
In theory, one could fit an Arm RK3588 SBC with NVMe-to-PCIe-to-HBA or NVMe-to-SATA into a half-depth JBOD case. That would give you 2x 10G SFP, 2x NVMe, and ECC RAM.
As I understand it, migrating to other hardware wouldn't be a problem if availability becomes an issue.
I did the procedure on my (now 15yo) TS-410, mostly because the vendored Samba is not compatible with Windows 11 (I had turned off all secondary services years ago). It took a few days to back up around 8TB of data to external drives. And AROUND 2 WEEKS to restore them (USB2 CPU overhead + RAID5 writes == SLOOOOOW).
Even to get the time down to 2 weeks, I really had to experiment with different modes of copying. My final setup was HDD <-USB3-> RPi4 <-GbE-> TS-410. This relieved TS-410 CPU from the overhead of running the USB stack. I also had to use rsync daemon on TS-410 to avoid the overhead of running rsync over SSH.
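For anyone repeating this trick, running rsync in daemon mode boils down to a small config file plus a module path; all names and paths here are examples:

```shell
# On the NAS: define a module in /etc/rsyncd.conf and start the daemon.
# Daemon mode skips the SSH encryption overhead entirely.
cat > /etc/rsyncd.conf <<'EOF'
[backup]
    path = /share/backup
    read only = yes
    uid = admin
EOF
rsync --daemon

# On the client: the double colon selects daemon mode (no SSH involved)
rsync -a nas::backup/ /mnt/restore/
```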
So, it's definitely not for the faint of heart, but if you go through the trouble, you can keep the box alive as off-site backup for a few more years.
Having said that, I have to commend QNAP for providing security updates for all this time. The latest firmware update for TS-410 is dated 2024-07-01 [1]. This is really going beyond and above supporting your product when it comes to consumer-level devices.
[1] https://www.qnap.com/en/download?model=ts-410&category=firmw...
Once I added a 4th drive to a RAID 5 set and was impressed that it performed the operation online. Neat.
Oh, there was one issue: a while ago my Time Machine backups were unreliable, but I haven't had that issue for three years or so.
At one time, Drobo was the only manufacturer that did that, but I have had very bad luck with Drobos.
I’ve been running a couple of Synology DS cages for over five years, with no issues.
Well, that sounds like a great way to get sued.
I'd rather have the flexibility offered by TrueNAS, in addition to the robust community. Yes, Synology hardware is convenient in some use cases, but you can generally build yourself a more powerful and versatile home server with TrueNAS Scale. There is a learning curve, so it is not for everyone.
I switched to Synology about six years ago (918+). The box is small, quiet, and easy to put in the rack together with the network gear. I started with 4TB drives, gradually switched to 8TB over time (drive by drive). I don't use much of their apps (mostly download station, backup, and their version of Docker to run Syncthing, plus Tailscale). But the box acts like an appliance - I basically don't need to maintain it at all; it just works.
I don't like all this stuff with vendor lock-in, so when the time comes for replacing the box, what are alternatives on par with the experience and quality I currently have with Synology?
If you heavily rely on apps/services: I've just gone to self-managed Docker environments for things like that. A very simple script runs updates.
- Minisforum N5 Pro NAS
- AOOSTAR WTR MAX
Good compute power as they know users will be running Docker and other services on them, using the NAS as a mini server.
OS-agnostic, allowing users to install TrueNAS, Unraid, or their favourite Linux distro of choice.
The Minisforum and AOOSTAR look to be adding all the features power users and enthusiasts are asking for.
If you just want a NAS as a NAS and nothing else, the new Ubiquiti NAS looks great value as well.
At this point, I'm not that convinced that there's anything that synology offers that isn't handled much better by an app running on docker. This wasn't true 10 years ago.
That's it. For the actual viewing / sorting / album you need something like immich or photoprism, the photos app actually sucks.
Video Station has been removed in the latest minor update - not even a major update; they just took it out, no warning, no replacement. But then again it was not that good; Jellyfin is the way to go for me.
Their crown jewels are active backup, hyper backup and synology office. That's where they own their space.
I still appreciate how easy and maintenance-free their implementation of the core NAS functionality was. I do have a Linux desktop for experiments and playing around, but I prefer all of my actually important data to be on a separate rock-solid device. Previously, Synology fulfilled this role and was worth paying for, but if this policy goes live, I wouldn’t consider them for my next NAS.
It's a bit more convenient than how other solutions, like Unraid, handle this, where you manually configure a Docker container.
This however is a deal breaker for me as I'd hate to be locked in to their drives for all the reasons in TFA but also as a matter of principle.
I hope Synology will reconsider!
The first one I bought is still in service at my parents' place, silently and reliably backing up their cloud files and laptops.
I was fully expecting to buy more in the future, but this is a dealbreaker. If a disk goes bad, I want to go to the local store, pick one up, and have the problem fixed half an hour later. I do not want to figure out where I can get approved disks, what sizes are available, how long it will take to ship them, etc.
I've recently installed Unraid on an old PC, and the experience has been surprisingly good. It's not as nice as a Synology, but it's not difficult, either. It's just a bit more work. I've also heard that HexOS plans to support heterogeneous disks, and I plan to check it out once that is available.
So that's the direction I'll be going in instead.
Sounds like this is the problem with Synology... How are they going to make money when their products are so good!
Which is along the same trend line I'm seeing for my purchases.
That's pretty solid for hardware sales.
My guess is that they've over invested in things like their "drive" office software suite, and don't know how to monetize it or recoup costs.
I like Synology, but locking me to their drives is a hard "no thanks" from me.
Next NAS won't be from them if that's their play...
We (in the tech space) can scream privacy and risks of the cloud all day long but most consumers seem to just not care.
I have 2 Synology NAS and the only app that I actually use is Synology Drive thanks to the sync app, but there are open source alternatives that would work better and not require a client on the NAS side to work.
I can't imagine any enterprise would be using these features.
Been in the market for a new NAS myself, and I am going to be looking into TrueNAS or keeping an eye on what Ubiquiti is doing in this space (but it's a no-go until they add the ability to communicate with a UPS).
I’d add that mandating their own drives, when they aren’t the experts in making drives, is a bad move.
Maybe other manufacturers are the way.
They really have to sell it by minimising the price differential and reducing the lead time.
This is the same old tired argument Apple made about iPhone screens - complain about inferior aftermarket parts while doing everything in their power to not make the original parts available anywhere but AASPs. Except here we have the literal same parts with only a difference in the firmware vendor string.
I'm wondering if anybody has any better recommendations given the requirement of being able to add storage capacity without having to completely recreate the FS.
However I did notice that the performance was substantially worse when using heterogeneous drives, which makes SHR somewhat less valuable to me.
Snapshots are available, but a little more work to deal with since you have to learn about subvolumes. It's not that hard.
Edit: TIL, SHR is just mdadm + btrfs.
pvcreate /dev/sdz
vgextend NAS /dev/sdz
Now we want to add additional space to an existing LV "backup":
lvextend --size +128G --resizefs NAS/backup
*note: --resizefs only works for filesystems supported by 'fsadm' - its man page says: "fsadm utility checks or resizes the filesystem on a device (can be also dm-crypt encrypted device). It tries to use the same API for ext2, ext3, ext4, ReiserFS and XFS filesystem."
If using BTRFS inside the LV, and the LV "backup" is mounted at /srv/backup, tell it to use the additional space using:
btrfs filesystem resize max /srv/backup
Not used either, but these were 2 options that came up when I was researching a few years ago.
The only drawback, if I can call it that, is that syncs are done on-demand, so the data is technically unprotected between syncs. But for my use case this is acceptable, and I actually like the flexibility of being in control of when this is done. Automating that with a script would be trivial, in any case.
The problem is - I've formatted my drives with SHR (Synology Hybrid RAID - essentially another exclusive lock-in), and this would mean a rather painful transition to the new drives, since it now involves getting a whole new set of drives to format and move the data to, rather than a simple lift-and-drop.
Ugh.
Not sure why people are saying SHR is proprietary in some of the comments I read, it's effectively a wrapper for mdadm — though I suppose the GUI itself could be called proprietary.
That’s why I’m hoping Synology rethinks its position. Swapping out trusted, validated drives for unknowns introduces risk we’d rather avoid, especially since most of our client setups now run on SSDs. If compatibility starts breaking, that’s a serious operational concern.
The locking-down is disappointing and unnecessary. Sure, give me the option of using "certified" drives, but don't take away the option of using any drive I have.
People aren't stupid; they know that, yet they do it anyway.
I believe Amazon became so popular because it treated its customers fairly well until recently, now that it is in an extremely dominant position.
But Synology is far from it.
AdGuard Home (DNS filtering)
Scrypted (bridging CCTV to Apple Home)
Jellyfin (media streaming)
Immich (photos)
WireGuard (secure VPN)
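For reference, services like these are commonly run as containers; a minimal sketch for two of them (the image names and ports are the projects' published defaults, but the /srv host paths are assumptions - check each project's docs for the full set of volumes and environment variables):

```shell
# AdGuard Home: DNS on 53, first-run setup UI on 3000.
docker run -d --name adguardhome \
  -p 53:53/udp -p 53:53/tcp -p 3000:3000 \
  -v /srv/adguard/work:/opt/adguardhome/work \
  -v /srv/adguard/conf:/opt/adguardhome/conf \
  adguard/adguardhome

# Jellyfin: web UI on 8096; mount your media read-only.
docker run -d --name jellyfin \
  -p 8096:8096 \
  -v /srv/jellyfin/config:/config \
  -v /srv/media:/media:ro \
  jellyfin/jellyfin
```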
The 2TB SSD is handling everything pretty well, but I’ve got a 2.5" SSD slot left unused. Thinking about adding a second SSD for either storage or backups, or maybe caching for media.
Any cool apps or tools people would recommend for setups like this? Also, curious about how others are using that extra SSD space in a home lab/NAS setup.
So lots of customers thought they were buying a drive that's perfect for NAS, only to discover that the drives were completely unsuitable and took days to restore, or failed altogether. Synology had to release updates to their software to deal with the fake NAS drives, and their support was probably not happy to deal with all the angry customers who thought the problem was with Synology, and not Western Digital for selling fake NAS drives.
If you buy a drive from Synology, you know it will work, and won't secretly be a cheaper drive that's sold as NAS compatible even though it is absolutely unsuitable for NAS.
But that's not what Synology did.
Also, if the image on the Synology page is accurate, they are relabeled Toshiba drives. Which doesn't really seem a good choice for SMB/SOHO NAS devices, because the Toshiba "Machine Gun" MGxx drives are the loudest drives on the market.
That said, I've since added an SSD and moved almost everything to it (Docker, the database and all apps), and it's much nicer in terms of noise.
I'm not sure why Toshiba M-prefixed 7K2 drives would be bad for NAS use cases. They're descendants of what was used in high-performance SPARC servers. Hot, dense, obnoxious, but that's just Fujitsu. They're plenty reliable, performant, and perfect for your all-important online/nearline data! You just have to look away from the bills (/s).
0: https://www.techpowerup.com/265841/some-western-digital-wd-r...
1: https://www.techpowerup.com/265889/seagate-guilty-of-undiscl...
Synology SMB/SOHO NAS devices should not be affected by the drive lockdown (for now).
I have been a happy enough Synology user since 2014, even though I had to hardware repair my main DS1815+ twice in that time (Intel CPU bug and ATX power-on transistor replacement).
Other than two hardware failures in 10 years (not good), the experience was great, including two easy storage expansions and the replacement of one failed drive. These were all BYOD, and the current drives are shucked WD reds with a white sticker (still CMR).
I happily recommended them prior to this change and now will have to recommend against.
The counterargument is, people won’t listen and then blame Synology when their data is affected. At which point it may be too late for anything but attempting data recovery.
Sufficiently nontechnical users may blame the visible product (the NAS) even if the issue is some nuance to the parts choice made by a tech friend to keep it within their budget.
Synology is seen as the premium choice in the consumer NAS argument, so vertically integrating and charging a premium to guarantee “it just works” is not unprecedented.
There are definitely other NAS options as well, if someone is willing to take on more responsibility for understanding the tech.
I have a DS1515+ which has an SSD cache function that uses a whitelisted set of known good drives that function well.
If you plug in a non whitelisted ssd and try to use it as a cache, it pops up a stern warning about potential data loss due to unsupported drives with a checkbox to acknowledge that you’re okay with the risk.
So…there’s really no excuse why they couldn’t have done this for regular drives.
Everyone will understand it costing more; fewer people will understand why the NAS ate their data without the warning it was supposed to provide, because cheap drives that didn't support certain metrics were used.
If Synology wants there to be only one way the device behaves, they have to put constraints on the hardware.
As long as Synology is up front in the requirement and has a return policy for users who buy one and are surprised, I think they’re well within their rights to decide they’re tired of dealing with support costs from misbehaving drives.
As long as they don’t retroactively enforce this policy on older devices I don’t understand the emotionality here. Haven’t you ever found yourself stuck supporting software / systems / etc that customers were trying to cheap out on, making their problems yours?
I see this kind of argument ("X had to do Y, otherwise customers would complain") a lot every time a company does something shady and some contrarian wants to defend them, but it really isn't as smart as it sounds: the company doesn't care if people complain, otherwise they wouldn't be making this kind of move in the first place, because it raises far more complaints. Companies only care if it affects their bottom line, that is, if they can be held liable in court or if the problem is big enough to drive customers away. There's no way this issue would have done either (at least not as much as what they are doing right now, by a very large margin).
It's just yet another case of an executive doing a shady move for short terms profits, there's no grand reasoning behind it.
Is it, though? Most (consumer) NAS systems are probably sold without the drives, which are bought separately. When there is an issue with a drive and it breaks, I'm pretty sure most people technical enough to consider a NAS would attribute that failure to the manufacturer of the drive, not to the manufacturer of the computer they put their drives into.
It's fine to have 'Synology supported drives' which guarantee compatibility, but requiring them is absolute bollocks.
After some time, people started to post about problems with the new WD Red drives. People had trouble restoring failed drives; I had a problem where I think the drives never stopped rewriting sectors (you could hear the hard drives clicking 24/7 even when everything was idle).
Then someone figured out that WD had secretly started selling SMR drives instead of CMR drives. The "updated" models with more cache were much cheaper disks, and they only added more cache to try and cover up the fact that the new disks suffered catastrophic slowdowns during certain workloads (like rebuilding a NAS volume).
This was a huge controversy. But apparently selling SMR drives is profitable, so WD claims the problem is just that NAS software needs to be made compatible with SMR drives, and all is well. They are still selling SMR drives in their WD Red line.
Edit: Here's a link to one of the many forum threads where people discovered the switch: https://community.synology.com/enu/forum/1/post/127228
That's half of it... maybe? Last time I looked, drives that offer host-managed SMR still weren't available to regular consumers. In theory that plus a compatible filesystem would work flawlessly. In practice you can't even buy the relevant hardware.
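One rough way to check what a drive reports about itself (the /dev/sda device name is a placeholder; note that drive-managed SMR, the kind WD shipped here, typically does not advertise itself as zoned, which is exactly the problem):

```shell
# ZONED shows "host-aware" or "host-managed" for drives that expose SMR zones;
# drive-managed SMR usually reports "none", hiding the behavior from the OS.
lsblk -o NAME,ZONED /dev/sda
smartctl -i /dev/sda   # needs smartmontools; the model string is sometimes
                       # the only reliable SMR-vs-CMR hint
```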
WD Red -> SMR technology, slightly cheaper, not suitable for NAS
This would explain why they'd only want to support HDD models the Synology OS can flash firmware updates to.
(It's also convenient to get more margins.)
To be quite blunt: after choosing my NAS, the act of choosing hard drives is actually harder and somewhat overwhelming. Knowing that I can choose from a narrower set of drives that are guaranteed to work correctly is going to really tip the scale in favor of Synology the next time I'm in the market for a NAS.
Basically, Synology drives are not only more expensive, they're also statistically speaking less reliable when building a RAID with them, negating the very purpose of the product. What a dumb move.
<https://news.ycombinator.com/item?id=32048148>
Resulting in, FWIW, my top-rated-ever HN comment, I think:
I don't think this will work the way Synology imagines it.
Some of us are using that with great success to eliminate the locking situation.
It is very hard to get off that list, and I will warn everybody who asks me for tech advice (so literally everybody in my vicinity) about vendors on that list. Good luck, Synology.
I had a Solaris ZFS filer that I ran for a long time (for historical reasons: I jumped on OpenSolaris when it came out and never had a chance to move off Oracle's lineage). I moved to Synology about three years ago because I was sick and tired of managing my own file server. Yet I feel like at this point the cons of Synology are starting to outweigh the manageability advantages that drew me in.
[1] https://www.reddit.com/r/synology/comments/1feqy62/synology_...
But there is actually one reason I am going DIY next time: 'uname -a'. They ship with very old kernels, and I suspect the other utilities are in the same shape. They have not updated their base system in a long time, so they have basically left out all of the amazing changes the kernel has had over the past decade, randomly cherry-picking things instead. Which is fine, but it has to be creating a 'fun' support environment.
Synology’s market is the intersection of:
People who have lots of pirated media, who want generous storage with torrent capabilities.
People who want to store CCTV feeds.
People who find the cloud too expensive, or feel it isn’t private enough.
People with a “two is one, one is none” backup philosophy, for whom the cloud alone is not enough.
Tiny businesses that just need a windows file share.
Their answer to your drives no longer mounting is "connect them to a PC, pull the files and reformat". Where you are supposed to park 20TB of data in the meantime is left up to the customer to deal with.
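For what it's worth, "connect them to a PC" roughly means reassembling the md/LVM stack by hand; a hedged sketch, run as root on a Linux box with the Synology drives attached (the vg1/volume_1 names are a common DSM layout, not guaranteed on every model):

```shell
mdadm --assemble --scan   # detect and assemble the md arrays on the drives
vgchange -ay              # activate LVM volume groups, if the volume uses LVM
lvs                       # locate the data LV
mkdir -p /mnt/recovery
mount -o ro /dev/vg1/volume_1 /mnt/recovery   # mount read-only, then copy off
```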
Fuck Synology.
Synology’s whole business model (arguably QNAP’s too) depends on you wanting more drive bays than 2 and wanting to host apps and similar services. The premium they ask is substantial. You can spec out a beefy Dell PowerEdge with a ton of drive bays for cheap and install TrueNAS, and you’ll likely be much happier.
But the fundamental suggestion I make is to consider a NAS a storage-only product. If you push it to be an app and VM server too, you’re dependent on these relatively closed ecosystems and subject to the whims of the ecosystem owner. Synology choosing to lock out drives is just one example. Their poor encryption support (arbitrary limitations on filenames, or strange full-disk encryption choices) is another. If you dive into any system like Synology long enough, you’ll find warts that ultimately you wouldn’t face if you just used more specialized software than what the NAS world provides.
Yeah, but then you have a PowerEdge with all the noise and heat that goes along with it. I have an old Synology 918 sitting on my desk that is so quiet I didn't notice when the AC adapter failed. I noticed only because my (docker-app-based) cloud backups failed and alerted me.
Unless Synology walks back this nonsense, I'll likely not buy another one, but I do think there is a place for this type of box in the world.
I would recommend a mini-ITX NAS enclosure or a prebuilt system from a vendor that makes TrueNAS boxes. iXsystems does sell prebuilt systems, but they’re still pricey.
I have a synology because I got tired of running RAID on my personal linux machines (had a drobo before that for the same reasons) - but as things like drive locking occur and arguably better OSS platforms available, I'm not sure I'd make the same decision today.
Investors want bigger returns. They know they will not get away with selling a monthly license at this point; a large percentage would not buy anymore.
What other options do you have for recurring revenue? Cloud storage, but I don't think that's a great success.
And then... yes, hard disks. They are consumable devices with a limited lifespan. Label them as your own and charge a hefty fee.
The disks in a (larger) NAS setup cost more than the NAS itself. They want a piece of that pie by limiting your options.
No more syno for me in the future
I don't know what sect of leadership (MBAs?) sees continual enshittification as the strategy but I'll fight this economic warfare forever.
It starts with "Synology's storage systems have been transitioning to a more appliance-like business model." As a long-time user, all of this collectively moves Synology from "highly recommended" to "avoid."
benoau•3d ago
The only parts of Synology I really like are some of their media apps are a very tidy package, I've previously written a compatible server using NodeJS that can use their apps so I think I'll have to pursue that idea further given the vastly superior consumer hardware options that exist for NAS.
joshstrange•3d ago
If I could get that form factor, but with a custom NAS software solution, I’d be very interested.
benoau•3d ago
https://liliputing.com/?s=nas
joshstrange•3d ago
CobaltFire•2d ago
Add in a Mikrotik CCR2004-1G-2XS-PCIE [1] for high speed networking. Choose your own HBA.
0: https://www.sliger.com/products/rackmount/storage/cx3701/
1: https://mikrotik.com/product/ccr2004_1g_2xs_pcie
lostlogin•5h ago
I think this is too harsh. I bought an 1821 a few years back. It takes a generic 10Gb card and does what it said on the box. It’s quick and reliable. What am I missing?