Still to be seen how that works out in the long run, but so far so good.
Reading the linked post, it's not a Linux kernel issue. Rather, the Linux kernel was forced to disable queued TRIM and maybe even NCQ for these drives, due to issues in the drives.
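(For the curious: this is handled through libata's per-model quirk list, where matching drives get flagged so the kernel simply never issues queued TRIM to them and falls back to non-queued TRIM. The snippet below is not the actual kernel code, just a minimal standalone sketch of that mechanism; the real table, model globs and flag names vary by kernel version.)

  #include <stdio.h>
  #include <fnmatch.h>

  /* Illustrative only: mimics the shape of libata's per-model quirk table.
     Real flag names and model globs differ across kernel versions. */
  #define QUIRK_NO_NCQ_TRIM     (1u << 0)  /* never send queued TRIM to this drive */
  #define QUIRK_ZERO_AFTER_TRIM (1u << 1)  /* trimmed blocks read back as zeros */

  struct quirk_entry {
      const char *model_glob;
      unsigned int quirks;
  };

  static const struct quirk_entry quirk_table[] = {
      { "Samsung SSD 8*", QUIRK_NO_NCQ_TRIM | QUIRK_ZERO_AFTER_TRIM },  /* hypothetical glob */
      { NULL, 0 }
  };

  static unsigned int quirks_for_model(const char *model)
  {
      for (const struct quirk_entry *e = quirk_table; e->model_glob; e++)
          if (fnmatch(e->model_glob, model, 0) == 0)
              return e->quirks;
      return 0;
  }

  int main(void)
  {
      const char *model = "Samsung SSD 870 QVO 1TB";
      unsigned int q = quirks_for_model(model);
      printf("%s: queued TRIM %s\n", model,
             (q & QUIRK_NO_NCQ_TRIM) ? "disabled (non-queued TRIM only)" : "allowed");
      return 0;
  }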
I have an old Asus with an M.2 2280 slot that only takes SATA III.
I recall the 840 EVO M.2 (if memory serves me right) is the current drive, but finding a replacement doesn't seem straightforward: most SATA drives are 2.5 in, and the ones that are the correct M.2 2280 form factor tend to be NVMe.
No issues were found on either of them.
Glad for the guy, but here's a slightly different view of the same QVO series:
Device Model: Samsung SSD 870 QVO 1TB
User Capacity: 1,000,204,886,016 bytes [1.00 TB]
== /dev/sda
9 Power_On_Hours 0x0032 091 091 000 Old_age Always - 40779
177 Wear_Leveling_Count 0x0013 059 059 000 Pre-fail Always - 406
241 Total_LBAs_Written 0x0032 099 099 000 Old_age Always - 354606366027
== /dev/sdb
9 Power_On_Hours 0x0032 091 091 000 Old_age Always - 40779
177 Wear_Leveling_Count 0x0013 060 060 000 Pre-fail Always - 402
241 Total_LBAs_Written 0x0032 099 099 000 Old_age Always - 354366033251
== /dev/sdc
9 Power_On_Hours 0x0032 091 091 000 Old_age Always - 40779
177 Wear_Leveling_Count 0x0013 059 059 000 Pre-fail Always - 409
241 Total_LBAs_Written 0x0032 099 099 000 Old_age Always - 352861545042
== /dev/sdd
9 Power_On_Hours 0x0032 091 091 000 Old_age Always - 40778
177 Wear_Leveling_Count 0x0013 060 060 000 Pre-fail Always - 403
241 Total_LBAs_Written 0x0032 099 099 000 Old_age Always - 354937764042
== /dev/sde
9 Power_On_Hours 0x0032 091 091 000 Old_age Always - 40779
177 Wear_Leveling_Count 0x0013 059 059 000 Pre-fail Always - 408
241 Total_LBAs_Written 0x0032 099 099 000 Old_age Always - 353743891717
NB: you need to look at the first (normalized) value of 177 Wear_Leveling_Count to get the remaining-endurance percentage, i.e. 59 and 60 here.

While overall that's not too bad, losing only ~40% after 4.5 years, it means that in another 3-4 years they would be down to 20%, assuming the usage pattern doesn't change and the system doesn't hit write amplification. Sure, someone had the "brilliant" idea ~5 years ago to use desktop-grade QLC flash as ZFS storage for PVE...
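If you want to sanity-check those numbers yourself, here's a minimal sketch using the /dev/sda values above. It assumes attribute 241 counts 512-byte LBAs and that wear progresses linearly, both simplifications:

  #include <stdio.h>

  int main(void)
  {
      /* Values taken from the /dev/sda smartctl output above. */
      double power_on_hours = 40779.0;        /* attr 9  */
      double wear_norm      = 59.0;           /* attr 177, normalized value */
      double lbas_written   = 354606366027.0; /* attr 241 */

      double years      = power_on_hours / (24.0 * 365.25);
      double tb_written = lbas_written * 512.0 / 1e12;   /* assumes 512-byte LBAs */

      double wear_used_pct  = 100.0 - wear_norm;          /* endurance already consumed */
      double wear_per_year  = wear_used_pct / years;      /* assumes linear wear */
      double years_to_20pct = (wear_norm - 20.0) / wear_per_year;

      printf("power-on time : %.1f years\n", years);
      printf("host writes   : %.1f TB (%.1f TB/year)\n", tb_written, tb_written / years);
      printf("wear rate     : %.1f %%/year\n", wear_per_year);
      printf("down to 20%%   : in another ~%.1f years\n", years_to_20pct);
      return 0;
  }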
vardump•1d ago
magicalhippo•1d ago
[1]: https://www.tomshardware.com/pc-components/storage/unpowered...
0manrho•1d ago
The only insight you can glean from that is that bad flash is bad, and worn bad flash is even worse, and even that is frankly a stretch given the tiny sample size and the lack of a control group.
The reality is that it's non-trivial to determine data retention/resilience in a powered-off state, at least as it pertains to arriving at a useful and reasonably accurate generalization of "X characteristics/features result in poor data retention/endurance when powered off in Y types of devices," and being able to provide the receipts to back that up. There are far more variables going on under the hood than most people realize, in how different controllers and drives are architected (hardware) and programmed (firmware). Thermal management is a huge factor that is often overlooked or misunderstood, and it has a substantial impact on flash endurance (and performance). I could go into more specifics if there's interest (storage at scale/speed is my bread and butter), but this post is long enough.
All that said, the general mantra remains true: more bits per cell generally means the data in each cell is more fragile/sensitive, though that's usually discussed in the context of write-cycle endurance.
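A hand-wavy way to see why: each extra bit per cell doubles the number of charge states that have to be distinguished inside roughly the same voltage window, so the margin between adjacent states shrinks and the same amount of charge leakage costs proportionally more. Purely illustrative toy numbers (real margins depend on the process node and controller):

  #include <stdio.h>

  int main(void)
  {
      const char *names[] = { "SLC", "MLC", "TLC", "QLC" };
      for (int bits = 1; bits <= 4; bits++) {
          int states = 1 << bits;                 /* 2^bits charge levels per cell */
          double rel_margin = 1.0 / (states - 1); /* share of the voltage window
                                                     between adjacent levels */
          printf("%s: %d bits/cell, %2d states, ~%.0f%% of the window per step\n",
                 names[bits - 1], bits, states, rel_margin * 100.0);
      }
      return 0;
  }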
ffsm8•48m ago
Can you elaborate on the reason for your critique, considering they're pretty much just testing from the perspective of the consumer? I thought their explicit goal is not to provide highly technical analysis for niche preferences, but instead to look at it for a John Doe who's thinking about buying X and what it would mean for his use cases. From that perspective, their reporting seemed pretty spot on and not shoddy, but I'm not an expert on the topic.