The difference between SLC and MLC is just that MLC has four program voltage levels instead of two, so when reading the data back you have to distinguish between charge levels that are closer together. Same basic cell design. Honestly I can't quite believe MLC works at all, let alone QLC. I do wonder why there's no way to operate QLC as if it were MLC, other than the manufacturer not wanting to allow it.
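For intuition, here's a toy readout model in Python (illustrative only: real parts use non-uniform thresholds and Gray-coded level maps, and the 3 V window is an assumption) showing how each extra bit per cell packs the levels closer together:

    VMAX = 3.0  # assumed total voltage window, volts

    def read_level(voltage, bits_per_cell):
        """Map a cell voltage to a level via evenly spaced thresholds."""
        levels = 2 ** bits_per_cell          # SLC=2, MLC=4, TLC=8, QLC=16
        step = VMAX / levels
        return min(int(voltage / step), levels - 1)

    DRIFT = 0.10  # pretend the cell has leaked 100 mV of charge
    for bits, name in [(1, "SLC"), (2, "MLC"), (3, "TLC"), (4, "QLC")]:
        step = VMAX / 2 ** bits
        programmed = 1.5 * step              # middle of level 1
        ok = read_level(programmed - DRIFT, bits) == 1
        print(f"{name}: {step * 1000:4.0f} mV between levels; "
              f"100 mV of drift {'is fine' if ok else 'flips the read'}")

With the same drift, SLC still reads back correctly while QLC already misreads, which is roughly why fewer bits per cell buys you endurance and retention.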
You can run an error-correcting code on top of the regular blocks of memory, storing, for example (really just an example; I don't know how large the erasable 'blocks' in flash memory are), 4096 data bits in every 8192 bits of memory, and recovering those 4096 bits from each 8192-bit block you read in the disk driver. I think that would be better than a simple "map low levels to 0, high levels to 1" scheme.
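As a toy version of that rate-1/2 idea, here's an extended Hamming (8,4) code that stores 4 data bits in every 8 bits and repairs any single flipped bit per byte. The sizes are made up, as above; real SSD controllers use far stronger BCH/LDPC codes over whole pages.

    def encode84(nibble):
        """Pack 4 data bits into an 8-bit codeword (positions 0..7)."""
        d = [(nibble >> i) & 1 for i in range(4)]
        p1 = d[0] ^ d[1] ^ d[3]                  # covers positions 1,3,5,7
        p2 = d[0] ^ d[2] ^ d[3]                  # covers positions 2,3,6,7
        p3 = d[1] ^ d[2] ^ d[3]                  # covers positions 4,5,6,7
        bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]   # positions 1..7
        p0 = 0
        for b in bits:                           # overall parity (position 0);
            p0 ^= b                              # a full SECDED decoder uses it
        bits = [p0] + bits                       # to flag double-bit errors
        return sum(b << i for i, b in enumerate(bits))

    def decode84(word):
        """Recover the 4 data bits, repairing a single flipped bit."""
        bits = [(word >> i) & 1 for i in range(8)]
        syndrome = 0
        for pos in range(1, 8):
            if bits[pos]:
                syndrome ^= pos
        if syndrome:                             # syndrome = flipped position
            bits[syndrome] ^= 1
        return bits[3] | (bits[5] << 1) | (bits[6] << 2) | (bits[7] << 3)

    # Any single flipped bit still decodes to the original nibble:
    for flip in range(8):
        assert decode84(encode84(0b1011) ^ (1 << flip)) == 0b1011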
Loads of drives do this (or SLC) internally. Though it would be handy if a physical format could change the provisioning at the kernel-accessible layer.
Manufacturers often do sell such pMLC or pSLC (p = pseudo) cells as "high endurance" flash.
TLC/QLC works just fine; it's really difficult to consume the erase cycles unless you really are writing to the disk 24/7 at hundreds of megabytes a second.
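Back-of-envelope, with made-up but plausible numbers (say a 1 TB TLC drive rated for 600 TBW):

    TBW = 600                      # rated endurance, terabytes written
    desktop = 0.02                 # ~20 GB/day of writes, in TB/day
    torture = 0.2 * 86400 / 1000   # 200 MB/s sustained = ~17.3 TB/day

    print(f"desktop use:   {TBW / desktop / 365:.0f} years")  # ~82 years
    print(f"200 MB/s 24/7: {TBW / torture:.0f} days")         # ~35 days

So the rating only really matters for sustained heavy-write workloads; a normal desktop never gets close.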
This is somewhat confused writing. Consumer SSDs usually do not have a data retention spec; even in this very detailed Micron datasheet you won't find one: https://advdownload.advantech.com/productfile/PIS/96FD25-S2T... Meanwhile, the data retention spec for enterprise SSDs applies at the end of their rated life, which is usually a DWPD/TBW intensity you won't reach in actual use anyway - that's where numbers like "3 months @ 50 °C" or whatever come from.
In practice, SSDs don't tend to lose data over realistic time frames. Don't hope for a "guaranteed by design" spec on that, though; some pieces of silicon are more equal than others.
One concern I have is that B2's download costs mean verifying remote snapshots could get expensive. I suppose I could use `restic check --read-data-subset X` to do a random spot-check of smaller portions of the data, but I'm not sure how valuable that would be.
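One way to spread that cost: `--read-data-subset` accepts an `n/t` form that verifies one t-th of the repository's pack files per run, so a rotating job covers everything over t runs. A sketch (bucket/repo names are placeholders):

    # Verify a different fifth of the repo each week; after five weeks
    # every pack file has been downloaded and checked once.
    week=$(( ($(date +%V) % 5) + 1 ))
    restic -r b2:my-bucket:my-repo check --read-data-subset="$week/5"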
I like how it resembles LUKS encryption, where I can have one key for the automated backup process, and a separate memorize-only passphrase for if things go Very Very Wrong.
[0] https://restic.readthedocs.io/en/latest/080_examples.html#ba...
[Edit: LOL, I see someone else posted literally the same example within the same minute. Funny coincidences.]
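For reference, the LUKS-like key setup mentioned above is restic's built-in key management; a sketch (repo name is a placeholder):

    restic -r b2:my-bucket:my-repo key add    # prompts for the new passphrase
    restic -r b2:my-bucket:my-repo key list   # either key unlocks the repo

The automated job gets one passphrase (e.g. via RESTIC_PASSWORD_FILE) and the memorize-only one stays offline.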
That said, they could also be storing relatively small amounts. For example, I back up to Backblaze B2, advertised at $6/TB/month, so ~300 GB at rest is a "couple" bucks a month (0.3 TB × $6/TB ≈ $1.80).
The more interesting thing to note from those standards is that the required retention period differs between the "Client" and "Enterprise" categories.
The Enterprise category only has a power-off retention requirement of 3 months.
The Client category has a power-off retention requirement of 1 year.
Of course there are two sides to every story...
The Enterprise category standard assumes power-on active use of 24 hours/day, while the Client category is only intended for 8 hours/day.
As with many things in tech, it's up to the user to pick which side they compromise on.
[1] https://files.futurememorystorage.com/proceedings/2011/20110...
Specifically in JEDEC JESD218. (Write endurance in JESD219.)
Do I just plug it in and leave the computer on for a few minutes? Does it need to stay on for hours?
Do I need to run a special command or TRIM it?
The problem is the test will take years, be out of date by the time it's released, and new controllers will be out with potentially different needs/algorithms.
https://www.tomshardware.com/pc-components/storage/unpowered...
The data on this SSD, which hadn't been used or powered up for two years, was 100% good on initial inspection. All the data hashes verified, but it was noted that the verification time took a smidgen longer than two years previously. HD Sentinel tests also showed good, consistent performance for a SATA SSD.
Digging deeper, all isn't well, though. Firing up Crystal Disk Info, HTWingNut noted that this SSD had a Hardware ECC Recovered value of over 400. In other words, the disk's error correction had to step in to fix hundreds of data-based parity bits.
...
As the worn SSD's data was being verified, there were already signs of performance degradation. The hashing audit eventually revealed that four files were corrupt (hash not matching). Looking at the elapsed time, it was observed that this operation astonishingly took over 4x longer, up from 10 minutes and 3 seconds to 42 minutes and 43 seconds.
Further investigations in HD Sentinel showed that three out of 10,000 sectors were bad and performance was 'spiky.' Returning to Crystal Disk Info, things look even worse. HTWingNut notes that the uncorrectable sectors count went from 0 to 12 on this drive, and the hardware ECC recovered value went from 11,745 before to 201,273 after the day's tests.

No idea if that's enough, but it seems like a reasonable place to start.
Like, does an SSD do some sort of refresh on power-on, or every N hours, or do you have to access the specific block, or...? What if you interrupt the process? E.g., if you keep an NVMe drive in an external case and plug it in once a month for a few minutes to use it as a huge flash drive, is that a problem?
What about the unused space? Is a 4 TB drive used to transport 1 GB of stuff going to suffer from the unused space decaying?
It's all very unclear what any of this means in practice and how a user is supposed to manage it.
That depends on the SSD controller implementation, specifically whether it proactively moves data from the SLC cache to the TLC/QLC area. I expect most controllers to do this, given that if they don't, the drive will quickly lose performance as it fills up. There's basically no reason not to proactively move stuff over.
My desktop computer is generally powered except when there is a power failure, but among the million+ files on its SSD there are certainly some that I do not read or write for years.
Does the SSD controller automatically look for used blocks that need to have their charge refreshed and do so, or do I need to periodically do something like `find / -type f -print0 | xargs -0 cat > /dev/null` to make sure every file gets read occasionally?
I'm unsure if `dd if=/the/disk of=/dev/null` does the read function.
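For what it's worth, a raw device read does pull every LBA through the controller, unlike the per-file `find` approach, which only touches allocated file data. Whether the firmware refreshes marginal cells in response (and whether trimmed blocks ever hit the NAND at all) is controller-dependent. A sketch, with the device name assumed:

    # Read the entire device, including unallocated space
    sudo dd if=/dev/nvme0n1 of=/dev/null bs=4M status=progress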