https://flashdba.com/category/storage-for-dbas/understanding...
For SOHO, yes, where no serious database usage is expected. But server/datacenter SSDs are categorized as read-intensive, write-intensive, and mixed-use.
Through metrics I noticed that some SSDs in a cluster were much slower than others despite the hardware being uniform. After a bit of investigation it turned out that the slow devices had been in service longer, and we were not sending DISCARDs to the SSDs due to a default in dm-crypt: https://wiki.archlinux.org/title/Dm-crypt/Specialties#Discar...
The performance penalty for our drives (Samsung DC drives) was around 50% if TRIM was never run. We now run blkdiscard when provisioning new drives and enable discards on the crypt devices and things seem to be much better now.
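In case it helps anyone else, here's a minimal sketch of that kind of provisioning flow, assuming a Linux box with util-linux and cryptsetup; the device path /dev/sdX and the mapping name cryptdata are placeholders, not our actual setup. It checks discard support with lsblk, runs blkdiscard on the (empty!) drive, and opens the dm-crypt mapping with --allow-discards; the persistent equivalent is the discard option in /etc/crypttab.

    #!/usr/bin/env python3
    """Sketch: pre-discard a new SSD and open its dm-crypt mapping with
    discards enabled. Device and mapping names are hypothetical."""
    import subprocess

    DEVICE = "/dev/sdX"        # hypothetical raw device being provisioned
    MAPPER_NAME = "cryptdata"  # hypothetical dm-crypt mapping name

    def run(cmd):
        """Echo a command, run it, and raise if it fails."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Check that the device advertises discard support at all
    #    (non-zero DISC-GRAN / DISC-MAX columns in the lsblk output).
    run(["lsblk", "--discard", DEVICE])

    # 2. On a freshly provisioned, empty drive, discard the whole device
    #    so the SSD's flash translation layer starts from a clean slate.
    run(["blkdiscard", DEVICE])

    # 3. Open the LUKS container with discards allowed, since dm-crypt
    #    drops them by default.
    run(["cryptsetup", "open", "--allow-discards", DEVICE, MAPPER_NAME])

    # 4. After that, periodic fstrim (e.g. the systemd fstrim.timer) on the
    #    mounted filesystem keeps discards flowing to the drive over time.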
Reflecting a bit more, this makes me more bullish on system integrators like Oxide, because I have seen so many cases of software misconfigured so that it never uses the full potential of the hardware. There is a class of company between a one-person shop and somewhere like Facebook/Google: they run their own racks but don't have the in-house expertise to triage and fix these performance issues. If, for example, you are getting 50% less performance out of your DB nodes, what is the cost of that inefficiency?
Switched to some Intel 480GB DC drives and performance was in the low milliseconds, as I would have expected from any drive.
Not sure if I was hitting the DRAM limit of the Samsungs or what; I spent a bit of time troubleshooting, but this was a home lab and used Intel DCs were cheap on eBay. Granted, the Samsung EVOs weren't targeted at that type of workload.