
Fast and cheap bulk storage: using LVM to cache HDDs on SSDs

https://quantum5.ca/2025/05/11/fast-cheap-bulk-storage-using-lvm-to-cache-hdds-on-ssds/
89•todsacerdoti•5h ago

Comments

gopalv•4h ago
As always with caching, YMMV depending on access patterns, but the most consistently cacheable pattern for me has been the ext4 journal.

They are tiny and often hit with a huge number of IOPS.

Ext4 supports external journals, and moving the journal to a single SSD for a large number of otherwise slow SMR disks has worked great in the past.

However, that SSD then becomes a single point of failure: unlike a read cache, losing it means data loss across several disks.

Where I was working that didn't matter, as I was mostly working with HDFS, which likes a JBOD layout of several disks instead of RAID (no battery-backed write caches), tolerates a single node failing completely, and has a ton more metadata operations because it writes a single large file as many fixed-size files named blk_<something>, with a lot of directories containing thousands of files.

SSDs were expensive then, but they have had a decade of getting cheaper since.
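For anyone wanting to try it, a minimal sketch of the setup, assuming /dev/sda1 is an existing (unmounted) ext4 filesystem and /dev/sdb1 is a partition on the SSD; both device names are placeholders:

  # create a dedicated external journal device on the SSD (block size should match the filesystem)
  mke2fs -O journal_dev /dev/sdb1
  # drop the internal journal from the unmounted filesystem
  tune2fs -O ^has_journal /dev/sda1
  # attach the external journal on the SSD
  tune2fs -j -J device=/dev/sdb1 /dev/sda1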

trinsic2•3h ago
This reminds me of hybrid drives. When the NVM failed it was a nightmare to deal with. IMHO it's a bad idea from a stability perspective to be caching off-drive to non-volatile memory.
wtallis•1h ago
Your last sentence does not follow from the preceding one. Hybrid drives were doomed by having truly tiny caches, which made them not particularly fast (you need a lot of flash chips in parallel to get high throughput), prone to cache thrashing, and quick to wear out their NAND flash. These days, even if you try, it's hard to build a caching system that bad. There just aren't SSDs small and slow enough to have such a crippling effect. Even using a single consumer SSD as a cache for a full shelf of hard drives wouldn't be as woefully unbalanced as the SSHDs that tried to get by with only 8GB of NAND.
GauntletWizard•2h ago
The same goes for ZFS; there's provision for a "ZIL" device (ZFS Intent Log), basically the journal. ZFS is a little nicer in that this journal is explicitly disposable: if you lose your ZIL device, you lose any writes since its horizon, but you don't lose the whole array.

The next step up is building a "metadata" device, which stores the filesystem metadata but not the data. This is dangerous in the way the ext4 journal is: lose the metadata, and you lose everything.

Both are massive speedups. When doing big writes, a bunch of spinning rust can't achieve full throughput without an SSD ZIL. My 8+2 array can write nearly two gigabits, but it's abysmal (roughly the speed of a single drive) without one.

Likewise, a metadata device can make the whole filesystem feel as snappy as an SSD, but it's unnecessary if you have enough cache space; ZFS prefers to keep metadata in the cache, so if your metadata fits on your cache SSD, most of it will stay loaded.
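For reference, a sketch of adding a mirrored SLOG, assuming a pool named tank and two spare NVMe devices (names are placeholders):

  # mirrored log vdev to absorb sync writes; losing it only costs un-replayed recent writes
  zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1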

Szpadel•2h ago
I just want to mention that the ZIL only speeds up sync writes: the syscall completes once the data is written to the ZIL, even though the write may still be in progress on the slower storage.

The ZIL is also basically write-only storage, so an SSD without very significant over-provisioning will die quickly (you only read from the ZIL after an unclean shutdown).

If you don't really care about keeping the latest version of a file (the risk of losing recent changes is acceptable), you can set sync=disabled for that dataset and get great performance without a ZIL.
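A sketch of that setting, assuming a dataset named tank/scratch where losing the last few seconds of writes is acceptable:

  # acknowledge writes immediately; recent writes may be lost on power failure
  zfs set sync=disabled tank/scratch
  # revert to the default behaviour later
  zfs set sync=standard tank/scratch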

JonChesterfield•1h ago
There's a configuration option that amounts to putting a directory (or maybe a volume) entirely into the metadata drive.

It's been a long time since I set that up, but the home storage has spinning rust plus a RAID 1 of Crucial SSDs (SATA! But ones with a capacitor to hopefully handle writes after power loss), where the directory I care about performance for lives on the SSD subarray. It still presents as one blob of storage. Metadata is on the SSDs too; probably no ZIL, but I could be wrong about that. It made ls a lot more reasonable.

Thinking about it that system must be trundling towards expected death, it might be a decade old now.
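The knob being described is presumably the special_small_blocks dataset property: blocks at or below the threshold land on the special (metadata) vdev, so setting it equal to the dataset's recordsize pushes effectively all of that dataset's data there. A sketch, assuming a pool named tank with a special vdev and a dataset tank/fast:

  zfs set recordsize=128K tank/fast
  # every block in this dataset is <= 128K, so all of it goes to the special vdev
  zfs set special_small_blocks=128K tank/fast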

bjt12345•3h ago
Oh I miss Optane drives.
ggm•3h ago
The logic cited for not using ZFS reduces to two things: FUD, and it not being in mainline Linux.

The pro case for BTRFS is being able to do JBOD with a bit of additional comfort around mirror state over drives.

Szpadel•3h ago
Something people forget about RAID 1 is that it only protects against catastrophic disk failure.

That means your drive needs to be outright dead for the RAID to provide its protection, and that is usually the case.

The problem is when a drive starts corrupting the data it reads or writes. In that case the RAID has no way to know, and can even corrupt the data on the healthy drive (data is read corrupted and then written back to both drives).

The issue is that there are two copies of the data and the RAID has no way of telling which one is correct, so it basically flips a coin and picks one of them, even if the filesystem knows the content makes no sense.

That's basically the biggest advantage of filesystems like ZFS or btrfs that manage RAID themselves: they have checksums, so they know which copy is valid, can recover, and can tell you that a drive which appears healthy is actually corrupting data and should probably be replaced.
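With btrfs, for example, a scrub verifies every copy against its checksum and repairs from the good mirror, and the per-device error counters then show which drive is silently returning bad data (the mount point is a placeholder):

  # read all copies, verify checksums, repair from the healthy mirror (-B runs in the foreground)
  btrfs scrub start -B /mnt/data
  # per-device corruption/IO error counters point at the failing drive
  btrfs device stats /mnt/data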

iforgotpassword•2h ago
I had that experience once, around 2011. I hosted a Minecraft server on a box with RAID 1.

The "cool" part was that I ran a cronjob that rendered the map to a png file once and hour, and at some point a friend asked why there were holes in the map. Back then, Minecraft stored every 16x16 chunk of the map in an individual gzipped file. When the raid1 decided to read the chunk from the bad drive, it couldn't unzip it. If that happened to the renderer, there was a hole on the map. If that happened to the game server, it would regenerate the chunk, and overwrite the old one on both drives, even the healthy one. Luckily as far a I remember that only happened on random terrain, otherwise someone would have ended up with half their house missing.

iam-TJ•1h ago
When using LVM one can use the dm-integrity target to detect data corruption.
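Recent LVM can also layer dm-integrity under each raid1 leg so that a corrupted leg is detected and the read is served from the other copy; a sketch, assuming a volume group named vg0 (sizes and names are placeholders):

  # new raid1 LV with per-leg integrity checksums
  lvcreate --type raid1 -m 1 --raidintegrity y -L 100G -n data vg0
  # or add integrity to an existing raid1 LV
  lvconvert --raidintegrity y vg0/data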
HumanOstrich•26m ago
Reading your comments is rather painful with all the typos. I recommend improving your typing and proofreading habits.
riedel•3h ago
Does someone know what the technology behind the tiering on QNAP NAS systems is? I use an SSD RAID 1 in front of a RAID 10, which seems to work great.

IMHO flexible tiering, rather than caching, would be very nice for many systems, as it is rather difficult to teach users to separate mostly stale data from changing data. It often does not have to be perfect.

rzzzt•10m ago
Bcachefs supports both caching and tiering: https://wiki.archlinux.org/title/Bcachefs#SSD_caching

A FUSE-based solution is autotier: https://github.com/45Drives/autotier

rsync•3h ago
A reminder that ZFS recently (in the past ~5 years) implemented dedicated metadata devices ... which allow you to place either filesystem metadata or even small files on a blazing fast SSD mirror:

https://www.rsync.net/resources/notes/2021-q3-rsync.net_tech...

This is a quick and easy way to add thousands of iops to even something very slow like a raidz3 zpool.

As always:

"Let's repeat, and emphasize: unlike an SLOG or L2ARC which are merely inconvenient to lose, if you lose your metadata vdev (your "special" vdev) you will lose your entire zpool just as surely as if you lost one of the other full vdevs ..."

sitkack•2h ago
I would hope ZFS has a way to mirror metadata from the pool onto an SSD, so it acts like a cache but doesn't increase the probability of data loss.
wongarsu•2h ago
If you set up a normal L2ARC (read cache device), it will cache both data and metadata. However, you can configure it to cache only one of the two. Set it to metadata only and size it appropriately, and you basically have a read-only metadata mirror.

If you also want to have fast writes you can get a second SSD and set up a mirrored metadata device (storing metadata on mirrored SSDs, and regular data on whatever the rest of your pool uses)
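A sketch of the metadata-only L2ARC, assuming a pool named tank and a spare NVMe device (names are placeholders):

  # add an L2ARC device
  zpool add tank cache /dev/nvme0n1
  # cache only metadata in it (valid values: all | none | metadata)
  zfs set secondarycache=metadata tank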

Padriac•3h ago
RAID is great but without monitoring and alerting you can still have a problem. Better still is the automatic creation of incident records and escalation.
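For md arrays the baseline is mdadm's own monitor mode; a sketch with a placeholder mail address (anything like incident records or escalation has to be layered on top):

  # watch /proc/mdstat in the background and mail on degraded/failed/spare events
  mdadm --monitor --scan --daemonise --delay=300 --mail=admin@example.com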
iam-TJ•1h ago
When using LVM there is no need to use separate mdadm (MD) based RAID - just use LVM's own RAID support.

I have a workstation with four storage devices: two 512GB SSDs, one 1TB SSD, and one 3TB HDD. I use LUKS/dm-crypt for Full Disk Encryption (FDE) of the OS and most data volumes, but two of the SSDs and the volumes they hold are unencrypted. These are for caching or for public and ephemeral data that can easily be replaced: source code of public projects, build products, experimental and temporary OS/VM images, and the like.

  dmsetup ls | wc -l 
reports 100 device-mapper Logical Volumes (LV). However only 30 are volumes exposing file-systems or OS images according to:

  ls -1 /dev/mapper/${VG}-* | grep -E "${VG}-[^_]+$" | wc -l
The other 70 are LVM raid1 mirrors, writecache, crypt or other target-type volumes.

This arrangement allows me to choose caching, RAID, and any other device-mapper target combinations on a per-LV basis. I divide the file-system hierarchy into multiple mounted LVs, each tailored to its usage, so I can choose both the device-mapper options and the file-system type. For example, /var/lib/machines/ is an LV with BTRFS to work with systemd-nspawn/machined, so I have a base OS sub-volume and then various per-application snapshots based on it, whereas /home/ is a RAID 1 mirror over multiple devices and /etc/ is also a RAID 1 mirror.
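As an illustration of that per-LV flexibility, a sketch of attaching a write cache to a single LV, with ${VG}, the LV name, and the NVMe partition as placeholders:

  # small fast LV on the SSD to act as the cache volume
  lvcreate -L 16G -n home_cache ${VG} /dev/nvme0n1p2
  # attach it to the slow LV as a dm-writecache
  lvconvert --type writecache --cachevol home_cache ${VG}/home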

The RAID 1 mirrors can easily be backed up to remote hosts using iSCSI block devices. Simply add the iSCSI volume to the mirror as an additional member, allow it to sync to 100%, and then remove it from the mirror (one just needs to be aware of open files and minimise them when doing so; syncing at start-up or shutdown when users are logged out, or from the start-up/shutdown initrd, is a useful strategy).

Doing it this way rather than as file backups means that in the event of disaster I can recover immediately on another PC simply by creating an LV RAID 1 from the iSCSI volume, adding local member volumes, letting the local volumes sync, and then removing the iSCSI volume.
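A sketch of that add/sync/remove cycle, assuming the iSCSI device has already been added to ${VG} as a PV (the /dev/mapper path is a placeholder):

  # add the iSCSI-backed PV as an extra raid1 leg
  lvconvert -m +1 ${VG}/${LV} /dev/mapper/iscsi_backup
  # wait for sync_percent to reach 100
  lvs -a -o name,sync_percent ${VG}
  # then drop that leg again
  lvconvert -m -1 ${VG}/${LV} /dev/mapper/iscsi_backup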

I initially allocate a minimum of space to each volume. If a volume gets close to capacity - or runs out - I simply do a live resize using e.g:

  lvextend --resizefs --size +32G ${VG}/${LV}
or, if I want to direct it to use a specific Physical Volume (PV) for the new space:

  lvextend --resizefs --size +32G ${VG}/${LV} ${PV}
One has to be aware that --resizefs uses 'fsadm' and only supports a limited set of file-systems (ext*, ReiserFS, and XFS), so if using BTRFS or others, their own resize operations are required, e.g:

  btrfs filesystem resize max /srv/NAS/${VG}/${LV}
ecef9-8c0f-4374•29m ago
mdadm RAID is rock solid. LVM RAID is not at the same level. There was a bug for years that made me doubt anybody even uses LVM RAID: I could not fix a broken array without unmounting it. mdadm and ext4 are what I use in production with all my trust; LVM and btrfs are for hobby projects.
rzzzt•8m ago
XFS could only be grown (using xfs_growfs), not shrunk, for quite a while; I don't know if that has changed in recent times.
whazor•1h ago
Theoretically, since you have three drives, you would want one of them set up with writeback caching. That way you could double the speed of your writes.
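In lvmcache terms that would mean attaching the cache in writeback mode rather than the default writethrough; a sketch with placeholder volume group, LV, and cache-volume names:

  # cache LV absorbs writes before they reach the HDD-backed LV
  lvconvert --type cache --cachevol fast_ssd --cachemode writeback vg/bulk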

Linux on Snapdragon X Elite: Linaro and Tuxedo Pave the Way for ARM64 Laptops

https://www.linaro.org/blog/linux-on-snapdragon-x-elite/
46•MarcusE1W•2h ago•14 comments

When We Get Komooted

https://bikepacking.com/plog/when-we-get-komooted/
64•atakan_gurkan•2h ago•18 comments

Chemical process produces critical battery metals with no waste

https://spectrum.ieee.org/nmc-battery-aspiring-materials
103•stubish•4h ago•6 comments

Sapients paper on the concept of Hierarchical Reasoning Model

https://arxiv.org/abs/2506.21734
26•hansmayer•1h ago•5 comments

Fast and cheap bulk storage: using LVM to cache HDDs on SSDs

https://quantum5.ca/2025/05/11/fast-cheap-bulk-storage-using-lvm-to-cache-hdds-on-ssds/
89•todsacerdoti•5h ago•22 comments

Smallest particulate matter sensor revolutionizes air quality measurement

https://www.bosch-sensortec.com/news/worlds-smallest-particulate-matter-sensor-bmv080.html
66•Liftyee•5h ago•21 comments

The future is not self-hosted, but self-sovereign

https://www.robertmao.com/blog/en/the-future-is-not-self-hosted-but-self-sovereign
40•robmao•4h ago•36 comments

A low power 1U Raspberry Pi cluster server for inexpensive colocation

https://github.com/pawl/raspberry-pi-1u-server
47•LorenDB•3d ago•20 comments

Implementing dynamic scope for Fennel and Lua

https://andreyor.st/posts/2025-06-09-implementing-dynamic-scope-for-fennel-and-lua/
9•Bogdanp•3d ago•0 comments

Beyond Food and People

https://aeon.co/essays/nietzsches-startling-provocation-youre-edible-and-delicious
7•Petiver•2h ago•0 comments

Reading QR codes without a computer

https://qr.blinry.org/
6•taubek•3d ago•1 comments

Resizable structs in Zig

https://tristanpemble.com/resizable-structs-in-zig/
125•rvrb•11h ago•54 comments

How we rooted Copilot

https://research.eye.security/how-we-rooted-copilot/
303•uponasmile•17h ago•119 comments

16colo.rs: ANSI/ASCII art archive

https://16colo.rs/
42•debo_•3d ago•11 comments

Low cost mmWave 60GHz radar sensor for advanced sensing

https://www.infineon.com/part/BGT60TR13C
72•teleforce•3d ago•27 comments

4k NASA employees opt to leave agency through deferred resignation program

https://www.kcrw.com/news/shows/npr/npr-story/nx-s1-5481304
77•ProAm•3h ago•63 comments

Purple Earth hypothesis

https://en.wikipedia.org/wiki/Purple_Earth_hypothesis
222•colinprince•3d ago•61 comments

Janet: Lightweight, Expressive, Modern Lisp

https://janet-lang.org
42•veqq•7h ago•10 comments

Rust running on every GPU

https://rust-gpu.github.io/blog/2025/07/25/rust-on-every-gpu/
540•littlestymaar•23h ago•178 comments

Show HN: QuickTunes: Apple Music player for Mac with iPod vibes

https://furnacecreek.org/quicktunes/
74•albertru90•9h ago•21 comments

Coronary artery calcium testing can reveal plaque in arteries, but is underused

https://www.nytimes.com/2025/07/26/health/coronary-artery-calcium-heart.html
88•brandonb•11h ago•76 comments

Cable Bacteria Are Living Batteries

https://www.asimov.press/p/cable-bacteria
26•mailyk•3d ago•2 comments

Personal aviation is about to get interesting (2023)

https://www.elidourado.com/p/personal-aviation
100•JumpCrisscross•10h ago•87 comments

What went wrong for Yahoo

https://dfarq.homeip.net/what-went-wrong-for-yahoo/
177•giuliomagnifico•14h ago•169 comments

Paul Dirac and the religion of mathematical beauty (2011) [video]

https://www.youtube.com/watch?v=jPwo1XsKKXg
66•magnifique•10h ago•4 comments

Getting decent error reports in Bash when you're using 'set -e'

https://utcc.utoronto.ca/~cks/space/blog/programming/BashGoodSetEReports
117•zdw•3d ago•32 comments

The natural diamond industry is getting rocked. Thank the lab-grown variety

https://www.cbc.ca/news/business/lab-grown-diamonds-1.7592336
203•geox•20h ago•238 comments

Arvo Pärt at 90

https://www.theguardian.com/music/2025/jul/24/the-god-of-small-things-celebrating-arvo-part-at-90
81•merrier•12h ago•20 comments

Torqued Accelerator Using Radiation from the Sun (Tars) for Interstellar Payload

https://arxiv.org/abs/2507.17615
60•virgildotcodes•10h ago•6 comments

Teach Yourself Programming in Ten Years (1998)

https://norvig.com/21-days.html
80•smartmic•11h ago•35 comments