Author clarifies there are flags to change that behaviour, to make it actually compare file contents, and then shares those names.
It seems like you didn't read the article.
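For reference, the flags in question are presumably rsync's -c/--checksum (decide by comparing file contents rather than size and mtime) and -I/--ignore-times (transfer everything regardless), e.g.:

    rsync -av --checksum /srcdir/ host:/dstdir/   # checksum both sides before deciding to skip a file

(The paths here are just placeholders.)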
This should never be a surprise to people unless this is their first time using Unix.
“Assume these two files are the same, check whether either system says it has modified the file or whether the size has changed, and call it a day” is a pretty fair assumption, and something even I knew rsync does, and I’ve only used it once in a project 10 years ago. I am sure Rachel also knows this.
So, what is the problem? Is data not being synced? Is data being synced too often? And why do these assumptions lead to either happening? What horrors is the author expecting the reader to see when running the suggested command?
That is what is not explained in the article.
Probably because she/he doesn't know. Could be lots of things, because FYI mtime can be modified by the user. Go `touch` a file.
In all likelihood, it happens because of a package installation, where a package install sets the same mtime on a file which has the same size but different file contents. That's where I usually see it.
`httm` allows one to dedup snapshot versions by size, then hash the contents of identically sized versions, for this very reason.
--dedup-by[=<DEDUP_BY>]: Comparing file versions solely on the basis of size and modify time (the default "metadata" behavior) may return what appear to be "false positives". This is because metadata, specifically modify time and size, is not a precise measure of whether a file has actually changed. A program might overwrite a file with the same contents, and/or a user can simply update the modify time via 'touch'. When specified with the "contents" option, httm compares the actual file contents of same-sized file versions, overriding the default "metadata" only behavior...
And -- this makes it quite effective at proving how often this happens:
> httm -n --dedup-by=contents /usr/bin/ounce | wc -l
3
> httm -n --dedup-by=metadata /usr/bin/ounce | wc -l
30
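It's also easy to reproduce the package-install scenario above by hand, if anyone wants to watch rsync's quick-check get fooled (a rough sketch; the directory and file names are arbitrary):

    mkdir -p src dst
    printf 'AAAA\n' > src/file
    rsync -a src/ dst/
    printf 'BBBB\n' > src/file      # same size, different contents
    touch -r dst/file src/file      # copy the old mtime back onto the source
    rsync -avn src/ dst/            # quick-check: nothing to transfer
    rsync -avcn src/ dst/           # --checksum: the changed file is caught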
1) Pages from the shared library are lazily loaded into memory, so if you access a page that hasn't been loaded yet you'll get it from the new binary, which is likely to cause problems.
2) Pages from the shared library might be 'swapped' back to disk due to memory pressure. I'm not sure whether the pager will just throw the page away and later fault it back in from the new file contents, or whether it will notice the on-disk page has changed and use swap for write-back to preserve the original page.
Also, I remember it used to be possible to trigger some error if you tried to open a shared library for writing while it was in use, but I can't seem to trigger that error anymore.
Degradation won't get worse except when a file has changed without its metadata having been modified, but that's exactly the case you want to catch.
And the speed difference is only tiny for tiny files.
BorgBackup is clearly a good option.
Once one enables checksums in rsync, doesn't Borg have the same issue? I believe Borg then needs to do the same rolling checksum over all the data as well.
ZFS sounds like the better option -- just take the last local snapshot transaction, then compare to the transaction of the last sent snapshot, and send everything in between.
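A minimal sketch of what that looks like with incremental send/recv (pool, dataset and snapshot names are made up):

    zfs snapshot tank/home@today
    zfs send -i tank/home@yesterday tank/home@today | ssh backuphost zfs receive -u backup/home

Only the blocks that changed between the two snapshots go over the wire, with no per-file scanning or checksumming on either side.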
And the problem re: Borg and rsync isn't just the cost of reading back and checksumming the data -- for 100,000s of small files (1000s of home directories on spinning rust), it is the speed of those many metadata ops too.
...but isn't that the problem described in the article? If that is the case, Borg would seem to be the worst of all possible worlds, because now one can't count on its checksums?
If one worries about silent file modifications that alter content but keep timestamp and length, then this sounds like malware and, as such, backup tools are not the right tool to deal with it.
Backup tools should deal with file changes that lack corresponding metadata changes, even though it's more convenient to say the system should just always work ideally. At the end of the day, the goal of a backup tool is to back up the data, not to skip some of the data because it's faster.
Agreed. But I think that elides the point of the article, which was "I worry about backing up all my data with my userspace tool."
As noted above, Borg and rsync seem to fail here, because it's wild how much the metadata can screw with you.
> If one worries about silent file modifications that alters content but keep timestamp and length, then this sounds like malware and, as such, the backup tools are not the right tool to deal with that.
I've seen this happen all the time in non-malware situations, in what we might call broken-software situations, where your packaging software or your update app tinkers with mtimes.
I develop an app, httm, which prints the size, date and corresponding locations of available unique versions of files residing on snapshots. And -- this makes it quite effective at proving how often this can happen on Ubuntu/Debian:
> httm -n --dedup-by=contents /usr/bin/ounce | wc -l
3
> httm -n --dedup-by=metadata /usr/bin/ounce | wc -l
30
Or, since installing is such a pain, perhaps it's better to consider everything user files ;) ;)
Maybe someone else will have better luck than me.
You'd need to match compression settings on both ends. Using a different number of threads will change the result too, and it would probably change depending on the hardware.
The same would apply to encryption. You probably shouldn't be using the same encryption key on different filesystems anyway.
Or if you're using bcachefs with background compression, compression might not even happen till later.
If you want them to store the checksum of the POSIX object as an attribute (we can argue about performance later), great, but using the checksums intrinsic to the zfs technology to avoid bitflips directly is a bad call.
As you will note from my request and discussion, I'm perfectly willing to accept I might want something silly.
Would you care to explain why you think this feature is wrongheaded?
> using the checksums intrinsic to the zfs technology to avoid bitflips directly is a bad call.
You should read the discussion. I was requesting it for a different purpose, although this "rsync" issue is an alternative purpose. I wanted to compare file versions on snapshots, and also the live version, to find all unique file versions.
I have. I didn't need to. But I have.
And I agree with the experts there and here... If you're struggling to follow, I'm happy to explain in _great_ detail how you're off the mark. You have a nice idea, but it's unfortunately too naïve and is probably built on hearing "the filesystem stores checksums". Everything said about why this is a bad idea applies to btrfs too.
As I said, clear as day:
> If you want them to store the checksum of the POSIX object as an attribute...
This is what you _should_ be asking for. There are even ways of building this which _do_ recycle CPU cycles. But it's messy, oh god is it awkward, and by god it makes things so difficult to follow that the filesystem would suffer for the sake of this small feature.
If you're looking to store the checksum of the complete POSIX object _at write_, _as it's stored on disk_ for _that internal revision of the filesystem_ then it kinda by definition is turning into an extended POSIX attribute associated with that POSIX object.
Even if implemented, this is messy, as it needs to be revised and amended and checked and updated, and there will be multiple algorithms with different advantages/drawbacks.
I know because I work in a job where we replicate and distribute multiple 100s of PB globally. The only way this has been found to work and scale the way you want is to store the checksums alongside the data, either as additional POSIX objects on the filesystem, or in a db which is integrated and kept in sync with the filesystem itself.
People will and do burn a few extra cycles to avoid having unmaintainable extensions and pieces of code.
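For illustration, the simplest version of "checksums alongside the data" is just an extended attribute maintained by whatever tooling does the writing (a sketch with standard Linux tools; the attribute name is made up):

    # store a content hash next to the data, and read it back when verifying or replicating
    setfattr -n user.sha256 -v "$(sha256sum < somefile | cut -d' ' -f1)" somefile
    getfattr -n user.sha256 --only-values somefile

The hard part isn't storing it, it's keeping it in sync with the data, which is exactly the messiness described above.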
If you are worried about data within individual records changing, and about replicating/transmitting/storing record-level changes (which would be the article's main complaint about rsync), ZFS has this in send/recv.
Again, as is being stated elsewhere here:
If you're concerned about data integrity, handle it in the FS. If you're concerned about transfer integrity, handle it over the wire.
> Don't mix these up, it just leads to a painful view of the world.
TIL that btrfs's checksums are per block, not per file. There's a dump-csum command, but it doesn't seem likely to be very useful. https://unix.stackexchange.com/questions/191754/how-do-i-vie...
This deserves being in all caps.
> "ooh, bit rot" and other things where one of the files has actually become corrupted while "at rest" for whatever reason. Those observers are right!
Yep. This is why you verify your backups occasionally. And perhaps your local “static” resources too depending on your accident/attack threat models and “time to recovery” requirements in the event of something going wrong.
> the first time you do a forced-checksum run, --dry-run will let you see the changes before it blows anything away, so you can make the call as to which version is the right one!
That reads like someone who is not doing snapshot backups, or at least not doing backups in a way that protects past snapshots from being molested by the systems being backed up. This is a mistake, and not one rsync or any other tool can reliably save you from.
But yes, --dry-run is a good idea before any config change, or just generally in combination with a checksum-based run as part of a sanity-check procedure. (Though as rsync is my backup tool, I prefer to verify my backups with a different method: checksums generated by another tool, or a direct comparison between snapshots/current by another tool, just in case a bug in rsync is causing an issue that verification using rsync cannot detect because the bug affects that verification in the same way.)
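For example, a cross-check that doesn't rely on rsync at all can be as simple as hashing both sides with an independent tool (a sketch; the paths are placeholders):

    (cd /data && find . -type f -print0 | sort -z | xargs -0 sha256sum) > /tmp/live.sum
    (cd /backup/latest && find . -type f -print0 | sort -z | xargs -0 sha256sum) > /tmp/snap.sum
    diff /tmp/live.sum /tmp/snap.sum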
I find "-apHAX" a sane default for most use and memorable enough. (think wardriving)
Very common contextuals:
-c (as mentioned, when you care about integrity)
-n (dryrun)
-v (verbose)
-z (compress when remoting)
Where it applies I usually do the actual syncing without -c and then follow up with -cnv to see if anything is off.
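Roughly (host and paths are placeholders):

    rsync -apHAX /srcdir/ host:/dstdir/       # the actual sync, default quick-check
    rsync -apHAXcnv /srcdir/ host:/dstdir/    # checksum dry-run afterwards to see if anything is off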