One thing that worked for me, with one particularly big database, was to use borg instead of restic. Most of the database was historical data that rarely changes, so each mysqldump file is almost identical to the previous one apart from the new and modified rows. That is where borg's deduplication and compression shine: the new dump shares most of its blocks with the old one, so I could keep several days of backups with little extra space used in the borg repository.
And I was able to rclone that borg repository to S3 Intelligent-Tiering, so I could keep long-term backups cheaply: most of the data transparently ends up in Glacier-class archive storage.
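A minimal sketch of that flow, assuming a local borg repository, a database named `bigdb`, and a preconfigured rclone remote called `s3remote` (all hypothetical names, not from the original comment):

```shell
#!/usr/bin/env bash
# Sketch: dedup-friendly MySQL backups with borg, mirrored to S3.
REPO=/backups/borg-repo
DB=bigdb

if command -v borg >/dev/null 2>&1; then
    # One-time: initialize the repository.
    borg init --encryption=repokey "$REPO" 2>/dev/null || true

    # Nightly: stream the dump straight into borg ("-" reads stdin).
    # Because most of the dump is byte-identical to yesterday's, borg's
    # content-defined chunking stores only the changed blocks.
    mysqldump --single-transaction "$DB" \
      | borg create --compression zstd "$REPO::${DB}-{now:%Y-%m-%d}" -

    # Keep a rolling window; pruning frees only chunks no archive shares.
    borg prune --keep-daily 14 --keep-weekly 8 "$REPO"
fi

if command -v rclone >/dev/null 2>&1; then
    # Mirror the repo to S3; INTELLIGENT_TIERING migrates cold objects
    # toward archive tiers on its own.
    rclone sync "$REPO" s3remote:my-backup-bucket/borg-repo \
      --s3-storage-class INTELLIGENT_TIERING
fi
```

Syncing the borg repository (rather than the raw dumps) is what makes the S3 side cheap: only the new chunk files change between runs, so most objects go cold and get tiered down.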
Of course, it is not a general solution, but knowing what the data is and how it changes may let you take more efficient approaches than what is usually recommended.
gmuslera•20m ago