Does anyone know when it will come out of beta?
https://github.com/restic/restic
https://github.com/restic/rest-server
which has to be started with --append-only. I use this systemd unit:
[Unit]
Description=restic REST server (append-only)
# Wants= actually pulls in network-online.target; After= alone only orders against it
Wants=network-online.target
After=network-online.target

[Service]
User=restic
WorkingDirectory=/mnt/backups
# --append-only rejects delete requests; --private-repos restricts each user to their own repo
ExecStart=/usr/local/bin/rest-server --path /mnt/backups --append-only --private-repos
Restart=on-failure
# hardening: everything except /mnt/backups is read-only for this service
ProtectSystem=strict
ReadWritePaths=/mnt/backups

[Install]
WantedBy=multi-user.target
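Assuming the unit is saved as /etc/systemd/system/rest-server.service (the filename is just an example), it's enabled the usual way:

systemctl daemon-reload
systemctl enable --now rest-server.service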
I also use nginx with HTTPS + HTTP authentication in front of it, with a separate username/password combination for each server. This makes rest-server completely inaccessible to the rest of the internet, so you don't have to trust it to be properly protected against being hammered by malicious traffic. I've been using this for about five years; it saved my bacon a few times, no problems so far.
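For illustration, a minimal sketch of that kind of nginx front-end. The hostname, certificate paths and htpasswd file are made up, and 127.0.0.1:8000 assumes rest-server's default listen address:

server {
    listen 443 ssl;
    server_name backup.example.com;

    ssl_certificate     /etc/ssl/private/backup.example.com.crt;
    ssl_certificate_key /etc/ssl/private/backup.example.com.key;

    # one username/password per client machine, e.g. managed with htpasswd
    auth_basic           "restic backups";
    auth_basic_user_file /etc/nginx/htpasswd-restic;

    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}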
rclone serve restic --stdio
You add something like this to ~/.ssh/authorized_keys:

restrict,command="rclone serve restic --stdio --append-only backups/my-restic-repo" ssh-rsa ...

... and then run a command like this:

ssh user@rsync.net rclone serve restic --stdio ...
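Since the forced command already pins both the rclone invocation and the repository path, on the client it can be enough to just point restic's rclone backend at the SSH connection. A sketch (the user/host are examples; rclone.program is restic's extended option for the rclone backend):

restic -o rclone.program="ssh user@rsync.net" -r rclone: snapshots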
We just started deploying this on rsync.net servers - which is to say, we maintain an arguments allowlist for every binary you can execute here and we never allowed 'rclone serve' ... but now we do, IFF it is accompanied by --stdio.

There used to be append-only; they've removed it and suggest using a credential that has no 'delete' permission. The question asked here is whether this would protect against data being overwritten instead of deleted.
Borg has the issue that it is in limbo, i.e. all the new features (including object storage support) are in Borg2, but there's no clear date for when that will be stable. I also didn't like that it's written in Python, because backups are not always I/O-bound (we have some very large directories, etc.).
I really liked borgmatic on top of Borg, but we found resticprofile, which is pretty much the same thing (and underdiscussed). One tip from our testing: it's important to set the GOGC and read-concurrency parameters. All the GUIs are very ugly, but we're fine with a CLI.
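As a rough sketch of where those knobs live in a resticprofile config (the repository URL, paths and numbers are placeholders, tune them for your machine):

version = "1"

[default]
repository    = "rest:https://backup.example.com/myrepo"
password-file = "password.txt"

[default.env]
# more aggressive Go garbage collection: lower peak memory at the cost of some CPU
GOGC = "50"

[default.backup]
source           = ["/home"]
# maps to restic's --read-concurrency flag
read-concurrency = 2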
If rustic matures enough and is worth a switch we might consider it.
They are so similar in features. How do they compare? Which to choose?
All three have a lot of commands for working with repositories. Each of them is much better than the closed-source proprietary backup software I've dealt with, like Synology's Hyper Backup nonsense.
If you want a better solution, the next level is ZFS.
The fact that Kopia has a UI is awesome for non-technical users.
I migrated off restic to Kopia because of its memory usage. I'm currently debating switching back to restic purely because of how retention works.
I was setting up PCs for unsophisticated users who needed to be able to do their own restores. Most OSS choices are only appropriate for technical users, and some like Borg are *nix-only.
It wasn't perfect, but it did protect against some scenarios in which a device could be majorly messed up, yet the server was more resistant to losing the data.
For work, the backup schemes include separate additional protection of the data server or media, so append-only on top of that would be nice as redundant protection, but it isn't strictly necessary.
This should be simpler still:
My low-value backups go onto a cheap USB HDD from Best Buy.
Google Cloud Storage's Archive tier is a tiny bit more.
I don't see what value this provides that rsync, tar and `aws s3 cp` (or the AWS SDK equivalent) don't.
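i.e. roughly this kind of pipeline (the bucket and paths are made up):

tar czf - /home | aws s3 cp - s3://my-backup-bucket/home-$(date +%F).tar.gz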
I even wrote Python scripts to automatically clean up and unmount if something goes wrong (not enough space, etc.). On OpenBSD I can even double-encrypt with Blowfish (vnconfig -K) and then a different algorithm for bioctl.
Every once in a while things get thinned out, so that for example I have daily backups for the recent past, but only monthly and eventually yearly ones further back.
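For comparison, restic expresses that kind of thinning as a forget policy (the numbers are just an example):

restic forget --keep-daily 7 --keep-monthly 12 --keep-yearly 5 --prune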
ajb•2h ago
I guess some people might have been relying on this feature of borgbackup to implement that requirement.
philsnow•2h ago
Are you talking about using ZFS snapshots on the remote backup target? Trying to solve the same problem with local snapshots wouldn't work because the attack presumes that the device that's sending the backups is compromised.
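Concretely, that would be something like this on the backup server itself (the dataset name is made up), run from cron or a systemd timer so the client never gets to touch the snapshots:

# runs on the backup target, not on the (potentially compromised) client
zfs snapshot tank/backups@$(date +%Y-%m-%d)
zfs list -t snapshot tank/backups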
LeoPanthera•2h ago
Yes.
globular-toast•1h ago
I'd be interested to know what others have set up, as I'm not really happy with how I do it. I have ZFS on my NAS running locally. I back up to that from my PC via rsync, triggered daily by anacron. From my NAS I use rclone to send encrypted backups to Backblaze.
I'd be happier with something more frequent from PC to NAS. Syncthing maybe? Then just do a ZFS send to some off-site ZFS server.
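An incremental send to the off-site box could look roughly like this (pool, dataset and host names are placeholders):

# take a new snapshot and ship only the delta since the previous one
zfs snapshot tank/data@today
zfs send -i tank/data@yesterday tank/data@today | ssh offsite-host zfs receive -u backup/tank-data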
aeadio•32m ago