I also recommend pg_repack[2] to remove bloat from tables on a live system and reclaim disk space. It has saved me a great deal of space.
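For anyone who hasn't used it, a typical run looks roughly like this (the database and table names are made up; check the flags against your installed version first):

    # the extension has to exist in the target database once
    psql -d mydb -c "CREATE EXTENSION IF NOT EXISTS pg_repack;"

    # rewrite one bloated table online; it only takes short exclusive locks
    # at the start and end of the rebuild
    pg_repack --dbname=mydb --table=big_table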
Do you export the data with this and then import it into the other DB, or do you work from existing Postgres backups?
If things go truly south, just hope you have a read replica you can promote to be your new master. Most SLAs are not written with 72h+ of downtime in mind. Have you tried the nuclear recovery plan, from scratch? Does it work?
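For the replica-promotion part, the mechanics are roughly the following (the data directory path is illustrative, and pg_promote() needs Postgres 12+):

    # on the standby you're turning into the new primary
    pg_ctl promote -D /var/lib/postgresql/16/main

    # or from a SQL session on the standby (Postgres 12+)
    psql -c "SELECT pg_promote();"

The "from scratch" drill is the part people skip: base backup plus WAL replay onto a blank machine, timed, so you know whether the number fits inside your SLA.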
SoftTalker•6h ago
And, of course, your disaster recovery plan is incomplete until you've tested it (at scale). You don't want to be looking up Postgres documentation when you need to restore from a cold backup; you want to be following the checklist in your recovery plan that you've already verified.
nijave•5h ago
pg_restore will handle roles, indexes, etc., assuming you didn't switch the flags around to disable them.
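Roughly what that looks like with a custom-format dump (names and paths invented); one nuance is that roles and other globals live outside a single-database dump, in pg_dumpall --globals-only:

    # dump one database in custom format, then restore schema, data and indexes in parallel
    pg_dump -Fc -d mydb -f mydb.dump
    pg_restore -d postgres --create -j 4 mydb.dump

    # globals (roles, tablespaces) are dumped separately
    pg_dumpall --globals-only > globals.sql

    # flags like --schema-only, --data-only or --no-owner are what would skip pieces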
If you're on EC2, hopefully you're using disk snapshots and WAL archiving.
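The WAL-archiving half is just a couple of settings in postgresql.conf; here's a sketch assuming wal-g pushing to S3 (the tool choice is only an example, and wal-g reads its bucket config from environment variables such as WALG_S3_PREFIX):

    # postgresql.conf on the primary
    archive_mode = on
    archive_command = 'wal-g wal-push %p'

    # on an instance rebuilt from a base backup / disk snapshot,
    # point-in-time recovery pulls WAL back with:
    restore_command = 'wal-g wal-fetch %f %p'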
anonymars•3h ago
High availability is different from disaster recovery
WJW•4h ago
Assuming you mean that range to start at 100GB, I've worked with databases that size multiple times, but as a freelancer it has definitely not been "most" of the businesses I've seen that fall in that range.
zie•2h ago
We encourage staff to play with both, and they can play with impunity since it's a copy that will get replaced soon-ish.
This makes it important that both work reliably, and it also means we find out quickly when our backups stop working.
We haven't had a disaster recovery situation yet (hopefully never), but I feel fairly confident that getting the DB back shouldn't be a big deal.
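As a sketch of the kind of refresh-and-check loop that implies (the host, database, and sanity query are all invented), hook the failure branch into whatever alerting you already have:

    # nightly: rebuild the scratch copy staff play with from the latest dump
    pg_restore -h scratch-db -d postgres --create --clean --if-exists -j 4 latest.dump

    # fail loudly if the restore came back empty
    rows=$(psql -h scratch-db -d mydb -tAc "SELECT count(*) FROM orders;")
    [ "${rows:-0}" -gt 0 ] || echo "backup restore check FAILED for mydb"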