The Timeline of Failure:
1. We performed a Postgres version upgrade on our instance.
2. For an unknown reason, the upgrade triggered an unexpected downgrade of our provisioned disk size.
3. We ran a standard reindex: REINDEX DATABASE postgres; Because disk space was already severely limited by the bug in step 2, the instance ran out of space entirely.
4. The out-of-space event wiped the entire database.
5. We immediately attempted a Point-in-Time Recovery (PITR), but the restore keeps failing on Supabase's end.
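For anyone else reading this thread: REINDEX builds the replacement index in full before dropping the old one, so it transiently needs extra free space roughly equal to the size of the indexes being rebuilt. A minimal preflight sketch of that logic, in Python. This is illustrative only: reindex_is_safe is a hypothetical helper, the 1.5x margin and the 2 GiB figure are made-up numbers, and on a managed platform like Supabase you cannot run this on the database host anyway (you would check free space and index sizes from the dashboard and pg_indexes_size instead).

```python
import shutil

def reindex_is_safe(data_dir: str, index_bytes: int, margin: float = 1.5) -> bool:
    """Return True only if free space on the volume holding data_dir
    comfortably exceeds the transient space REINDEX needs, which is
    roughly the total size of the indexes being rebuilt."""
    free = shutil.disk_usage(data_dir).free
    return free > index_bytes * margin

if __name__ == "__main__":
    # Hypothetical figure for illustration: 2 GiB of total index footprint.
    total_index_bytes = 2 * 1024**3
    if reindex_is_safe("/", total_index_bytes):
        print("Headroom looks sufficient for REINDEX.")
    else:
        print("Not enough headroom; grow the disk before running REINDEX.")
```

Had a check like this (or the platform's own guardrail) refused the REINDEX, step 3 would have failed safely instead of filling the volume.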
Our project is now completely inaccessible.
We have an open critical support ticket (#SU-342355), posted in GitHub Discussions, and reached out on X, but have received zero response from a human.
If @kiwicopple, @antwilson, or any Supabase infra engineers are reading this: please do not delete the underlying AWS EBS volume. We need an engineer to manually mount the volume and extract the WAL or raw data pages before the blocks are overwritten.
Any advice from the community on escalating this further is appreciated.