Most alternative PG storage engines have stumbled, and OrioleDB touches a lot of core surfaces. The sensible order is: first make OrioleDB rock-solid and bug-free; ( https://github.com/orioledb/orioledb/issues?q=is%3Aissue%20s... ) then, using real-world feedback and perf data, refactor and carve out patches that are upstream-ready. That’s a big lift for both the OrioleDB team and core PG.
From what I understand, they’re aiming for a release candidate in the next ~6 months; meaningful upstreaming would come after that.
In other words: make it work --> validate the plan --> upstream the final patches.
I love PostgreSQL, but this just makes it utterly annoying and discouraging to use…
The issue with PostgreSQL, which the Debian packages handle fine but the Docker Hub image does not, is that upgrading across major versions requires the executables for both the old and the new version. This just works with Debian packages but not with the Docker image.
They do handle minor version upgrades, so the code for handling upgrades is there, but the devs seem to be quite adamant against adding major version upgrades. I (well, a lot of people, judging from the votes and comments in https://github.com/docker-library/postgres/issues/37 and the stars on https://github.com/tianon/docker-postgres-upgrade) would love that, and supporting only upgrades between consecutive major versions would be more than enough…
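For reference, this is roughly what a major-version upgrade needs under the hood, and why both sets of executables have to be present at once. This is only a sketch with Debian-style placeholder paths and example version numbers, not something from the thread (on Debian this is wrapped by pg_upgradecluster):

# Hypothetical example: upgrading a cluster from 17 to 18 with pg_upgrade.
# Both major versions must be installed side by side; the paths are
# Debian-style placeholders and will differ in a container image.
/usr/lib/postgresql/18/bin/pg_upgrade \
--old-bindir=/usr/lib/postgresql/17/bin \
--new-bindir=/usr/lib/postgresql/18/bin \
--old-datadir=/var/lib/postgresql/17/main \
--new-datadir=/var/lib/postgresql/18/main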
Minor versions of PostgreSQL have constraints that major versions don't: in principle, minor releases don't add new features. This allows minor versions of the same major release to run against the same data directory without modifications.
However.
Major versions do add new features, at the cost of changes to internals that show up in things like the catalog layout. Those changes affect how on-disk data is interpreted, so the data directory is incompatible between major versions and, unlike with minor releases, requires specialized upgrade handling.
The underlying cause is how PostgreSQL has implemented the system catalog.
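You can see this versioning directly: each data directory is stamped with a catalog version number, and a server will refuse to start against a directory whose stamp doesn't match its own. A rough illustration (the data directory path is a placeholder, not taken from the thread):

# Show the catalog version a data directory was initialized with.
pg_controldata /var/lib/postgresql/data | grep -i 'catalog version'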
I wish Docker would stop calling their unofficial Postgres images “official”. :-/ (They're “official Docker”, but not “official Postgres”. The naming is deeply misleading for everyone who is not a Docker expert.)
Edit: Oh, for reference, I'm not using docker, just an AWS EC2 Amazon Linux instance, with a mirror on an Alpine Linux server. PostgreSQL installed via package repo, very little custom configuration.
Once the latest pgautoupgrade image is available, you can do it with this:
docker run --rm \
-v YOUR_DATA_VOLUME:/var/lib/postgresql/data \
-e POSTGRES_USER=USER \
-e POSTGRES_PASSWORD=PW \
-e POSTGRES_DB=DB \
-e PGAUTO_ONESHOT=yes \
pgautoupgrade/pgautoupgrade:18-bookworm
I recommend copying the volume first, so that if any mistakes occur you can roll back.
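One way to take that copy (a sketch; the volume names are placeholders, not from the thread) is to clone the data volume into a fresh backup volume before running the upgrade:

# Clone the data volume into a backup volume; mount the source read-only.
docker volume create YOUR_DATA_VOLUME_backup
docker run --rm \
-v YOUR_DATA_VOLUME:/from:ro \
-v YOUR_DATA_VOLUME_backup:/to \
alpine sh -c 'cp -a /from/. /to/'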
mattashii•4mo ago
Personally, I'm very happy to see parallel builds for GIN indexes get released - (re)index time for those indexes is always a pain. I'm looking forward to further improvements on that front, as there is still some relatively low-hanging fruit that could improve build times even more.
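For anyone who wants to try it, a rough sketch (the database, table, and column names are placeholders; max_parallel_maintenance_workers is the usual knob for parallel index builds, and I'm assuming GIN builds honor it the same way B-tree builds do):

# Hypothetical example: build a GIN index with parallel maintenance workers.
psql -d DB -c "SET max_parallel_maintenance_workers = 4; CREATE INDEX docs_body_gin ON docs USING gin (to_tsvector('english', body));"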