From what I can tell, this doesn't replace the lower-level page heap storage, but instead provides a new implementation of tables and indexes through plugin mechanisms called table access methods and index access methods, respectively. This looks very similar to the virtual table mechanism in SQLite, for example, but the C API looks much nicer.
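For a rough sense of the shape of that plugin surface, here's a conceptual sketch in Rust. To be clear, this is a paraphrase, not the actual API: the real interface is a C struct of function pointers (TableAmRoutine) returned by a handler function, and the callback names below only loosely mirror its fields.

```rust
// Conceptual paraphrase of Postgres's table access method interface.
// The real thing is a C struct of function pointers (TableAmRoutine)
// returned by a handler function and registered in SQL with:
//   CREATE ACCESS METHOD myam TYPE TABLE HANDLER my_handler;
// Placeholder types standing in for Postgres's tuple-slot/TID machinery.
struct Tuple(Vec<u8>);
struct TupleId(u64);

trait TableAccessMethod {
    type ScanState;

    // Sequential scans: the executor drives these to read tuples.
    fn scan_begin(&self, relation_oid: u32) -> Self::ScanState;
    fn scan_getnextslot(&self, scan: &mut Self::ScanState) -> Option<Tuple>;
    fn scan_end(&self, scan: Self::ScanState);

    // DML: the executor calls these to modify the table's storage.
    fn tuple_insert(&self, relation_oid: u32, tuple: Tuple);
    fn tuple_delete(&self, relation_oid: u32, tid: TupleId);
    fn tuple_update(&self, relation_oid: u32, tid: TupleId, new_tuple: Tuple);

    // Point lookup by tuple identifier, used when an index scan
    // needs to fetch the row an index entry points at.
    fn fetch_row_version(&self, relation_oid: u32, tid: TupleId) -> Option<Tuple>;
}
```

The point is that a plugin supplies the full storage and retrieval behavior behind a table, which is why something like FDB-backed tables is possible at all.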
I've not paid much attention to the Postgres extension API, but I'm pleasantly surprised it's that flexible. I've been hearing for years about how Postgres's pluggable engine interface isn't flexible enough to implement certain features, but it actually looks really rich. Maybe some of those improvements come from recent work by the OrioleDB people, and from others like Citus who develop alternative table engines?
These seem contradictory. If the data is stored in FoundationDB, then it won't be stored in the filesystem as blocks, right?
For example, mvsqlite implements a SQLite VFS that maps each page to a FoundationDB key. Once the VFS is in place, everything lives in FDB. But it also means you have to deal with page-level conflicts if you want multiple writers to mount the same database, so it's a somewhat coarse-grained system.
What this project does instead is represent each row (tuple) as an FDB key/value pair, giving you finer-grained control. But that comes at the cost of having to implement all the access methods, including index scans, in terms of FDB.
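The difference in granularity shows up directly in how the keys would be laid out. Here's a hypothetical sketch; the key layouts are invented for illustration and aren't taken from either mvsqlite or pgfdb:

```rust
// Illustrative key layouts only, contrasting the two granularities.

/// Page-level mapping (mvsqlite-style): one FDB key per SQLite page.
/// Two writers touching different rows on the same page still conflict,
/// because FDB's conflict detection sees only the page key.
fn page_key(db_prefix: &[u8], page_no: u32) -> Vec<u8> {
    let mut key = db_prefix.to_vec();
    key.extend_from_slice(b"/page/");
    // Big-endian keeps FDB's ordered range scans in page order.
    key.extend_from_slice(&page_no.to_be_bytes());
    key
}

/// Tuple-level mapping (pgfdb-style): one FDB key per row, so conflict
/// detection operates per tuple rather than per page.
fn tuple_key(table_prefix: &[u8], primary_key: &[u8]) -> Vec<u8> {
    let mut key = table_prefix.to_vec();
    key.extend_from_slice(b"/tuple/");
    key.extend_from_slice(primary_key);
    key
}
```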
This is presumably why database metadata (DDL) isn't currently persisted, because those structures aren't normal tables.
That's absolutely right! mvsqlite is a cool project, but in a way it doesn't make good use of FoundationDB, which I find a bit unfortunate given what a nice piece of software it is!
> This is presumably why database metadata (DDL) isn't currently persisted, because those structures aren't normal tables.
Yes, although perhaps a fun fact: all database metadata in Postgres is actually stored as standard tables, the so-called system catalog: https://www.postgresql.org/docs/current/catalogs.html.
I'm still mulling over how to implement DDL persistence but one possible way would be to change the actual system catalog tables to be backed by FDB instead, and rely on some cache on the nodes to avoid round tripping to FDB to get metadata for each query.
As you can imagine, though, the system catalog is quite deeply intertwined with Postgres as a whole, so it remains to be seen if this is even doable. The alternative would be a more complicated design where the data is stored in some custom format in FDB and then synced by each node into the system catalog.
With some things, like statistics, that's maybe not a big problem, but with things like dropping columns from tables, you could end up in a situation where a query sees the old table definition.
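One possible way to avoid that stale-definition hazard, sketched below with entirely hypothetical names (this is not pgfdb's design): keep a single schema-version key in FDB, read it inside each query's transaction, and invalidate cached catalog entries whenever it changes.

```rust
use std::collections::HashMap;

// Hypothetical node-local catalog cache validated by a schema-version
// key stored in FDB. Illustration only.
struct CatalogCache {
    cached_version: u64,
    // table name -> serialized table definition
    tables: HashMap<String, Vec<u8>>,
}

impl CatalogCache {
    /// Called at the start of each query. `current_version` is read from a
    /// well-known FDB key inside the same transaction as the query itself,
    /// so the metadata check serializes with the data reads.
    fn table_def(
        &mut self,
        current_version: u64,
        table: &str,
        load_from_fdb: impl Fn(&str) -> Vec<u8>,
    ) -> &Vec<u8> {
        if current_version != self.cached_version {
            // Schema changed (e.g. a column was dropped): throw away
            // everything cached under the old version.
            self.tables.clear();
            self.cached_version = current_version;
        }
        self.tables
            .entry(table.to_string())
            .or_insert_with(|| load_from_fdb(table))
    }
}
```

Because the version read participates in the query's FDB transaction, a query can never see data from the new schema while using the old definition.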
It will probably end up being something similar to online schema changes in Cockroach where the changes that can be made are limited for safety and run as a background job that can be tracked: https://www.cockroachlabs.com/docs/stable/online-schema-chan...
I've got a bit of experience with schema changes from building Reshape: https://github.com/fabianlindfors/reshape. I'm hoping to transfer over some concepts from there into pgfdb eventually!
So this is wrong. The heap storage is replaced.
The author replied to this thread and confirmed this.
I was also pleasantly surprised by this! I've actually been working on this project for more than a year now with a few false starts trying to find the "right" way to integrate with Postgres that fits well enough with FoundationDB.
Yes! There is a lot of ongoing work on Postgres extensibility and it keeps getting better. The ecosystem is really amazing. I'm for example excited about work being done by Enterprise DB to make custom index access methods more generically pluggable, which would allow them to be used for primary keys amongst other things: https://www.postgresql.org/message-id/flat/E72EAA49-354D-4C2...
You don't need to recompile it, actually! It's enough to compile your extension, and Postgres can then dynamically load it, which makes for a perfectly good feedback loop. pgfdb is built with pgrx: https://github.com/pgcentralfoundation/pgrx, a framework for building Postgres extensions in Rust that handles all the compiling and linking for you. Highly recommend that if you want to try writing extensions and are also a fan of Rust!
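For anyone curious, the scaffold that `cargo pgrx new` generates is roughly this minimal (module naming details aside):

```rust
use pgrx::prelude::*;

// Required once per extension: lets Postgres verify ABI compatibility
// when it dynamically loads the compiled shared library.
pgrx::pg_module_magic!();

// Exposed to SQL as a regular function; after CREATE EXTENSION you
// can call it with `SELECT hello();` from psql.
#[pg_extern]
fn hello() -> &'static str {
    "Hello from a Rust extension, no Postgres recompile needed!"
}
```

Running `cargo pgrx run` then builds the shared library, installs it into a pgrx-managed Postgres, and drops you into psql, which is exactly the fast feedback loop described above.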
> Also, how do the deployment semantics of FDB affect this. If I remember correctly, you typically run one FDB process per CPU on a machine. How do PG processes map to those
You would run the two independently: PG processes run and scale separately from FDB processes. Postgres spawns a new process for each connection, so you would not want to map Postgres itself 1:1 with CPUs!