Looking at those benchmarks, I think you must be using a local disk to sync writes before uploading to S3?
I’m kinda surprised someone hasn’t integrated the buildbarn NFSv4 stuff [1] into Docker/Podman - virtiofs is pretty bad on macOS, and buildbarn’s NFSv4.0 implementation is a big improvement over NFSv3.
Anyhow I digress. Can’t wait to take it for a spin.
[0] https://github.com/Barre/zerofs_nfsserve
[1] https://github.com/buildbarn/bb-remote-execution/tree/master...
Questions:
- I see there is a diagram showing multiple Postgres nodes backed by this store, very similar to horizontally scaled web servers. Doesn't Postgres use WAL replication? Or is that disabled, with the nodes running on the same "view" of the filesystem?
- What does this mean for services that handle geo-distribution at the application layer, e.g. CockroachDB?
Sorry if this sounds dumb.
I am in no way affiliated with JuiceFS, but I have done a lot of benchmarking and testing of it, and the numbers claimed here for JuiceFS are suspicious (basically 5 ops/second with everything mostly broken).
What is the durability model? The docs don't mention any intermediate storage. SlateDB does confirm writes to S3 by default, but I assume that's not what's happening here?
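To make the question concrete, here's roughly what I mean in POSIX terms (a minimal sketch; /mnt/zerofs is just a placeholder for wherever the ZeroFS export happens to be mounted, not a path from the docs): when fsync() returns, is the data on S3, on an intermediate local disk, or only in the server's memory?

    /* Minimal sketch of the durability question; /mnt/zerofs is a
       placeholder, not a path from the ZeroFS docs. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/mnt/zerofs/probe", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        const char buf[] = "durable yet?";
        if (write(fd, buf, sizeof buf - 1) < 0) { perror("write"); return 1; }

        /* Over NFS, an fsync() drives a COMMIT to the server; the question
           is what the server requires before acknowledging it: an S3 PUT,
           a local-disk flush, or nothing at all. */
        if (fsync(fd) < 0) { perror("fsync"); return 1; }

        close(fd);
        return 0;
    }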
jauntywundrkind•6h ago
Incredible performance figures, rocketing to probably the best way to use object storage in an FS-like way. There's a whole series of comparisons, and they probably need a logarithmic scale given the size of the lead ZeroFS has! https://www.zerofs.net/zerofs-vs-juicefs
It speaks 9P, NFS, and NBD. There are some great demos of ZFS with L2ARC caches giving near-local performance while keeping S3 persistence.
Totally what I was thinking of when someone in the Immich thread mentioned wanting a way to run it on cheap object storage. https://news.ycombinator.com/item?id=45169036