For example, it doesn't really make sense that "92% of data modification operations" would fail on JuiceFS, which makes me question a lot of the methodology in these tests.
The benchmark suite is trivial and open-source [1].
Is performing benchmarks considered “putting down” a project these days?
If you believe the benchmarks are unfair to JuiceFS for one reason or another, please put up a PR with a better methodology or corrected numbers. I’d happily merge it.
EDIT: From your profile, it seems like you are running a VC-backed competitor; it would be fair to mention that…
I don't want to see the cloud storage sector turn as bitter as the cloud database sector.
I've previously looked through the benchmarking code, and I still have some serious concerns about the way that you're presenting things on your page.
The actual code being benchmarked is trivial and open-source, but I don't see the JuiceFS setup itself anywhere in the ZeroFS repository. That means the self-published results don't seem to be reproducible by anyone looking to externally validate the stated claims in more detail. Given the very large performance differences, I have a hard time believing it's a true apples-to-apples, production-quality setup. It seems much more likely that some simple tuning is needed to make the two comparable, in which case the takeaway may be that JuiceFS has fiddlier configuration without well-rounded defaults, not that it's actually hundreds of times slower when properly tuned for the workload.
(That said, I'd love to be wrong and confidently discover that ZeroFS is indeed that much faster!)
JuiceFS scales out horizontally, since each individual client writes/reads directly to/from S3; as long as the metadata engine keeps up, it has essentially unlimited bandwidth across many compute nodes.
But as the benchmark shows, it is fiddly, especially for workloads with many small files, and it is pretty wasteful in terms of S3 operations, which at the largest scales carries meaningful cost (rough numbers below).
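To put rough numbers on the request-cost point (assuming something like S3 standard pricing in us-east-1, about $0.005 per 1,000 PUTs and $0.0004 per 1,000 GETs; this ignores storage, transfer, and whatever chunking/metadata traffic the filesystem adds on top):

```python
# Back-of-envelope S3 request cost for a small-file-heavy workload.
# Prices are assumptions (roughly S3 standard, us-east-1); adjust for your region/tier.
PUT_PRICE_PER_1K = 0.005   # USD per 1,000 PUT/COPY/POST/LIST requests
GET_PRICE_PER_1K = 0.0004  # USD per 1,000 GET requests

def request_cost(n_files: int, puts_per_file: int = 1, gets_per_file: int = 1) -> float:
    """Cost of the request volume alone, ignoring storage and data transfer."""
    puts = n_files * puts_per_file / 1000 * PUT_PRICE_PER_1K
    gets = n_files * gets_per_file / 1000 * GET_PRICE_PER_1K
    return puts + gets

# Writing and then reading back 10 million small files once each:
print(f"${request_cost(10_000_000):,.2f}")                   # ~$54 in requests alone
# If the filesystem turns each logical write into, say, 3 S3 operations:
print(f"${request_cost(10_000_000, puts_per_file=3):,.2f}")  # ~$154
```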
I think both have their place at the moment. But the space of "advanced S3-backed filesystems" is... advancing these days.
> The AGPL license is suitable for open source projects, while commercial licenses are available for organizations requiring different terms.
I was a bit unclear on where the AGPL's network-interaction clause draws its boundaries: so the commercial license would only be needed for closed-source modifications/forks, or for statically linking the ZeroFS crate into a larger proprietary Rust program, is that roughly it?
Indeed.
If I were a company, I know which one I'd prefer.
> The Sprite storage stack is organized around the JuiceFS model (in fact, we currently use a very hacked-up JuiceFS, with a rewritten SQLite metadata backend). It works by splitting storage into data (“chunks”) and metadata (a map of where the “chunks” are). Data chunks live on object stores; metadata lives in fast local storage. In our case, that metadata store is kept durable with Litestream. Nothing depends on local storage.
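For readers who haven't seen the JuiceFS-style split before, here is a minimal sketch of the idea (names and chunk size are purely illustrative; real systems add slices, caching, and transactional metadata on top):

```python
# Illustrative sketch of the "data chunks + metadata map" split described above.
# All names and the chunk size are made up; JuiceFS/Sprite/ZeroFS each have their
# own formats and keep the map in a real database rather than a dict.
from __future__ import annotations
from dataclasses import dataclass, field

CHUNK_SIZE = 4 * 1024 * 1024  # assumed fixed chunk size

@dataclass
class Metadata:
    # (inode, chunk index) -> object-store key; in the quote above, this map
    # lives in SQLite and is kept durable with Litestream.
    chunk_map: dict[tuple[int, int], str] = field(default_factory=dict)

    def record_write(self, inode: int, offset: int, key: str) -> None:
        self.chunk_map[(inode, offset // CHUNK_SIZE)] = key

    def locate(self, inode: int, offset: int) -> tuple[str | None, int]:
        """Map a file offset to (object key, offset within that chunk)."""
        index, within = divmod(offset, CHUNK_SIZE)
        return self.chunk_map.get((inode, index)), within

# The chunk bytes themselves would be PUT to the object store under the recorded
# keys; only this small map (plus file attributes, not shown) is "metadata".
meta = Metadata()
meta.record_write(inode=42, offset=0, key="chunks/42/0000")
print(meta.locate(42, 100))  # ('chunks/42/0000', 100)
```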
* poor locking support (this sounds like it works better)
* it's slow
* no manual fence support; a bad but common way of distributing workloads is e.g. to compile a test on one machine (on an NFS mount), and then use SLURM or SGE to run the test on other machines. You use NFS to let the other machines access the data... and this works... except that you either have to disable write caches or have horrible hacks to make the output of the first machine visible to the others. What you really want is a manual fence: "make all changes to this directory visible on the server"
* The bloody .nfs000000 files. I think this might be fixed by NFSv4 but it seems like nobody actually uses that. (Not helped by the fact that CentOS 7 is considered "modern" to EDA people.)
Unfortunately, NFSv4 also has the silly rename semantics...
The meta store is a bottleneck too. For a shared mount, you've got a bunch of clients sharing a metadata store that lives in the cloud somewhere. They do a lot of aggressive metadata caching. It's still surprisingly slow at times.
I want to go ahead and nominate this for the understatement of the year. I expect that 2026 is going to be filled with people finding this out the hard way as they pivot towards FUSE for agents.
When we tried it at Krea we ended up moving on, because we couldn't get sufficient performance to train on, and having to choose which datacenter to deploy our metadata store in essentially forced us to use it in only one location at a time.
The TL;DR relevant to your comment is: we tore out a lot of the metadata stuff, and our metadata storage is SQLite + Litestream.io, which gives us fast local reads/writes and enough system-wide atomicity (all atomicity in our setting runs asymptotically against "someone could just cut the power at any moment"), while preserving "durably stored to object storage".
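For anyone curious about the mechanics: the one hard requirement Litestream imposes is that the database run in WAL mode, since it works by shipping WAL frames to object storage. A minimal, purely illustrative sketch of that setup follows (the schema and names are made up, not anyone's actual code; Litestream itself runs as a separate process pointed at the same file):

```python
# Minimal illustrative setup for a SQLite database that Litestream can replicate.
# Litestream runs as a sidecar process watching this file and shipping its WAL
# to an object store; nothing Litestream-specific appears in the application code.
import sqlite3

conn = sqlite3.connect("metadata.db")
# WAL mode is required by Litestream (it replicates WAL frames).
conn.execute("PRAGMA journal_mode=WAL")
# NORMAL is the usual pairing with WAL: fsync at checkpoints rather than every commit.
conn.execute("PRAGMA synchronous=NORMAL")

# Hypothetical chunk-map schema, just to have something to write.
conn.execute(
    "CREATE TABLE IF NOT EXISTS chunk_map ("
    " inode INTEGER, chunk_index INTEGER, object_key TEXT,"
    " PRIMARY KEY (inode, chunk_index))"
)
with conn:  # local commit is fast; durability to S3 is Litestream's job, asynchronously
    conn.execute(
        "INSERT OR REPLACE INTO chunk_map VALUES (?, ?, ?)",
        (42, 0, "chunks/42/0000"),
    )
```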