https://canonical-microceph.readthedocs-hosted.com/stable/tu...
Impressive as hell software and I am so glad to have it. But man! The insistence on mountains of RAM per TB, and on massive IO, is intimidating.
It strikes me as a classic case of "we need all the interested people to pull in one project, not each start their own". AI may have made this worse than ever.
Every month there's a post of "I just want a simple S3 server" and every single one of them has a different definition of "simple". The moment any project overlaps between the use cases of two "simple S3" projects, they're no longer "simple" enough.
That's probably why hosted S3-like services will exist even if writing "simple" S3 servers is so easy. Everyone has a different opinion of what basic S3 usage is like and only the large providers/startups with business licensing can afford to set up a system that supports all of them.
Or like lodash custom builds.
And every few weeks in the cooking subreddit we get a new person talking about a new soup they made. Just think if we put all 1000 of those cooks in one kitchen with one pot, we'd end up with the best soup in the world.
Anyway, we already have "the one" project everyone can coalesce on: CephFS. If all the redditors actually hopped into one project, it would end up as an even more complex, difficult-to-manage mess, I believe.
Disclaimer: I work at HF
It's a little like asking why you'd use SQL.
And SQL is also very important. And yet, if somebody said "I need to store data, but it's not relational, and I just need 1000 rows, what's the best SQL solution," I would still ask why exactly they needed SQL. They might have a good reason (for example, SQLite can be a weirdly good way to persist application data), but I don't know it yet. That's why I asked.
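To make the SQLite aside concrete, here is a minimal sketch of persisting a few rows of application state with Python's stdlib `sqlite3` module. The table and column names (`app_state`, `key`, `value`) are made up for illustration:

```python
import sqlite3

# Toy example: persist simple key-value application state in SQLite.
# Use a file path instead of ":memory:" for real persistence.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IF NOT EXISTS app_state (key TEXT PRIMARY KEY, value TEXT)"
)
# Upsert: insert, or overwrite the value if the key already exists.
conn.execute(
    "INSERT INTO app_state (key, value) VALUES (?, ?) "
    "ON CONFLICT(key) DO UPDATE SET value = excluded.value",
    ("last_opened", "/home/user/notes.txt"),
)
conn.commit()

row = conn.execute(
    "SELECT value FROM app_state WHERE key = ?", ("last_opened",)
).fetchone()
print(row[0])  # /home/user/notes.txt
```

No server, no schema migration tooling, no network: for small amounts of non-relational-ish data that is often all you need.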
And I want things like backup, replication, scaling, etc. to be generic.
I wrote a git library implementation that uses S3 to store repositories for example.
It doesn't need to care about the POSIX mess, but there are whole swathes of features many implementations miss or leave incomplete, both on the frontend side (serving files with the right headers, or with the right authentication) and the backend (user/policy management, legal hold, versioning, etc.)
It gets even messier when migrating. For example, migrating your backups to garagefs will lose you versioning, which means that if the S3 secret used to write backups gets compromised, your backups are gone; on an implementation that supports versioning, you can just roll back.
Similarly with passwords: some will give you a secret and login but won't allow setting your own, so you'd have to re-key every device using it; some will allow import, but only in a certain format, so you can restore from backup but not migrate from other software.
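The versioning point above can be sketched as a toy model: each PUT to a versioned bucket appends a new version instead of destroying the old one, so a compromised write credential that overwrites your backups can be rolled back. This models the concept only; it is not any real S3 implementation's API.

```python
# Toy model of an S3-style versioned bucket. Class and method names
# are illustrative, not a real client library.
class VersionedBucket:
    def __init__(self):
        self._versions = {}  # key -> list of object bodies, oldest first

    def put(self, key, body):
        # A PUT never destroys data; it appends a new version.
        self._versions.setdefault(key, []).append(body)

    def get(self, key, version=-1):
        # Default reads the latest version; older ones stay reachable.
        return self._versions[key][version]

bucket = VersionedBucket()
bucket.put("backup.tar", b"good backup")
bucket.put("backup.tar", b"garbage written by attacker")

print(bucket.get("backup.tar"))             # latest: attacker's garbage
print(bucket.get("backup.tar", version=0))  # the good backup survives
```

On a non-versioned store, that second `put` would have been the end of the good backup.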
You failed to answer why you even need S3... Why not a filesystem? Full stop. The entire point of S3 is that it's distributed.
I just need something that can do S3 and is reliable and not slow.
Oh, simply that. I'm a simple man, I just need edge-delivered CDN content that never fails and responds within 20ms.
Edit: Minio is written in Go, and is AGPL3... fork it (publicly), strip out the parts you don't want, run it locally.
S3 is simple for the users, not the operators. For replicating something like S3 you need to manage a lot of parts and take a lot of decisions. The design space is huge:
Replication: RAID, distributed copies, distributed erasure codes...
Coordination: centralized, centralized with backup, decentralized, logic in client...
How to handle huge files: nope, client concats them, a coordinator node concats them...
What the network will be like: local networking, WAN, a mix. Slow or fast?
Nature of storage: 24/7 or sporadically connected.
How to handle network partitions, pick CAP sides...
Just for instance: network topology. In your own DC you may say each connection has the same cost. In AWS you may want connections to stay in the same AZ, or use certain IPs for certain source-destination pairs to leverage cheaper prices, and so on...
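One branch of that design space, "huge files: the client concats them," can be sketched in a few lines: the client splits the object into fixed-size parts and reassembles them on read. Part handling here is made up for illustration; real S3 multipart uploads track parts server-side instead.

```python
# Sketch of the "client splits huge files" design option.
# PART_SIZE is tiny for the example; real S3 multipart parts are
# at least 5 MiB each (except the last).
PART_SIZE = 5

def split_parts(data: bytes, part_size: int = PART_SIZE):
    # Cut the object body into fixed-size chunks, last one may be short.
    return [data[i:i + part_size] for i in range(0, len(data), part_size)]

def join_parts(parts):
    # The reader (or a completing step) concatenates parts back together.
    return b"".join(parts)

blob = b"a huge object body"
parts = split_parts(blob)
assert join_parts(parts) == blob
print(len(parts))  # 4 parts: 5 + 5 + 5 + 3 bytes
```

The alternative branches, where a coordinator node does the concatenation or the store handles huge objects natively, move exactly this logic server-side, which is part of why the design space is so large.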
I’ve been at a couple companies where somebody tried putting an S3 interface in front of an NFS cluster. In practice, the semantics of S3 and NFS are different enough that I’ve had to then deal with software failures. Software designed to work with S3 is designed to work with S3 semantics and S3 performance. Hook it up to an S3 API on what is otherwise an NFS server and you can get problems.
“You can get replication with RAID” is technically true, but it’s just not good enough in most NFS systems. S3 style replication keeps files available in spite of multiple node failures.
The problems I’m talking about arise because when you use an S3-compatible API on your NFS system, it’s often true that you’re rolling the dice with three different vendors—you have the storage appliance vendor, you have the vendor for the software talking to S3, and you have Amazon who wrote the S3 client libraries. It’s kind of a nightmare of compatibility problems in my experience. Amazon changes how the S3 client library works, the change wasn’t tested against the storage vendor’s implementation, and boom, things stop working. But your first call is to the application vendor, and they are completely unfamiliar with your storage appliance. :-(
NFS is just an interface. At the end of the day it's on top of an FS. It's entirely possible and sometimes done in practice to replicate the underlying store served by NFS. As you would expect there are several means of doing this from the simple to the truly "high-availability."
evil-olive•5h ago
last year they had a security vulnerability where they allowed a hardcoded "rustfs rpc" token to bypass all authentication [0]
and even worse, if you read the resulting reddit thread [1], someone tracked down the culprit commits: it was introduced in July [2] and not even reviewed by another human before being merged.
then the fix 6 months later [3] mentions fixing a different security vulnerability, and seemingly only fixed the hardcoded token vulnerability by accident. that PR was also only reviewed by an LLM, not a human.
0: https://github.com/rustfs/rustfs/security/advisories/GHSA-h9...
1: https://www.reddit.com/r/selfhosted/comments/1q432iz/update_...
2: https://github.com/rustfs/rustfs/pull/163/
3: https://github.com/rustfs/rustfs/pull/1291
PunchyHamster•5h ago
* create a rustfs user
* run rustfs from root via systemd, but with a bunch of privileges removed
* write logs into /var/logs/ instead of /var/log
Looks like someone told some LLM to make docs about running it as a service and never looked at the output.
nikeee•5h ago
That test matrix uncovered that POST policies were only checked for existence and a valid signature, not whether the request actually conforms to the signed policy. That was an arbitrary object write, resulting in CVE-2026-27607 [2].
In the very first issue for this bug [3], it seemed that the authors of the S3 implementation didn't know the difference between the content-length of a GetObject and the content-length-range of a PostObject. That was kind of a bummer and leads me to advise all my friends not to use rustfs, though I like what they are doing in principle (building a Minio alternative).
[1]: https://github.com/nikeee/lean-s3
[2]: https://github.com/rustfs/rustfs/security/advisories/GHSA-w5...
[3]: https://github.com/rustfs/rustfs/issues/984
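The missing check described above can be sketched like this: after verifying the POST policy's signature, the server must also enforce the policy's conditions, e.g. `content-length-range`, against the actual upload. Function and field names here are illustrative, not rustfs's actual code.

```python
import base64
import json

# Sketch: enforce a signed POST policy's content-length-range condition
# against the actual upload size. Signature verification is assumed to
# have already happened; this is the step the comment says was skipped.
def content_length_allowed(policy_b64: str, upload_size: int) -> bool:
    policy = json.loads(base64.b64decode(policy_b64))
    for cond in policy.get("conditions", []):
        if isinstance(cond, list) and cond and cond[0] == "content-length-range":
            lo, hi = int(cond[1]), int(cond[2])
            return lo <= upload_size <= hi
    return True  # policy declares no size bound

# A policy allowing uploads of at most 1024 bytes.
policy = base64.b64encode(json.dumps(
    {"conditions": [["content-length-range", 0, 1024]]}
).encode()).decode()

print(content_length_allowed(policy, 512))    # True: within the range
print(content_length_allowed(policy, 10**9))  # False: must be rejected
```

Checking only "policy exists and signature is valid" is exactly the failure mode: the attacker can then ignore every condition the signer thought they were imposing.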