Other alternatives:
https://github.com/deuxfleurs-org/garage
https://github.com/rustfs/rustfs
https://github.com/seaweedfs/seaweedfs
https://github.com/supabase/storage
https://github.com/scality/cloudserver
Among others
I'm using SeaweedFS for single-machine S3-compatible storage, and it works great. Though I'm missing out on a lot of administrative nice-to-haves (like easy access controls and a good understanding of capacity vs usage, error rates and so on... this could be a PEBKAC issue though). A minimal compose sketch of this kind of setup follows below.
Ceph I have also used, and it seems to care a lot more about being distributed. If you have fewer than four storage hosts, it feels like it scoffs at you during setup. I was also unable to get it to perform amazingly, though to be fair I was running it via K8s/Rook on top of the Flannel CNI, which is an easy-to-use CNI for toy deployments, not performance-critical systems - so that could be my bad. I would trust a Ceph deployment with data integrity though; it just gives me that feeling of "whoever worked on this really understood distributed systems"... but I can't back that feeling up with any concrete data.
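As promised above, here's roughly what the single-machine setup looks like: one container running SeaweedFS's combined server with the S3 gateway enabled. A sketch only - ports are per the SeaweedFS docs as I understand them, and the host path is illustrative:

seaweedfs:
  image: chrislusf/seaweedfs
  command: server -s3 -dir=/data  # master + volume + filer + S3 gateway in one process
  ports:
    - "8333:8333"  # S3 API
    - "9333:9333"  # master UI/API
  volumes:
    - /opt/seaweedfs:/data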
Overall a great philosophy (targeting self-hosting / independence): clear and easy maintenance, nothing fancy, an easy-to-understand architecture, and straightforward design / operation instructions.
Let's hope the editor has second thoughts on some parts
This was written to store many thousands of images for machine learning:
garaged:
  image: dxflrs/garage:v2.2.0
  ports:
    - "3900:3900"  # S3 API
    - "3901:3901"  # RPC between nodes
    - "3902:3902"  # S3 web / static site endpoint
    - "3903:3903"  # admin API
  volumes:
    - /opt/garage/garage.toml:/etc/garage.toml:ro
    - /opt/garage/meta:/var/lib/garage/meta
    - /opt/garage/data:/var/lib/garage/data
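For completeness, the mounted garage.toml for a single-node deployment might look roughly like this - a sketch based on Garage's quick-start docs, with placeholder secrets (generate rpc_secret yourself, e.g. with openssl rand -hex 32):

metadata_dir = "/var/lib/garage/meta"
data_dir = "/var/lib/garage/data"
db_engine = "lmdb"

replication_factor = 1  # single node, no replication

rpc_bind_addr = "[::]:3901"
rpc_public_addr = "127.0.0.1:3901"
rpc_secret = "<32-byte hex secret>"  # placeholder

[s3_api]
s3_region = "garage"
api_bind_addr = "[::]:3900"
root_domain = ".s3.garage.localhost"

[s3_web]
bind_addr = "[::]:3902"
root_domain = ".web.garage.localhost"
index = "index.html"

[admin]
api_bind_addr = "[::]:3903"
admin_token = "<admin token>"  # placeholder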
The frustrating part isn't the business decision itself. It's that every pivot creates a massive migration burden on teams who bet on the "open" part. When your object storage layer suddenly needs replacing, that's not a weekend project. You're looking at weeks of testing, data migration, updating every service that touches S3-compatible APIs, and hoping nothing breaks in production.
For anyone evaluating infrastructure dependencies right now: the license matters, but the funding model matters more. Single-vendor open source projects backed by VC are essentially on a countdown timer. Either they find a sustainable model that doesn't require closing the source, or they eventually pull the rug.
Community-governed projects under foundations (Ceph under Linux Foundation, for example) tend to be more durable even if they're harder to set up initially. The operational complexity of Ceph vs MinIO was always the tradeoff - but at least you're not going to wake up one morning to a "THIS REPOSITORY IS NO LONGER MAINTAINED" commit.
While I loathe the moves to closed source, you also can't fault them: the hyperscalers just outcompete them with their own software.
I don't expect free shit forever.
From my experience, Ceph works well, but it requires a lot more hardware and dedicated cluster monitoring than something simpler like MinIO; in my eyes, they have somewhat different target audiences. I can throw MinIO into some customer environments as a convenient add-on, which I don't think I could do with Ceph.
Hopefully one of the open-source alternatives to Minio will step in and fill that "lighter" object storage gap.
On AWS S3, there's a storage class called "Infrequent Access", shortened to IA everywhere.
A few weeks ago I had to spend way too much time explaining to a customer that, no, we weren't planning on feeding their data to an AI when, on my reports, I was talking about relying on S3 IA to reduce costs...
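For context, "IA" is just a storage class flag set per object. A sketch with boto3 - the bucket and key names here are made up:

import boto3

s3 = boto3.client("s3")
# STANDARD_IA: cheaper per-GB storage, but retrievals incur a per-GB fee
with open("report-2024.parquet", "rb") as f:
    s3.put_object(
        Bucket="my-bucket",
        Key="reports/report-2024.parquet",
        Body=f,
        StorageClass="STANDARD_IA",
    )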
Working for free is not fun. Having a paid offering with a free community version is not fun. Ultimately, dealing with people who don't pay for your product is not fun. I learnt this the hard way and I guess the MinIO team learnt this as well.
Just be honest from the start that your product will eventually abandon its FOSS licence. Then people can make an informed decision. Or, if you haven't done that, do the right thing and continue to stand by what you originally promised.
When you start something (startup, FOSS project, damn, even marriage) you might start with the best intentions and then learn/change/lose interest. I find it unreasonable to "demand" clarity "at the start", because there is no such thing.
Turning it around, any company that adopts a FOSS project should be honest and pay for something if it doesn't accept the idea that at some point the project may change course (which, obviously, doesn't guarantee much either, because even if you pay for something they can decide to shut it down).
Obviously you cannot "demand" stuff, but you can do your due diligence as the person who chooses a technical solution. Some projects have more clarity than others; for example, the Linux Foundation or CNCF are basically companies sharing costs for things they all benefit from, like Linux or Prometheus monitoring, and it's highly unlikely they'd do a rug pull.
On the other end of the spectrum there are companies with a "free" version of a paid product and an incentive to make the free product crappier so that people pay for the paid version. These should be avoided.
"An informed decision" is not a black or white category, and it definitely isn't when we're talking about risk pricing for B2B services and goods, like what MinIO largely was for those who paid.
Any business with financial modelling worth its salt knows that very few things which are good and free today will stay that way tomorrow. The leadership of a firm you transact with may or may not state this in words, but there are many other ways to infer the likelihood if you pay close attention.
And if you're not paying close attention, it's probably just not that important to your own product. Which risks you consider worth tracking is a direct extension of how you view the world. The primary selling point of MinIO for many businesses was "it's cheaper than AWS for our needs". That's probably still true for many of them, so there's money to be made, at least in the short term.
As with software development in general, we often lack the information we need to make architectural, technical or business decisions.
The common solution is to embrace this. Defer decisions. Make changing easy once you do receive the information (see the sketch below). And build "getting information" into the fabric. We call this "Agile", "Lean", "data driven" and so on.
I think this applies here too.
Very big chance that the MinIO team honestly thought they'd keep it open source, and only now gathered enough "information" to make this "informed decision".
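Concretely, "make changing easy" for object storage can be as simple as never hard-coding the vendor. A minimal sketch with boto3, where the endpoint and credentials are environment-driven placeholders; any S3-compatible backend (MinIO, Garage, SeaweedFS, AWS) then slots in without code changes:

import os
import boto3

def make_storage_client():
    # All backend-specific details live in config, not code,
    # so swapping providers is a deployment change.
    return boto3.client(
        "s3",
        endpoint_url=os.environ.get("S3_ENDPOINT"),  # e.g. http://localhost:3900; None means AWS itself
        aws_access_key_id=os.environ["S3_ACCESS_KEY"],
        aws_secret_access_key=os.environ["S3_SECRET_KEY"],
        region_name=os.environ.get("S3_REGION", "us-east-1"),
    )

s3 = make_storage_client()
s3.upload_file("image.png", "training-images", "images/image.png")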
FOSS is not a moral contract. People working for free owe nothing to anyone. You got what's on the tin: the code is just as open source after they stop as when they started.
The underlying assumption of your message is that you're somehow entitled to their continued labour, which is absolutely not the case.
I find it the other way around. I feel a bit embarrassed and stressed out working with people who have paid for a copy of software I've made (which admittedly is rather rare). When they haven't paid, every exchange is about what's best for humanity and the public in general, i.e. they're not supposed to get some special treatment at the expense of anyone else, and nobody has a right to lord over the other party.
People who don't pay are often not really invested. For them, the link between more work and more costs doesn't exist. That can make them quite a pain, in my experience.
I highly recommend SeaweedFS. I used it in production for a long time before partnering with Wasabi. We still run SeaweedFS for scorching-hot, 1 GiB/s colocated object storage, but Wasabi is our bread-and-butter object storage now.
So far I've switched to RustFS, which seems like a very nice project, though <24 hours is hardly an evaluation period.
Object storage has advantages over regular block storage if it's managed by a cloud provider with a proven record on durability, availability and "infinite" storage space at low cost, such as S3 at Amazon or GCS at Google.
Object storage has zero advantages over regular block storage if you run it yourself:
- It doesn't provide "infinite" storage space: you need to regularly monitor it and manually add new physical storage.
- It doesn't provide high durability and availability. It has lower availability than regular locally attached block storage because of the complicated coordination of object storage state between storage nodes over the network. It usually has lower durability than cloud-provided object storage: if some data is corrupted or lost on the underlying hardware, the chances are low that a DIY object storage setup will properly and automatically recover it.
- It is more expensive because of higher overhead (and, probably, half-baked replication) compared to locally attached block storage.
- It is slower than locally attached block storage because of much higher network latency. The difference is about 1000x: ~100ms for object storage vs ~0.1ms for local block storage.
- It is much harder to configure, operate and troubleshoot than block storage.
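(To put rough numbers on the latency point, here's a quick sketch comparing a local file read with an object GET; the endpoint, bucket, and paths are placeholders, and the absolute numbers will vary wildly with hardware and network:)

import time
import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:3900")  # placeholder endpoint

def avg_ms(fn, n=100):
    # Average wall-clock time per call, in milliseconds
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - start) / n * 1000

local = avg_ms(lambda: open("/var/data/blob.bin", "rb").read())
remote = avg_ms(lambda: s3.get_object(Bucket="bench", Key="blob.bin")["Body"].read())
print(f"local read: {local:.2f} ms/op, object GET: {remote:.2f} ms/op")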
Given all that, I'd recommend taking a look at other databases for logs which don't require object storage for large-scale production setups - for example, VictoriaLogs. It scales to hundreds of terabytes of logs on a single node, and it can scale to petabytes of logs in cluster mode. Both modes are open source and free to use.
Disclaimer: I'm the core developer of VictoriaLogs.
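If it helps, ingestion and querying are plain HTTP. A rough sketch against the JSON-lines endpoint - the address and sample fields are placeholders, and the VictoriaLogs docs are the authoritative reference for the API:

import json
import requests

VLOGS = "http://localhost:9428"  # default VictoriaLogs address

# Ingest newline-delimited JSON; _time and _msg are the special fields
lines = [
    {"_time": "2025-01-01T00:00:00Z", "_msg": "user logged in", "app": "web"},
    {"_time": "2025-01-01T00:00:01Z", "_msg": "payment failed", "app": "billing"},
]
payload = "\n".join(json.dumps(line) for line in lines)
requests.post(f"{VLOGS}/insert/jsonline", data=payload).raise_for_status()

# Query back with LogsQL
resp = requests.post(f"{VLOGS}/select/logsql/query", data={"query": 'app:"billing"'})
print(resp.text)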
That said, my first instinct when I saw MinIO's status was to use file storage, but the RustFS setup has been pretty painless so far. I might still remove it; we'll see.
It was pretty clear they pivoted to their closed source repo back then.
Unless a security issue is reported, it does feel very much the same...