- Code written by the Minio team, which they have full ownership of and can relicense as they wish
- Code written by third-party contributors, where MinIO required the contributors to grant MinIO a BSD license for the contributions, but published them to everyone else only under the AGPL.
So the AGPL doesn't bind MinIO themselves, because of their licensing policy. (Which is why, while pure AGPL might be the open-source-maximalist license, AGPL + CLA is almost at the opposite end of the scale.)
If I get my patches upstream, then I don’t have to waste time reintegrating patches and rebuilding packages when I could instead be doing productive things.
What's the situation for an AGPL fork? Were one to use it, could the company assert rights against users the way they did with Nutanix?
Could you not have a CLA that only allows the project to use a specific license?
If Minio just wanted to use the changes under AGPL, the contributor could just license them under AGPL, no CLA needed.
I'm a fan of that model. It allows for a path to funding, a legal framework to keep contributed code open, and license agility to move to a more permissive license as needed. I've started using that for my own larger projects too.
https://element.io/blog/synapse-now-lives-at-github-com-elem...
(this is also why the Pentium was called the Pentium instead of a number like earlier processors, and why the Game Boy copyright text was embedded into the ROMs)
That said, I increasingly have a very strong distaste for these AI-generated articles. They are long and tedious to read, and it really makes me doubt whether what is written there is true at all. I much prefer a worse-written but to-the-point article.
It’s not encouraging for the future of a project when the maintainer can’t even announce it without having AI do the work.
It would be great if this turns into a high effort, carefully maintained fork. At the moment I’m highly skeptical of new forks from maintainers who are keen on using a lot of AI.
The important part is the human who will do more than just try to get the LLM to do the hard work for them, though. Once software matures the bugs and edge cases become more obscure and require more thoughtful input. AI is great at getting things to some high percentage of completeness, but it takes a skilled human to keep it all moving in the right direction.
I would cite this blog post as an example of lazy LLM use: It's over-dramatic, long, retains all of the poor LLM output styling that most human editors remove, and suggests that the maintainer isn't afraid to outsource everything to the LLM.
I mean, I'm more worried about the AI writing itself than people calling it out.
The AI articles on HN are an absolute disease. Just write your own damn articles if you're asking the rest of us to read them.
I was searching for a fairly simple replacement for S3 for testing. I'd been using MinIO for a while now, and simply ended up implementing my own on top of Postgres. Fun intersection, given the post. (Note: I know it isn't optimal, but since I always have Postgres available it fits well, and I don't have high storage needs, just the API compatibility.)
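The storage layer of such a test shim can be tiny. Here is a minimal sketch of the idea (not the commenter's actual code), using the stdlib `sqlite3` module as a self-contained stand-in for Postgres; the `BlobStore` class, table, and names are illustrative, and it ignores versioning, ACLs, and multipart uploads:

```python
import sqlite3

class BlobStore:
    """Bucket/key/blob rows in one table: the core of a test-only object store."""

    def __init__(self, conn):
        self.conn = conn
        conn.execute(
            "CREATE TABLE IF NOT EXISTS objects ("
            " bucket TEXT NOT NULL, key TEXT NOT NULL, body BLOB NOT NULL,"
            " PRIMARY KEY (bucket, key))"
        )

    def put_object(self, bucket, key, body):
        # Upsert, like S3 PutObject overwriting an existing key.
        self.conn.execute(
            "INSERT INTO objects (bucket, key, body) VALUES (?, ?, ?) "
            "ON CONFLICT(bucket, key) DO UPDATE SET body = excluded.body",
            (bucket, key, body),
        )

    def get_object(self, bucket, key):
        row = self.conn.execute(
            "SELECT body FROM objects WHERE bucket = ? AND key = ?",
            (bucket, key),
        ).fetchone()
        return None if row is None else row[0]

    def list_objects(self, bucket, prefix=""):
        # Rough ListObjects: prefix match, lexicographic order.
        rows = self.conn.execute(
            "SELECT key FROM objects WHERE bucket = ? AND key LIKE ? ORDER BY key",
            (bucket, prefix + "%"),
        ).fetchall()
        return [r[0] for r in rows]

store = BlobStore(sqlite3.connect(":memory:"))
store.put_object("test-bucket", "a/1.txt", b"hello")
store.put_object("test-bucket", "a/2.txt", b"world")
print(store.get_object("test-bucket", "a/1.txt"))      # b'hello'
print(store.list_objects("test-bucket", prefix="a/"))  # ['a/1.txt', 'a/2.txt']
```

The S3-compatible HTTP layer then just maps PUT/GET/LIST requests onto these three methods; with Postgres, the blob column would typically be `BYTEA` instead.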
I'll probably give GarageHQ a more serious look again.
Same goes for AWS markup on rented hardware. ;)
Man I sometimes miss having physical servers.
https://www.chainguard.dev/unchained/secure-and-free-minio-c...
You wouldn't get the other changes in this post (e.g., restoring the admin console) but that's a bit orthogonal.
But there are a bunch of changes to docs, CI workflows, and issue templates, which is the easy part of managing a fork; I've seen plenty of forks that ended up only updating READMEs, CI, and so on.
I'll have more faith in the fork when the maintainers do actual fixes.
This blog post is extremely heavy on LLM written content, which isn’t a promising early sign
> Normally this is where the story ends — a collective sigh, and everyone moves on.
> But I want to tell a different story. Not an obituary — a resurrection.
I’ve seen several announcements of forked open source projects from people who thought that maintaining a fork is easy now that they can have an LLM do all the work. Then their interest trails off when they encounter problems the AI can’t handle for them or the community tires of doing all of the testing and code review for a maintainer who just wants to prompt the LLM and put their name on the project. When someone can’t even write their own announcement without an LLM it’s not an encouraging sign.
For a web GUI, I had been using this project: https://github.com/huncrys/minio-console
I switched to RustFS this week though and am not looking back. I'd recommend it to others as well for small-scale usage. It's maturing rapidly and seems promising.
In fact, if you run software in production, assume security is compromised.
Edit:
https://hub.docker.com/r/pgsty/minio
From the OP's link
I don't see how these two lines can be written together.
The goal is either to remain S3-compatible or to freeze the current interface of the service forever.
As it stands, this fork's compatibility with S3, and with official MinIO itself, will break as soon as either pushes an API update. That works fine for existing users, maybe, but as the projects drift further apart, no new ones will be able to onboard.
With so many things offering S3 compatibility, I’d say it’s de-facto standardized.
E.g. the last implementation I saw was by DuckDB https://github.com/duckdb/duckdb-httpfs/blob/main/src/s3fs.c...
Would be even nicer if the official Docker image supported initializing a default bucket and access key from env variables instead of having to exec into the container and follow https://garagehq.deuxfleurs.fr/documentation/quick-start/ but that's not a dealbreaker.
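For context, the manual setup from the quick-start looks roughly like this (a sketch only; the container, bucket, and key names here are made up, and exact subcommands/flags vary by Garage version):

```shell
# Run Garage's admin CLI inside the running container
alias garage="docker exec -ti garaged /garage"

# Create a bucket and an access key, then grant the key access
garage bucket create my-bucket
garage key create my-app-key
garage bucket allow --read --write my-bucket --key my-app-key
```

An env-var-driven entrypoint would just run this sequence on first boot.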
Note: I only needed the single-node install; it was either this or SeaweedFS. I also used MinIO and Zenko in the past, but even the latter seems pretty much dead.
For the single node use-case, I'm working on https://github.com/uroni/hs5 . The S3 API surface is large, but at this point it covers the basics. By limiting the goals I hope it will be maintainable.
And I say this because MinIO started to actively lean on the ugly parts of the license
Companies like MinIO extending the virality beyond the single software/work, even though that's not intended by the license, gives it a bad reputation. They have fixed https://min.io/compliance now, but I guess it doesn't matter anymore.
LLM generated TL;DR: The factual sections read like a real person who knows what they're doing. The rhetorical flourishes read like someone pasted their draft into Claude and said "make it more compelling." The work deserves better than the prose it got.
LLM output given "<DOC>X</DOC> Identify parts written by an LLM"
Here are the passages that read as LLM-generated rather than naturally written:
*Overwrought dramatic pivots (LLMs love the "Not X — Y" antithesis):* - "Not an obituary — a resurrection." - "Not 'unmaintained' — officially, irreversibly, done." - "That demand doesn't disappear — it just finds its way out."
*Explicitly labeling rhetoric that should speak for itself:* - "The ironic part:" — just show the irony, don't announce it. - "The consensus in the international community is clear:" — "international community" is overbearing. "is clear" is LLM throat-clearing. - "That's the beauty of open-source licensing by design" — "That's the beauty of" is a hallmark LLM filler phrase.
*Grandiose one-liners that try too hard:* - "git clone is the most powerful spell in open source." - "a digital tombstone" - "If December was the clinical death, this February commit was the death certificate." — the metaphor was already established in the heading; extending it here is overworked.
*LLM vagueness / filler:* - "Things are different now." — says nothing. - "Consider:" as a standalone transition into the Elon/Twitter example. - "I believe the maintenance workload is manageable." — the hedging "I believe" adds nothing; just say it's manageable.
*Cliché deployment:* - "the dragon-slayer has become the dragon" (in the related-article blurb) - "Eating your own dog food is the best QA." — explaining the idiom ("dogfooding") one sentence before, then restating it as a maxim, is the LLM pattern of using a phrase and then making sure you understood it.
*The AI-hype paragraph is the worst offender:* > "With tools like Claude Code, the cost of locating and fixing bugs in a complex Go project has dropped by *more than an order of magnitude*. What used to require a dedicated team to maintain a complex infrastructure project can now be handled by *one experienced engineer with an AI copilot*."
This reads like an LLM writing about itself — vague quantification ("order of magnitude"), the buzzword "copilot," and the utopian framing are all telltale. The Elon/Twitter analogy that follows ("Consider:") makes it worse, not better.
*Overall pattern:* The technical/factual sections (the timeline table, the build instructions, the console revert explanation) read like a real person. The editorializing and rhetorical flourishes — especially the intro, the "But Open Source Endures" section, and the "AI Changed the Game" section — are where the LLM voice creeps in most heavily.
Sounds like Puppet's story. $180M raised, ~$1B valuation ca. 2019, sold to Perforce in 2022, public repo taken private and builds commercialized by Perforce in 2024, community fork shipped early 2025.
* Yes you can absolutely spin up a DIY S3 server
* When you run your server against a credible test suite, it throws up a bunch of issues (Ceph's s3-tests is disheartening: 5 passing out of 800)
* Vibe coding can address the core issues and make significant progress on the 800 failures; most of those 800 don't actually matter
* Low trust in the resulting outcome, but I do plan on running some personal infra (shopping list, etc.) off the DIY S3
seneca•2h ago
He seems to believe AI will help lessen the burden. I hope he's able to find other maintainers.
Best of luck!
dijit•1h ago
The most famous one I can think of right now is xz.
dijit•1h ago
But we have to rally around something.