
Beyond Agentic Coding

https://haskellforall.com/2026/02/beyond-agentic-coding
1•todsacerdoti•35s ago•0 comments

OpenClaw ClawHub Broken Windows Theory – If basic sorting isn't working, what is?

https://www.loom.com/embed/e26a750c0c754312b032e2290630853d
1•kaicianflone•2m ago•0 comments

OpenBSD Copyright Policy

https://www.openbsd.org/policy.html
1•Panino•3m ago•0 comments

OpenClaw Creator: Why 80% of Apps Will Disappear

https://www.youtube.com/watch?v=4uzGDAoNOZc
1•schwentkerr•7m ago•0 comments

What Happens When Technical Debt Vanishes?

https://ieeexplore.ieee.org/document/11316905
1•blenderob•8m ago•0 comments

AI Is Finally Eating Software's Total Market: Here's What's Next

https://vinvashishta.substack.com/p/ai-is-finally-eating-softwares-total
1•gmays•8m ago•0 comments

Computer Science from the Bottom Up

https://www.bottomupcs.com/
1•gurjeet•9m ago•0 comments

Show HN: I built a toy compiler as a young dev

https://vire-lang.web.app
1•xeouz•10m ago•0 comments

You don't need Mac mini to run OpenClaw

https://runclaw.sh
1•rutagandasalim•11m ago•0 comments

Learning to Reason in 13 Parameters

https://arxiv.org/abs/2602.04118
1•nicholascarolan•13m ago•0 comments

Convergent Discovery of Critical Phenomena Mathematics Across Disciplines

https://arxiv.org/abs/2601.22389
1•energyscholar•13m ago•1 comments

Ask HN: Will GPU and RAM prices ever go down?

1•alentred•14m ago•0 comments

From hunger to luxury: The story behind the most expensive rice (2025)

https://www.cnn.com/travel/japan-expensive-rice-kinmemai-premium-intl-hnk-dst
2•mooreds•15m ago•0 comments

Substack makes money from hosting Nazi newsletters

https://www.theguardian.com/media/2026/feb/07/revealed-how-substack-makes-money-from-hosting-nazi...
5•mindracer•16m ago•1 comments

A New Crypto Winter Is Here and Even the Biggest Bulls Aren't Certain Why

https://www.wsj.com/finance/currencies/a-new-crypto-winter-is-here-and-even-the-biggest-bulls-are...
1•thm•16m ago•0 comments

Moltbook was peak AI theater

https://www.technologyreview.com/2026/02/06/1132448/moltbook-was-peak-ai-theater/
1•Brajeshwar•17m ago•0 comments

Why Claude Cowork is a math problem Indian IT can't solve

https://restofworld.org/2026/indian-it-ai-stock-crash-claude-cowork/
1•Brajeshwar•17m ago•0 comments

Show HN: Built a space travel calculator with vanilla JavaScript v2

https://www.cosmicodometer.space/
2•captainnemo729•17m ago•0 comments

Why a 175-Year-Old Glassmaker Is Suddenly an AI Superstar

https://www.wsj.com/tech/corning-fiber-optics-ai-e045ba3b
1•Brajeshwar•17m ago•0 comments

Micro-Front Ends in 2026: Architecture Win or Enterprise Tax?

https://iocombats.com/blogs/micro-frontends-in-2026
2•ghazikhan205•19m ago•0 comments

These White-Collar Workers Actually Made the Switch to a Trade

https://www.wsj.com/lifestyle/careers/white-collar-mid-career-trades-caca4b5f
1•impish9208•20m ago•1 comments

The Wonder Drug That's Plaguing Sports

https://www.nytimes.com/2026/02/02/us/ostarine-olympics-doping.html
1•mooreds•20m ago•0 comments

Show HN: Which chef knife steels are good? Data from 540 Reddit threads

https://new.knife.day/blog/reddit-steel-sentiment-analysis
1•p-s-v•20m ago•0 comments

Federated Credential Management (FedCM)

https://ciamweekly.substack.com/p/federated-credential-management-fedcm
1•mooreds•20m ago•0 comments

Token-to-Credit Conversion: Avoiding Floating-Point Errors in AI Billing Systems

https://app.writtte.com/read/kZ8Kj6R
1•lasgawe•21m ago•1 comments

The Story of Heroku (2022)

https://leerob.com/heroku
1•tosh•21m ago•0 comments

Obey the Testing Goat

https://www.obeythetestinggoat.com/
1•mkl95•22m ago•0 comments

Claude Opus 4.6 extends the LLM Pareto frontier

https://michaelshi.me/pareto/
1•mikeshi42•22m ago•0 comments

Brute Force Colors (2022)

https://arnaud-carre.github.io/2022-12-30-amiga-ham/
1•erickhill•25m ago•0 comments

Google Translate apparently vulnerable to prompt injection

https://www.lesswrong.com/posts/tAh2keDNEEHMXvLvz/prompt-injection-in-google-translate-reveals-ba...
1•julkali•25m ago•0 comments

Garage – An S3 object store so reliable you can run it outside datacenters

https://garagehq.deuxfleurs.fr/
722•ibobev•1mo ago

Comments

SomaticPirate•1mo ago
Seeing a ton of adoption of this after the Minio debacle

https://www.repoflow.io/blog/benchmarking-self-hosted-s3-com... was useful.

RustFS also looks interesting but for entirely non-technical reasons we had to exclude it.

Anyone have any advice for swapping this in for Minio?

dpedu•1mo ago
I have not tried either myself, but I wanted to mention that Versity S3 Gateway looks good too.

https://github.com/versity/versitygw

I am also curious how Ceph S3 gateway compares to all of these.

zipzad•1mo ago
I'd be curious to know how versitygw compares to rclone serve S3.
skrtskrt•1mo ago
When I was there, DigitalOcean was writing a complete replacement for the Ceph S3 gateway because its performance under high concurrency was awful.

They just swapped the whole service out of the stack and wrote one in Go because the concurrency management was so much better, and Ceph's team and C++ codebase were too resistant to change.

jiqiren•1mo ago
Unrelated, but one of the more annoying aspects of whatever software they use now is lack of IPv6 for the CDN layer of DigitalOcean Spaces. It means I need to proxy requests myself. :(
Implicated•1mo ago
> but for entirely non-technical reasons we had to exclude it

Able/willing to expand on this at all? Just curious.

NitpickLawyer•1mo ago
Not the same person you asked, but my guess would be that it is seen as a Chinese product.
lima•1mo ago
RustFS appears to be very early-stage with no real distributed systems architecture: https://github.com/rustfs/rustfs/pull/884

I'm not sure if it even has any sort of cluster consensus algorithm? I can't imagine it not eating committed writes in a multi-node deployment.

Garage and Ceph (well, radosgw) are the only open-source S3-compatible object stores that have undergone serious durability/correctness testing. Anything else will most likely eat your data.

KevinatRustFS•1mo ago
Hi there, RustFS team member here! Thanks for taking a look.

To clarify our architecture: RustFS is purpose-built for high-performance object storage. We intentionally avoid relying on general-purpose consensus algorithms like Raft in the data path, as they introduce unnecessary latency for large blobs.

Instead, we rely on Erasure Coding for durability and Quorum-based Strict Consistency for correctness. A write is strictly acknowledged only after the data has been safely persisted to the majority of drives. This means the concern about "eating committed writes" is addressed through strict read-after-write guarantees rather than a background consensus log.

While we avoid heavy consensus for data transfer, we utilize dsync—a custom, lightweight distributed locking mechanism—for coordination. This specific architectural strategy has been proven reliable in production environments at the EiB scale.
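
As a rough illustration of the quorum rule described above (purely a sketch, not actual RustFS code; the drive results below are invented):

  // Illustrative only: each bool stands for one drive's persist result.
  fn acknowledge(write_results: &[bool]) -> bool {
      let persisted = write_results.iter().filter(|&&ok| ok).count();
      persisted * 2 > write_results.len() // strict majority before the client sees success
  }

  fn main() {
      assert!(acknowledge(&[true, true, false]));          // 2 of 3 drives: ack the write
      assert!(!acknowledge(&[true, false, false, false])); // 1 of 4 drives: report failure
  }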

lima•1mo ago
Is there a paper or some other architecture document for dsync?

It's really hard to solve this problem without a consensus algorithm in a way that doesn't sacrifice something (usually correctness in edge cases/network partitions). Data availability is easy(ish), but keeping the metadata consistent requires some sort of consensus, either using Raft/Paxos/..., using strictly commutative operations, or similar. I'm curious how RustFS solves this, and I couldn't find any documentation.

EiB scale doesn't mean much - some workloads don't require strict metadata consistency guarantees, but others do.

dewey•1mo ago
What is this based on, honest question as from the landing page I don't get that impression. Are many committers China-based?
NitpickLawyer•1mo ago
https://rustfs.com.cn/

> Beijing Address: Area C, North Territory, Zhongguancun Dongsheng Science Park, No. 66 Xixiaokou Road, Haidian District, Beijing

> Beijing ICP Registration No. 2024061305-1

dewey•1mo ago
Oh, I misread the initial comment and thought they had to exclude Garage. Thanks!
misnome•1mo ago
They seem to have gone all-in on AI, for commits and ticket management. Not interested in interacting with that.

Otherwise, the built-in admin in a single executable was nice, as was the support for tiered storage, but single-node parallel write performance was pretty unimpressive and started throwing strange errors (investigating which led to the AI ticket discovery).

scottydelta•1mo ago
From what I have seen in previous discussions here (before and since the Minio debacle) and at work, Garage is a solid replacement.
klooney•1mo ago
Seaweed looks good in those benchmarks, I haven't heard much about it for a while.
chrislusf•1mo ago
Disclaimer: I work on SeaweedFS.

Why skip SeaweedFS? It ranks #1 on all benchmarks and has a lot of features.

dionian•1mo ago
can you link benchmarks
chrislusf•1mo ago
It is in the parent comment.
meotimdihia•1mo ago
I can confirm this: I used SeaweedFS to serve 1M daily users with 56 million images / ~100TB on just 2 servers with HDDs only, which Minio couldn't do. SeaweedFS performance is much better than Minio's. The only problem is that the SeaweedFS documentation is hard to understand.
magicalhippo•1mo ago
SeaweedFS is also so optimized for small objects that it can't store larger objects (max 32GiB[1]).

Not a concern for many use-cases, just something to be aware of as it's not a universal solution.

[1]: https://github.com/seaweedfs/seaweedfs?tab=readme-ov-file#st...

chrislusf•1mo ago
Not correct. The files are chunked into smaller pieces and spread to all volume servers.
magicalhippo•1mo ago
Well, then I suggest updating the incorrect readme. It's why I've ignored SeaweedFS.
ted_dunning•1mo ago
SeaweedFS is very nice and takes quite an effort to lose data.
elvinagy•1mo ago
I’m Elvin from the RustFS team in the U.S. Thanks for sharing the benchmark; it’s helpful to see how RustFS performs in real-world setups.

We know trust matters, especially for a newer project, and we try to earn it through transparency and external validation. We were excited to see RustFS recently added as an optional service in Laravel Sail's official Docker environment (PR #822). Having our implementation reviewed and accepted by a major ecosystem like Laravel was an encouraging milestone for us.

If the “non-technical reasons” you mentioned are around licensing or governance, I’m happy to discuss our long-term Apache 2.0 commitment and path to a stable GA.

ai-christianson•1mo ago
I love Garage. I think it has applications beyond being the standard self-hosted S3 alternative.

It's a really cool system for hyperconverged architectures, where storage requests can pull data from the local machine and only hit the network when needed.

singpolyma3•1mo ago
I'd love to hear what configuration you are using for this
Powdering7082•1mo ago
No erasure coding seems like a pretty big loss in terms of how many resources you need to get good resiliency and efficiency.
munro•1mo ago
I was looking at using this with an LTO tape library. It seems the only resiliency is through replication, and that was my main concern with this project: what happens when hardware goes bad?
lxpz•1mo ago
If you have replication, you can lose one of the replicas; that's the point. This is what Garage was designed for, and it works.

Erasure coding is another debate; for now we have chosen not to implement it, but I would personally be open to having it supported in Garage if someone codes it up.

hathawsh•1mo ago
Erasure coding is an interesting topic for me. I've run some calculations on the theoretical longevity of digital storage. If you assume that today's technology is close to what we'll be using for a long time, then cross-device erasure coding wins, statistically. However, if you factor in the current exponential rate of technological development, simply making lots of copies and hoping for price reductions over the next few years turns out to be a winning strategy, as long as you don't have vendor lock-in. In other words, I think you're making great choices.
Dylan16807•1mo ago
I question that math. Erasure coding needs less than half as much space as replication, and imposes pretty small costs itself. Maybe we can say the difference is irrelevant if storage prices will drop 4x over the next five years? But looking at pricing trends right now... that's not likely. Hard drives and SSDs are about the same price they were 5 years ago. The 5 years before that SSDs were seeing good advancements, but hard drive prices only advanced 2x.
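
To make the space comparison concrete (my own back-of-the-envelope numbers, nothing more):

  // Raw bytes stored per byte of user data, for the two schemes discussed.
  fn replication_overhead(copies: f64) -> f64 {
      copies
  }

  fn erasure_overhead(data_shards: f64, parity_shards: f64) -> f64 {
      (data_shards + parity_shards) / data_shards
  }

  fn main() {
      println!("3x replication: {:.2}x", replication_overhead(3.0));    // 3.00x
      println!("8+4 erasure code: {:.2}x", erasure_overhead(8.0, 4.0)); // 1.50x
  }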
fabian2k•1mo ago
Looks interesting for something like local development. I don't intend to run production object storage myself, but some of the stuff in the guide to the production setup (https://garagehq.deuxfleurs.fr/documentation/cookbook/real-w...) would scare me a bit:

> For the metadata storage, Garage does not do checksumming and integrity verification on its own, so it is better to use a robust filesystem such as BTRFS or ZFS. Users have reported that when using the LMDB database engine (the default), database files have a tendency of becoming corrupted after an unclean shutdown (e.g. a power outage), so you should take regular snapshots to be able to recover from such a situation.

It seems like you can also use SQLite, but a default database engine that isn't robust against power failures or crashes seems surprising to me.

igor47•1mo ago
I've been using minio for local dev but that version is unmaintained now. However, I was put off by the minimum requirements for garage listed on the page -- does it really need a gig of RAM?
archon810•1mo ago
The current latest Minio release that is working for us for local development is now almost a year old and soon enough we will have to upgrade. Curious what others have replaced it with that is as easy to set up and has a management UI.
mbreese•1mo ago
I think that's part of the pitch here... swapping out Minio for Garage. Both scale a lot more than for just local development, but local dev certainly seems like a good use-case here.
lxpz•1mo ago
It does not, at least not for a small local dev server. I believe RAM usage should be around 50-100MB, increasing if you have many requests with large objects.
dsvf•1mo ago
I always understood this requirement as "Garage will run fine on hardware with 1GB RAM total" - meaning the 1GB includes the RAM used by the OS and other processes. I think that most current consumer hardware that is a potential Garage host, even on the low end, has at least 1GB of total RAM.
moffkalast•1mo ago
That's not something you can do reliably in software; datacenter-grade NVMe drives come with power-loss protection and additional capacitors to handle that gracefully. Otherwise, if power is cut at the wrong moment, the partition may not be mountable afterwards.

If you really live somewhere with frequent outages, buy an industrial drive that has a PLP rating. Or get a UPS; they tend to be cheaper.

crote•1mo ago
Isn't that the entire point of write-ahead logs, journaling file systems, and fsync in general? A roll-back or roll-forward due to a power loss causing a partial write is completely expected, but surely consumer SSDs wouldn't just completely ignore fsync and blatantly lie that the data has been persisted?

As I understood it, the capacitors on datacenter-grade drives are to give it more flexibility, as it allows the drive to issue a successful write response for cached data: the capacitor guarantees that even with a power loss the write will still finish, so for all intents and purposes it has been persisted, so an fsync can return without having to wait on the actual flash itself, which greatly increases performance. Have I just completely misunderstood this?
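
For what it's worth, here is a minimal sketch of what "persisted" is supposed to mean at the application level (plain standard library, nothing drive-specific; the filename is made up):

  use std::fs::File;
  use std::io::Write;

  // Write, then fsync: sync_all() issues fsync(2) and should only return once
  // the OS and an honest drive report the data durable.
  fn persist(path: &str, data: &[u8]) -> std::io::Result<()> {
      let mut f = File::create(path)?;
      f.write_all(data)?;
      f.sync_all() // the guarantee some consumer drives are accused of fudging
  }

  fn main() -> std::io::Result<()> {
      persist("journal.bin", b"committed record")
  }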

Nextgrid•1mo ago
> ignore fsync and blatantly lie that the data has been persisted

Unfortunately they do: https://news.ycombinator.com/item?id=38371307

btown•1mo ago
If the drives continue to have power, but the OS has crashed, will the drives persist the data once a certain amount of time has passed? Are datacenters set up to take advantage of this?
Nextgrid•1mo ago
> will the drives persist the data once a certain amount of time has passed

Yes, otherwise those drives wouldn't work at all and would have a 100% warranty return rate. The reason they get away with it is that the misbehavior is only a problem in a specific edge-case (forgetting data written shortly before a power loss).

unsnap_biceps•1mo ago
Yes, the drives are unaware of the OS state.
unsnap_biceps•1mo ago
You actually don't need capacitors for rotating media: Western Digital has a feature called "ArmorCache" that uses the rotational energy in the platters to power the drive long enough to sync the volatile cache to non-volatile storage.

https://documents.westerndigital.com/content/dam/doc-library...

toomuchtodo•1mo ago
Very cool, like the ram air turbine that deploys on aircraft in the event of a power loss.
patmorgan23•1mo ago
Good I love engineers
Aerolfos•1mo ago
> but surely consumer SSDs wouldn't just completely ignore fsync and blatantly lie that the data has been persisted?

That doesn't even help if fsync() doesn't do what developers expect: https://danluu.com/fsyncgate/

I think this was the blog post that had a bunch more stuff that can go wrong too: https://danluu.com/deconstruct-files/

But basically fsync itself (sometimes) has dubious behaviour, then the OS layers on top of the kernel handle it dubiously, and then on top of that most databases can ignore fsync errors (and lie that the data was written properly).

So... yes.

lxpz•1mo ago
If you know of an embedded key-value store that supports transactions, is fast, has good Rust bindings, and does checksumming/integrity verification by default such that it almost never corrupts upon power loss (or at least, is always able to recover to a valid state), please tell me, and we will integrate it into Garage immediately.
BeefySwain•1mo ago
(genuinely asking) why not SQLite by default?
lxpz•1mo ago
We were not able to get good enough performance compared to LMDB. We will work on this more though, there are probably many ways performance can be increased by reducing load on the KV store.
skrtskrt•1mo ago
Could you use something like Fly's Corrosion to shard and distribute the SQLite data? It uses CRDT-based reconciliation, which is familiar territory for Garage.
lxpz•1mo ago
Garage already shards data by itself if you add more nodes, and it is indeed a viable path to increasing throughput.
tensor•1mo ago
Keep in mind that write safety comes with performance penalties. You can turn off write protections and many databases will be super fast, but easily corrupt.
srcreigh•1mo ago
Did you try WITHOUT ROWID? Your sqlite implementation[1] uses a BLOB primary key. In SQLite, this means each operation requires 2 b-tree traversals: The BLOB->rowid tree and the rowid->data tree.

If you use WITHOUT ROWID, you traverse only the BLOB->data tree.

Looking up lexicographically similar keys gets a huge performance boost since sqlite can scan a B-Tree node and the data is contiguous. Your current implementation is chasing pointers to random locations in a different b-tree.

I'm not sure exactly whether on disk size would get smaller or larger. It probably depends on the key size and value size compared to the 64 bit rowids. This is probably a well studied question you could find the answer to.

[1]: https://git.deuxfleurs.fr/Deuxfleurs/garage/src/commit/4efc8...
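
For anyone curious, a minimal sketch of the two layouts (assuming the rusqlite crate; the table and column names here are made up, not Garage's):

  use rusqlite::Connection;

  fn main() -> rusqlite::Result<()> {
      let conn = Connection::open_in_memory()?;
      conn.execute_batch(
          "-- Default rowid table: the BLOB key lives in a separate index,
           -- so a lookup walks key -> rowid, then rowid -> row.
           CREATE TABLE kv_rowid (k BLOB PRIMARY KEY, v BLOB);

           -- WITHOUT ROWID: the BLOB key is the b-tree key itself, so a lookup
           -- walks a single tree and nearby keys sit next to each other.
           CREATE TABLE kv_clustered (k BLOB PRIMARY KEY, v BLOB) WITHOUT ROWID;",
      )
  }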

lxpz•1mo ago
Very interesting, thank you. It would probably make sense for most tables but not all of them because some are holding large CRDT values.
asa400•1mo ago
Other than knowing this about SQLite beforehand, is there any way one could discover that this is happening through tracing?
rapnie•1mo ago
I learned that Turso apparently has plans to rewrite libsql [0] in Rust and create a more 'hackable' SQLite alternative altogether. It was apparently discussed in this Developer Voices [1] video, which I haven't watched yet.

[0] https://github.com/tursodatabase/libsql

[1] https://www.youtube.com/watch?v=1JHOY0zqNBY

agavra•1mo ago
Sounds like a perfect fit for https://slatedb.io/ -- it's just that (an embedded, rust, KV store that supports transactions).

It's built specifically to run on object storage. It currently relies on the `object_store` crate, but we're considering OpenDAL instead, so if Garage works with those crates (I assume it does if it's S3-compatible) it should just work OOTB.

evil-olive•1mo ago
For Garage's particular use case, I think SlateDB's "backed by object storage" would be an anti-feature. Their usage of LMDB/SQLite is for the metadata of the object store itself - trying to host that metadata within the object store runs into a circular dependency problem.
fabian2k•1mo ago
I don't really know enough about the specifics here. But my main point isn't about checksums; it's more about something like the WAL in Postgres. For an embedded KV store this is probably not the solution, but my understanding is that there are data structures like LSM trees that would provide similar robustness. But I don't actually understand this topic well enough.

Checksumming detects corruption after it happened. A database like Postgres will simply notice it was not cleanly shut down and put the DB into a consistent state by replaying the write ahead log on startup. So that is kind of my default expectation for any DB that handles data that isn't ephemeral or easily regenerated.

But I also likely have the wrong mental model of what Garage does with the metadata, as I wouldn't have expected that to ever be limited by SQLite.
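
To illustrate the write-ahead-log idea I mean (a toy sketch of my own, not how Postgres or Garage actually implement it; filenames and the key/value format are made up):

  use std::fs::{File, OpenOptions};
  use std::io::{BufRead, BufReader, Write};

  // Append an entry and fsync it before applying it anywhere else.
  fn log_write(log: &mut File, key: &str, val: &str) -> std::io::Result<()> {
      writeln!(log, "{}\t{}", key, val)?;
      log.sync_all() // durable before the write is acknowledged
  }

  // On startup, replay whatever the log contains to rebuild a consistent state.
  fn replay(path: &str) -> std::io::Result<Vec<(String, String)>> {
      let mut state = Vec::new();
      for line in BufReader::new(File::open(path)?).lines() {
          let line = line?;
          if let Some((k, v)) = line.split_once('\t') {
              state.push((k.to_string(), v.to_string()));
          }
      }
      Ok(state)
  }

  fn main() -> std::io::Result<()> {
      let mut log = OpenOptions::new().create(true).append(true).open("wal.log")?;
      log_write(&mut log, "bucket/object", "v1")?;
      println!("{:?}", replay("wal.log")?);
      Ok(())
  }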

lxpz•1mo ago
So the thing is, different KV stores have different trade-offs, and for now we haven't yet found one that has the best of all worlds.

We do recommend SQLite in our quick-start guide to set up a single-node deployment for small/moderate workloads, and it works fine. The "real world deployment" guide recommends LMDB because it gives much better performance (with the current state of Garage, not to say that this couldn't be improved), and the risk of critical data loss is mitigated by the fact that such a deployment would use multi-node replication, meaning the data can always be recovered from another replica if one node is corrupted and no snapshot is available. Maybe this should be worded better; I can see that the alarmist wording of the deployment guide is creating quite a debate, so we probably need to make these facts clearer.

We are also experimenting with Fjall as an alternate KV engine based on LSM trees, as it theoretically has good speed and crash resilience, which would make it the best option. We are just not recommending it by default yet, as we don't have much data to confirm that it lives up to these expectations.

patmorgan23•1mo ago
Valkey?
__turbobrew__•1mo ago
RocksDB possibly. Used in high throughput systems like Ceph OSDs.
johncolanduoni•1mo ago
I’ve used RocksDB for this kind of thing in the past with good results. It’s very thorough from a data corruption detection/rollback perspective (this is naturally much easier to get right with LSMs than B+ trees). The Rust bindings are fine.

It’s worth noting too that B+ tree databases are not a fantastic match for ZFS - they usually require extra tuning (block sizes, other stuff like how WAL commits work) to get performance comparable to XFS/ext4. LSMs on the other hand naturally fit ZFS’s CoW internals like a glove.

VerifiedReports•1mo ago
It's "key/value store", FYI
abustamam•1mo ago
Wikipedia seems to find "key-value store" an appropriate term.

https://en.wikipedia.org/wiki/Key%E2%80%93value_database

VerifiedReports•1mo ago
See above.
abustamam•1mo ago
Still not sure what point you're trying to make. You attempted to correct GP's usage of "key-value store" and I merely pointed out that it is the widely accepted term for what is being discussed.

Whether or not it's semantically "correct" because of usage of hyphen vs slash is irrelevant to that point.

kqr•1mo ago
It's not a store of "keys or values", no. It's a store of key-value pairs.
VerifiedReports•1mo ago
A key-value store would be a store of one thing: key values. A hyphen combines two words to make an adjective, which describes the word that follows:

  A used-car lot

  A value-added tax

  A key-based access system
When you have two exclusive options, two sides to a situation, or separate things, you separate them with a slash:

  An on/off switch

  A win/win situation

  A master/slave arrangement
Therefore a key-value store and a key/value store are quite different.
kqr•1mo ago
All of your slash examples represent either–or situations. A switch turns it on or off, the situation is a win in the first outcome or a win in the second outcome, etc.

It's true that key–value store shouldn't be written with a hyphen. It should be written with an en dash, which is used "to contrast values or illustrate a relationship between two things [... e.g.] Mother–daughter relationship"

https://en.wikipedia.org/wiki/Dash#En_dash

I just didn't want to bother with typography at that level of pedanticism.

VerifiedReports•1mo ago
No, they don't. A master/slave configuration (of hard drives, for example) involves two things. I specifically included it to head off the exact objection you're raising.

"...the slash is now used to represent division and fractions, as a date separator, in between multiple alternative or related terms"

-Wikipedia

And what is a key/value store? A store of related terms.

And if you had a system that only allowed a finite collection of key values, where might you put them? A key-value store.

kqr•1mo ago
The hard drives are either master or slave. A hard drive is not a master-and-slave.
VerifiedReports•1mo ago
Exactly. And an entry in a key/value store is either a key or a value. Not both.
kqr•1mo ago
No, an entry is a key-and-value pair. Are you seriously suggesting it is possible to add only keys without corresponding values, or vice versa?
DonHopkins•1mo ago
Which is infinite if value is zero.
__padding•1mo ago
I've not looked at it in a while, but sled/rio were interesting up-and-coming options: https://github.com/spacejam/sled
ndyg•1mo ago
Fjall

https://github.com/fjall-rs/fjall

yupyupyups•1mo ago
Depending on the underlying storage being reliable is far from unique to Garage. This is what most other services do too, unless we're talking about something like Ceph, which manages the physical storage itself.

Standard filesystems such as ext4 and XFS don't have data checksumming, so you'll have to rely on another layer to provide integrity. Regardless, that's not Garage's job IMO. It's good that they're keeping their design simple and focusing their resources on implementing the S3 spec.

nijave•1mo ago
The assumption is nodes are in different fault domains so it'd be highly unlikely to ruin the whole cluster.

LMDB mode also runs with flush/syncing disabled

doctorpangloss•1mo ago
https://git.deuxfleurs.fr/Deuxfleurs/garage/src/branch/main-...

this is the reliability question, no?

lxpz•1mo ago
I talked about the meaning of the Jepsen test and the results we obtained in the FOSDEM'24 talk:

https://archive.fosdem.org/2024/schedule/event/fosdem-2024-3...

Slides are available here:

https://git.deuxfleurs.fr/Deuxfleurs/garage/src/commit/4efc8...

agwa•1mo ago
Does this support conditional PUT (If-Match / If-None-Match)?
codethief•1mo ago
https://news.ycombinator.com/item?id=46328218
faizshah•1mo ago
One really useful use case of Garage for me has been data engineering scripts. I can just use the S3 integration that every tool has to dump to Garage, and then I can more easily scale up to the cloud later.
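
For anyone wanting to try the same thing, a rough sketch of pointing a stock S3 client at a local Garage endpoint (assuming the aws-sdk-s3, aws-config, and tokio crates; the bucket, key, and body are placeholders, and 3900 is the S3 API port from Garage's quick-start config):

  use aws_sdk_s3 as s3;

  #[tokio::main]
  async fn main() -> Result<(), s3::Error> {
      // Reuse normal AWS-style credentials/region from the environment,
      // but point the client at the local Garage endpoint.
      let base = aws_config::load_from_env().await;
      let conf = s3::config::Builder::from(&base)
          .endpoint_url("http://127.0.0.1:3900")
          .force_path_style(true)
          .build();
      let client = s3::Client::from_conf(conf);

      // From here on it's plain S3: the same code later runs against the cloud.
      client
          .put_object()
          .bucket("scratch")
          .key("example.csv")
          .body(s3::primitives::ByteStream::from_static(b"a,b\n1,2\n"))
          .send()
          .await?;
      Ok(())
  }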
Eikon•1mo ago
Unfortunately, this doesn’t support conditional writes through if-match and if-none-match [0] and thus is not compatible with ZeroFS [1].

[0] https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/1052

[1] https://github.com/Barre/ZeroFS

chrislusf•1mo ago
I work on SeaweedFS. It has support for these if conditions, and a lot more.
wyattjoh•1mo ago
Wasn't expecting to see it hosted on forgejo. Kind of a breath of fresh air to be honest.
thhck•1mo ago
BTW, https://deuxfleurs.fr/ is one of the most beautiful websites I have ever seen
codethief•1mo ago
It's beautiful from an artistic point of view but also rather hard to read and probably not very accessible (haven't checked it, though, since I'm on my phone).
isoprophlex•1mo ago
Works perfectly on an iphone. I can't attest to the accessibility features, but the aesthetic is absolutely wonderful. Something I love, and went for on my own portfolio/company website... this is executed 100x better tho, clearly a labor of love and not 30 minutes of shitting around in vi.
self_awareness•1mo ago
Well it's ASCII-themed but it's completely unreadable in terminal links/lynx.
apawloski•1mo ago
Is it the same consistency model as S3? I couldn't see anything about it in their docs.
lxpz•1mo ago
Read-after-write consistency: yes (after PutObject has finished, the object will be immediately visible in all subsequent requests, including GetObject and ListObjects).

Conditional writes: no, we can't do them with CRDTs, which are the core of Garage's design.

skrtskrt•1mo ago
Does RAMP or CURE offer any possibility of conditional writes with CRDTs? I have had these papers on my reading list for months, specifically wondering if they could be applied to Garage.

https://dd.thekkedam.org/assets/documents/publications/Repor... http://www.bailis.org/papers/ramp-sigmod2014.pdf

lxpz•1mo ago
I had a very quick look at these two papers; it looks like neither of them allows the implementation of compare-and-swap, which is required for if-match / if-none-match support. They have a weaker definition of a "transaction", which is to be expected, as they only implement causal consistency at best, not consensus, whereas consensus is required for compare-and-swap.
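
To spell out the compare-and-swap semantics we'd need (a self-contained illustration of my own, not Garage code): the write only succeeds if the stored version still matches what the caller saw, and agreeing on that "current version" across replicas is exactly the consensus problem.

  use std::collections::HashMap;

  // key -> (version, body); expected_version == None models If-None-Match: *
  fn put_if_match(
      store: &mut HashMap<String, (u64, Vec<u8>)>,
      key: &str,
      expected_version: Option<u64>,
      body: Vec<u8>,
  ) -> Result<u64, &'static str> {
      let current = store.get(key).map(|(v, _)| *v);
      match (current, expected_version) {
          // Create only if the key does not exist yet.
          (None, None) => {
              store.insert(key.to_string(), (1, body));
              Ok(1)
          }
          // Overwrite only if the object is unchanged since the caller read it.
          (Some(v), Some(expected)) if v == expected => {
              store.insert(key.to_string(), (v + 1, body));
              Ok(v + 1)
          }
          _ => Err("precondition failed"), // HTTP 412
      }
  }

  fn main() {
      let mut store = HashMap::new();
      assert_eq!(put_if_match(&mut store, "k", None, b"v1".to_vec()), Ok(1));
      assert!(put_if_match(&mut store, "k", None, b"v2".to_vec()).is_err());
      assert_eq!(put_if_match(&mut store, "k", Some(1), b"v2".to_vec()), Ok(2));
  }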
skrtskrt•1mo ago
ack - makes sense, thank you for looking!
topspin•1mo ago
No tags on objects.

Garage looks really nice: I've evaluated it with test code and benchmarks and it looks like a winner. Also, very straightforward deployment (self contained executable) and good docs.

But no tags on objects is a pretty big gap, and I had to shelve it. If Garage folk see this: please think on this. You obviously have the talent to make a killer application, but tags are table stakes in the "cloud" API world.

lxpz•1mo ago
Thank you for your feedback, we will take it into account.
topspin•1mo ago
Great, and thank you.

I really, really appreciate that Garage accommodates running as a single node without work-arounds or special configuration that yield some kind of degraded state. Despite the single-minded focus on distributed operation you no doubt hear about endlessly (as seen in some comments here), there are, in fact, traditional use cases where someone will be attracted to Garage only for the API compatibility, and where they will achieve production availability sufficient to their needs by means other than clustering.

VerifiedReports•1mo ago
What are "tags on objects?"
topspin•1mo ago
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object...

Arbitrary name+value pairs attached to S3 objects and buckets, and readily available via the S3 API. Metadata, basically. AWS has some tie-ins with permissions and other features, but tags can be used for any purpose. You might encode video multiple times at different bitrates, and store the rate in a tag on each object, for example. Tags are an affordance used by many applications for countless purposes.

VerifiedReports•1mo ago
Thanks! I understand what tags are, but not what an "object" was in this context. Your example of multiple encodings of the same video seems very good.
JonChesterfield•1mo ago
Corrupts data on power loss according to their own docs. Like what you get outside of data centers. Not reliable then.
lxpz•1mo ago
Losing a node is a regular occurrence, and a scenario for which Garage has been designed.

The assumption Garage makes, which is well-documented, is that of 3 replica nodes, only 1 will be in a crash-like situation at any time. With 1 crashed node, the cluster is still fully functional. With 2 crashed nodes, the cluster is unavailable until at least one additional node is recovered, but no data is lost.

In other words, Garage makes a very precise promise to its users, which is fully respected. Database corruption upon power loss enters in the definition of a "crash state", similarly to a node just being offline due to an internet connection loss. We recommend making metadata snapshots so that recovery of a crashed node is faster and simpler, but it's not required per se: Garage can always start over from an empty database and recover data from the remaining copies in the cluster.

To talk about more concrete scenarios: if you have 3 replicas in 3 different physical locations, the assumption of at most one crashed node is pretty reasonable: it's quite unlikely that 2 of the 3 locations will be offline at the same time. Concerning data corruption on a power loss, the probability of losing power at 3 distant sites at the exact same time with the same data in the write buffers is extremely low, so I'd say in practice it's not a problem.

Of course, this all implies a Garage cluster running with 3-way replication, which everyone should do.
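
To make the quorum arithmetic behind this explicit (an illustration of the assumption, not Garage internals):

  // n copies; a write is acked by wq nodes and a read asks rq nodes.
  fn quorums_overlap(n: usize, wq: usize, rq: usize) -> bool {
      wq + rq > n // every read quorum intersects every write quorum
  }

  fn available(n: usize, crashed: usize, quorum: usize) -> bool {
      n - crashed >= quorum
  }

  fn main() {
      assert!(quorums_overlap(3, 2, 2)); // reads always see the latest acked write
      assert!(available(3, 1, 2));       // one crashed node: still fully functional
      assert!(!available(3, 2, 2));      // two crashed nodes: unavailable, not lost
  }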

jiggawatts•1mo ago
So if you put a 3-way cluster in the same building and they lose power together, then what? Is your data toast?
lxpz•1mo ago
If I make certain assumptions and you respect them, I will give you certain guarantees. If you don't respect them, I won't guarantee anything. I won't guarantee that your data will be toast either.
Dylan16807•1mo ago
If you can't guarantee anything for all the nodes losing power at the same time, that's really bad.

If it's just the write buffer at risk, that's fine. But the chance of overlapping power loss across multiple sites isn't low enough to risk all the existing data.

rakoo•1mo ago
I disagree that it's bad; it's a choice. You can't protect against everything. The team did the math and decided that the cost of protecting against this very low-probability event isn't worth it. If all the nodes lose power, you may have a bigger problem than that.
Dylan16807•1mo ago
Power outages across big areas are common enough.

It's downright stupid if you build a system that loses all existing data when all nodes go down uncleanly, not even simultaneously but just overlapping. What if you just happen to input a shutdown command the wrong way?

I really hope they meant to just say the write buffer gets lost.

rakoo•1mo ago
That's why you need to go to other regions, not remain in the same area. Putting all your eggs in one basket (single area) _is_ stupid. Having a single shutdown command for the whole cluster _is_ stupid. Still accepting writes when the system is in a degraded state _is_ stupid. Don't make it sound worse than it actually is just to prove your point.
Dylan16807•1mo ago
> Still accepting writes when the system is in a degraded state _is_ stupid.

Again, I'm not concerned for new writes, I'm concerned for all existing data from the previous months and years.

And getting into this situation only takes one wide outage or one bad push that takes down the cluster. Even if that's stupid, it's a common enough kind of stupid that you should never risk your data on the certainty that you won't make that mistake.

You can't protect against everything, but you should definitely protect against unclean shutdown.

rakoo•1mo ago
If it's a common enough occurrence to have _all_ your nodes down at the same time maybe you should reevaluate your deployment choices. The whole point of multi-nodes clustering is that _some_ of the nodes will always be up and running otherwise what you're doing is useless.

Also, Garage gives you the ability to automatically snapshot the metadata, plus advice on how to do the snapshotting at the filesystem level and restore from it.

Dylan16807•1mo ago
All nodes going down doesn't have to be common to make that much data loss a terrible design. It just has to be reasonably possible. And it is. Thinking your nodes will never go down together is hubris. Admitting the risk is being realistic, not something that makes the system useless.

How do filesystem level snapshots work if nodes might get corrupted by power loss? Booting from a snapshot looks exactly the same to a node as booting from a power loss event. Are you implying that it does always recover from power loss and you're defending a flaw it doesn't even have?

rakoo•1mo ago
No, the snapshotting and restore are manual
InitialBP•1mo ago
It sounds like that's a possibility, but why on earth would you take the time to set up a 3-node cluster of object storage for reliability and ignore one of the key tenets of what makes it reliable?
JonChesterfield•1mo ago
That is a much stronger guarantee than your documentation currently claims. One site falling over and being rebuilt without loss is great. One site losing power, corrupting the local state, then propagating that corruption to the rest of the cluster would not be fine. Different behaviours.
lxpz•1mo ago
Fair enough, we will work on making the documentation clearer.
JonChesterfield•1mo ago
I think this is one where the behaviour is obvious to you but not to people first running across the project. In particular, whether power loss could do any of:

- you lose whatever writes to s3 haven't finished yet, if any

- the local node will need to repair itself a bit after rebooting

- the local node is now trashed and will have to copy all data back over

- all the nodes are now trashed and it's restore from backup time

I've been kicking the tyres for a bit and I think it's the happy case in the above, but lots of software out there completely falls apart on crashes, so it's not generally a safe assumption. I think the behaviour is that SQLite on ZFS doesn't care about pulling the power cable out; LMDB is a bit further down the list.

ekjhgkejhgk•1mo ago
Anybody understand how this compares with Vast?
allanrbo•1mo ago
I use Syncthing a lot. Is Garage only really useful if you specifically want to expose a drop-in S3-compatible API, or does it also provide other benefits over Syncthing?
lxpz•1mo ago
They are not solving the same problem.

Syncthing will synchronize a full folder between an arbitrary number of machines, but you still have to access this folder one way or another.

Garage provides an HTTP API for your data, and handles internally the placement of this data among a set of possible replica nodes. But the data is not in the form of files on disk like the ones you upload to the API.

Syncthing is good for, e.g., synchronizing your documents or music collection between computers. Garage is good as a storage service for back-ups with e.g. Restic, for media files stored by a web application, for serving personal (static) web sites to the Internet. Of course, you can always run something like Nextcloud in front of Garage and get folder synchronization between computers somewhat like what you would get with Syncthing.

But to answer your question, yes, Garage only provides a S3-compatible API specifically.

sippeangelo•1mo ago
You use Syncthing for object storage?
supernes•1mo ago
I tried it recently. Uploaded around 300 documents (1GB) and then went to delete them. Maybe my client was buggy, because the S3 service inside the container crashed and couldn't recover - I had to restart it. It's a really cool project, but I wouldn't really call it "reliable" from my experience.
awoimbee•1mo ago
How is Garage for a simple local dev env? I recently used SeaweedFS since it has a super simple minimal setup compared to Garage, which seemed to require a config file just to get started.
adamcharnock•1mo ago
Copy/paste from a previous thread [0]:

We’ve done some fairly extensive testing internally recently and found that Garage is somewhat easier to deploy in comparison to our existing use of MinIO, but is not as performant at high speeds. IIRC we could push about 5 gigabits of (not small) GET requests out of it, but something blocked it from reaching the 20-25 gigabits (on a 25g NIC) that MinIO could reach (also 50k STAT requests/s, over 10 nodes)

I don’t begrudge it that. I get the impression that Garage isn’t necessarily focussed on this kind of use case.

---

In addition:

Next time we come to this we are going to look at RustFS [1], as well as Ceph/Rook [2].

We can see we're going to have to move away from MinIO in the foreseeable future. My hope is that the alternatives get a boost of interest given the direction MinIO is now taking.

[0]: https://news.ycombinator.com/item?id=46140342

[1]: https://rustfs.com/

[2]: https://rook.io/

hardwaresofton•1mo ago
Please also consider including SeaweedFS in the testing.
__turbobrew__•1mo ago
I wouldn't use Rook if you solely want S3. It is a massively complex system which you really need to invest in understanding, or else your cluster will croak at some point and you will have no idea how to fix it.
breakingcups•1mo ago
Is there a better solution for self-healing S3 storage that you could recommend? I'm also curious what will make a Rook cluster croak after some time, and what kind of maintenance is required in your experience.
adastra22•1mo ago
ceph?
yupyupyups•1mo ago
Rook is ceph.
adamcharnock•1mo ago
Haven't used it yet, but RustFS sounds like it has self-healing:

https://docs.rustfs.com/troubleshooting/healing.html

__turbobrew__•1mo ago
I have unfortunately gotten a Ceph cluster into a bad enough state that I just had to delete the pools and start from scratch. It was due to improper sequencing when removing OSDs, but that is kind of the point: you have to know what you are doing to do things safely. For the most part I have so far learned by blundering things and learning hard lessons. Ceph clusters, when mistreated, can get into death spirals that only an experienced practitioner can avert by very carefully modifying cluster state through things like upmaps. You also need to make sure you understand your failure domains and how to spread mons and OSDs across the domains to properly handle failure. Lots of people don't think about this, and then one day a rack goes poof, you didn't replicate your data across racks, and you have data loss. Same thing with mons: you should be deploying mons across at least 3 failure domains (ideally 3 different datacenters) to maintain quorum during an outage.
nine_k•1mo ago
They explicitly say that top performance is not a goal: «high performances constrain a lot the design and the infrastructure; we seek performances through minimalism only» (https://garagehq.deuxfleurs.fr/documentation/design/goals/)

But it might be interesting to see where the time is spent. I suspect they may be doing fewer things in parallel than MinIO, but maybe it's something entirely different.

NL807•1mo ago
>I get the impression that Garage isn’t necessarily focussed on this kind of use case.

I wouldn't be surprised if this will be fixed sometime in the future.

throwaway894345•1mo ago
> We can see we're going to have to move away from MinIO in the foreseeable future.

My favorite thing about all of this is that I had just invested a ton of time in understanding MinIO and its Kubernetes operator and got everything into a state that I felt good about. I was nearly ready to deploy it to production when the announcement was released that they would not be supporting it.

I'm somewhat surprised that no one is forking it (or I haven't heard about any organizations of consequence stepping up, anyway) instead of all of these projects rebuilding it from scratch.

BadWolfStartup•1mo ago
You can also just pay for MinIO and treat it like any other commercial dependency, with support and clearer expectations around licensing and roadmap, but forks are a different story: unless there’s a well-funded company or solid consortium behind them, you’re mostly just trading one risk for another.
riku_iki•1mo ago
> with support and clearer expectations around licensing and roadmap

nothing prevents them from hiking pricing, so expectations are not clear.

johncolanduoni•1mo ago
Somewhat unrelated, but I just looked at the RustFS docs intro[1] after seeing it here. It has this statement:

> RustFS is a high-performance, distributed object storage software developed using Rust, the world's most popular memory-safe language.

I’m actually something of a Rust booster, and have used it professionally more than once (including working on a primarily Rust codebase for a while). But it’s hard to take a project’s docs seriously when it describes Rust as “the world’s most popular memory-safe language”. Java, JavaScript, Python, even C# - these all blow it out of the water in popularity and are unambiguously memory safe. I’ve had a lot more segfaults in Rust dependencies than I have in Java dependencies (though both are minuscule in comparison to e.g. C++ dependencies).

[1]: https://docs.rustfs.com/installation/

teiferer•1mo ago
It's hard to take a project seriously if it focuses so much on the language it's written in. As a user, I don't care. Show me the results (bug tracker with low rate of issues), that's what I care about. Whether you program in Rust or C or Java or assembly or PHP.
limagnolia•1mo ago
As a potential user of an open source project, I care a fair bit about what language it is implemented in. For open source projects, I prefer languages and ecosystems I am familiar and comfortable with. I may need to fix bugs, add features, or otherwise contribute back to the project, and thus I am more likely to pick a solution in a language I am comfortable with than one I am not, given my other needs and priorities are met.
PunchyHamster•1mo ago
The docs and the marketing are a bit of a mess, tho I'm just gonna blame that on the culture barrier as the devs are Chinese
woodruffw•1mo ago
I agree, although I’m guessing they’re measuring “most popular” as in “most beloved” and not as in “most used.” That’s the metric that StackOverflow puts out each year.
riedel•1mo ago
>Secure: RustFS is written in Rust, a memory-safe language, so it is 100% secure

[0]

qed

[0] https://docs.rustfs.com/installation/

tormeh•1mo ago
Oh wow, they really wrote that. I love Rust but this is clown shit.
afiori•1mo ago
I agree that it is a bad idea to describe Rust this way, but they likely meant memory safety as used in https://www.ralfj.de/blog/2025/07/24/memory-safety.html, meaning that shared mutability is thread-unsafe. I am unsure about Java and JavaScript, but I think almost every language on the popular memory-safe list fails this test.

Again the statement is probably still untrue and bad marketing, but I suspect this kind of reasoning was behind it

Of course Rust technically fails too since `unsafe` is a language feature
johncolanduoni•1mo ago
I don't have an issue with `unsafe` - Java has the mythical unsafe object, C# has its own unsafe keyword, Python has FFI, etc. The title of that blog post - that there is no memory safety without thread safety - is not quite true, and it acknowledges how Java, C#, and Go have strong memory safety while not forbidding races. Even the "break the language" framing seems like special pleading; I'd argue that Java permitting reading back a sheared long (64-bit) integer due to a data race does not break the language the same way writing to a totally unintended memory area or smashing the stack does, and that this distinction is useful. Java data races that cause actual exploitable vulnerabilities are very, very rare.
elvinagy•1mo ago
I am Elvin, from the RustFS team in the U.S.

Thanks for the reality check on our documentation. We realize that some of our phrasing sounded more like marketing hype than a technical spec. That wasn’t our intent, and we are currently refining our docs to be more precise and transparent.

A few points to clarify where we're coming from:

1. The Technical Bet on Rust: Rust wasn't a buzzword choice for us. We started this project two years ago with the belief that the concurrency and performance demands of modern storage—especially for AI-driven workloads—benefit from a foundation with predictable memory behavior, zero-cost abstractions, and no garbage collector. These properties matter when you care about determinism and tail latency.

2. Language Safety vs. System Design: We're under no illusion that using a memory-safe language automatically makes a system "100% secure." Rust gives us strong safety primitives, but the harder problems are still in distributed systems design, failure handling, and correctness under load. That's where most of our engineering effort is focused.

3. Giving Back to the Ecosystem: We're committed to the ecosystem we build on. RustFS is a sponsor of the Rust Foundation, and as we move toward a global, Apache 2.0 open-source model, we intend to contribute back in more concrete ways over time.

We know there’s still work to do on the polish side, and we genuinely appreciate the feedback. If you have specific questions about our implementation details or the S3 compatibility layer, I’m happy to dive into the technical details.

Emjayen•1mo ago
Those rates are peanuts considering that a decade ago saturating 40G, per core, was more than reasonable via standard userspace networking, with at least a few copies in the datapath.
PunchyHamster•1mo ago
passing blocks of memory around vs referencing filesystem/database, ACLs, authentication and SSL
Roark66•1mo ago
Having just finished a "hobby size" setup of Rook-Ceph on 3 N100 mini PCs, with every service fitting in a couple hundred MB of RAM (one service needs up to 3GB when starting, but then runs at around 250MB), I'd ask: why not Ceph?

At work I'm typically a consumer of such services from large cloud providers. I've read in a few places how "difficult" it is, how you need "4GB minimum RAM for most services" and how "friends do not let friends run Ceph below 10Gb".

But this setup runs on a non-dedicated 2.5Gb interface (there is VLAN segmentation and careful QoSing).

My benchmarks show I'm primarily network latency and bandwidth limited. By the very definition you can't get better than that.

There were many factors why I chose Ceph and not Garage, Seaweed, or MinIO. (One of the biggest is that Ceph kills two birds with one stone for me: block and object.)

PunchyHamster•1mo ago
Ceph is far higher on RAM usage and complexity. Yeah, if you need block storage in addition it's a good choice, but for anything smaller than half a rack of devices it's kind of overkill.

Also, from our experience, the docs outright lie about Ceph's OSD memory usage; we've seen double or more what the docs claim (8-10GB instead of 4).

PunchyHamster•1mo ago
My small adventure with RustFS is that it is somewhat underbaked at the moment.

And it is also already rigged for a rug-pull:

https://github.com/rustfs/rustfs/blob/main/rustfs/src/licens...

evil-olive•1mo ago
yeah, their docs look pretty comprehensive, but there's a disturbing number of 404s that scream "not ready for prime-time" to me.

from https://rustfs.com/ if you click Documentation, it takes you to their main docs site. there's a nav header at the top, if you click Docs there...it 404s.

"Single Node Multiple Disk Installation" is a 404. ditto "Terminology Explanation". and "Troubleshooting > Node Failure". and "RustFS Performance Comparison".

on the 404 page, there's a "take me home" button...which also leads to a 404.

elvinagy•1mo ago
Thanks for flagging this and for taking the time to point out the broken links. We open-sourced RustFS only a few months ago, and while we’ve been heavily focused on getting the core system to GA, that has admittedly created some documentation debt.

We’re actively reviewing the docs and cleaning up any 404s or navigation issues we can find. For the specific 404 you mentioned, we haven’t been able to reproduce it on our end so far, but we’re continuing to investigate in case it’s environment- or cache-related.

On the licensing side, we want to be clear that we’re fully committed to Apache 2.0 for the long term.

eduardogarza•1mo ago
I use this for booting up S3-compatible buckets for local development and testing -- paired up with s5cmd, I can seed 15GB and over 60,000 items (seed/mock data) in < 60s... have a perfect replica of a staging environment with Docker containers (api, db, cache, objects) all up in less than 2mins. Super simple to set up for my case and been working great.

Previously I used LocalStack S3 but ultimately didn't like that persistence isn't available in the OSS version. MinIO OSS is apparently no longer maintained? Also looked at SeaweedFS and RustFS, but from a quick read into them, this one was the easiest to set up.

chrislusf•1mo ago
I work on SeaweedFS. So very biased. :)

Just run "weed sever -s3 -dir=..." to have an object store.

eduardogarza•1mo ago
I'll try it!
k__•1mo ago
Half-OT:

Does anyone know a good open source S3 alternative that's easily extendable with custom storage backends?

For example, AWS offers IA and Glacier in addition to the defaults.

onionjake•1mo ago
Storj supports arbitrary configured backends each with different erasure coding, node placement, etc.
yupyupyups•1mo ago
Garage is amazing! But it would be even more amazing if it had immutable object support. :)

This is used for ransomware resistant backups.

tenacious_tuna•1mo ago
Anyone know if it's possible to bandwidth-limit the sync operations? I'd love to set up Garage instances across my family's houses to act as a distributed backup, but I don't want to hose their (or my) down/uplink during waking hours. Having redundant self-hosted S3-like storage would solve many problems for me, but I really need that capability.
BOOSTERHIDROGEN•1mo ago
I use juicefs
ianopolous•1mo ago
@lxpz It would be great to do a follow up to this blog post with the latest Peergos. All the issues with baseline bandwidth and requests have gone away, even with federation on. The baseline is now 0, and even many locally initiated requests will be served directly from a Peergos cache without touching S3.

https://garagehq.deuxfleurs.fr/blog/2022-ipfs/

Let's talk!

PunchyHamster•1mo ago
For someone recently migrating from minio, caveats

* no lifecycle management of any kind - if you're using it for backups, you can't set "don't delete versions for 3 months", so if anyone gets hold of your key, your backups are gone. I relied on Minio's lifecycle management for that, but the feature is missing in Garage (and, to be fair, most other S3 implementations)

* no automatic mirroring (if you want to have second copy in something other than garage or just don't want to have a cluster but rather have more independent nodes)

* ACLs for access are VERY limited - you can't make a key that accesses only a sub-path, and you can't make a "master key" (AFAIK, couldn't find an option) that can access all the buckets, so the previous point is also harder: I can't easily use rclone to mirror the entire instance somewhere else unless I write a script iterating over buckets and adding them bucket by bucket to the key's ACL

* Web hosting features are extremely limited, so you won't be able to, say, set CORS headers for the bucket

* No ability to set keys - you can only generate one inside Garage or import a Garage-formatted one - which means you can't just migrate the storage itself; you have to re-generate every key. It also makes automation harder: with Minio you can pre-generate a key and feed it to clients and to the Minio key command, whereas here you have to do the dance of "generate with tool" -> "scrape and put into DB" -> "put onto clients".

Overall I like the software a lot, but if you have a setup that uses those features, beware.

coldtea•1mo ago
>no lifecycle management of any kind - if you're using it for backups you can't set "don't delete versions for 3 months", so if anyone takes hold of your key, you backups are gone

If someone gets a hold of your key, can't they also just change your backup deletion policy, even if it supported one?

PunchyHamster•1mo ago
> If someone gets a hold of your key, can't they also just change your backup deletion policy, even if it supported one?

Minio has full-on ACLs, so you can just create a key that can only write/read but not change any settings like that.

So you just need to keep the "master key" that you use for setup away from potentially vulnerable devices, the "backup key" doesn't need those permissions.

craigds•1mo ago
why did you migrate from Minio? does garage beat minio at something? the website is focussed on low resource requirements but I'm not clear on whether minio needs more resources or not
Oxodao•1mo ago
Minio is dying; they focus on enterprise stuff now, the web UI has been gone for a few months, and now they've changed the main repository to "maintenance mode".
JonChesterfield•1mo ago
I think this works. A subset of S3's API does look like a CRDT. Metadata can go in sqlite. Compiles to a static binary easily.

I've spent a mostly pleasant day seeing whether I can reasonably use Garage + rclone as a replacement for NFS, and the answer appears to be yes. Not really a recommended thing to do. Garage setup was trivial, somewhat reminiscent of WireGuard. Rclone setup was a nuisance; it accumulated a lot of arguments to get latency down, and I think the 1.6 in trixie is buggy.

Each node has rclone's fuse mount layer on it with garage as the backing store. Writes are slow and a bit async, debugging shows that to be wholly my fault for putting rclone in front of it. Reads are fast, whether pretending to be a filesystem or not.

Yep, I think I'm sold. There will be better use cases for this than replacing NFS. Thanks for sharing :)