These are nice projects. I had a few rounds with Rust S3 libraries, and having a simple low- or no-dependency client is much needed. The problem is that you start to support certain features (async, HTTP/2, etc.) and your nice no-dep project starts to grow.
> It runs on Node, Bun, Cloudflare Workers, and other edge platforms
On the other hand, S3 uses checksums only to verify the expected upload (on the write from client -> server) ... and surprisingly you can do that in parallel after the upload, by checking the MD5 hash of the blob against the ETag (*with some caveats)
TCP has a checksum for packet loss, and TLS protects against MITM.
I've always found this aspect of S3's design questionable. Sending both a Content-MD5 AND an x-amz-content-sha256 header and taking up gobs of compute in the process, sheesh...
It's also part of the reason why running MinIO in its single-node, single-drive mode is a resource hog.
If it's verifying whether it's the same file, you can use the ETag header, which is computed server-side by S3. Although I don't like this design, as it ossifies the checksum algorithm.
Effingo file copy service does application-layer strong checksums and detects about 4.5 corruptions per exabyte transferred (figure 9, section 6.2 in [1]).
This is on top of TCP checksums, transport layer checksums/encryption (gRPC), ECC RAM and other layers along the way.
Many of these could be traced back to a "broken" machine that was eventually taken out.
I'm sure that they have reasons for this whole request signature scheme over traditional "Authorization: Bearer $token" header, but I never understood it.
However, the S3 pre-signed requests functionality was launched in 2011, while the Bearer token RFC (6750) wasn't standardised until 2012...
https://youtube.com/watch?v=tPr1AgGkvc4, about 10 minutes in I think.
> 32x faster than s3cmd and 12x faster than aws-cli. For downloads, s5cmd can saturate a 40Gbps link (~4.3 GB/s), whereas s3cmd and aws-cli can only reach 85 MB/s and 375 MB/s respectively.
Right now, I am testing/configuring Ceph ... but it's open source! Every talented weirdo with free time is welcome to contribute!
What's your experience running it?
Source code: https://github.com/oven-sh/bun/tree/6ebad50543bf2c4107d4b4c2...
I want maximum ability to "move" my projects among services/vendors/providers
The other day I was toying with the MCP server (https://github.com/modelcontextprotocol/typescript-sdk). I default to Bun these days, and the HTTP-based server simply did not register in Claude or any other client. No error logs, nothing.
After fiddling with my code I simply tried node and it just worked.
Now, if you convert the request/response types to the native Bun server, it can be finicky.
But it works fine using express under bun with the official protocol implementation for typescript.
Actually writing a book about this too and will be using bun for it https://leanpub.com/creatingmcpserverswithoauth
If I want S3 access, I can just use NPM
If I don't want S3 access, I don't want it integrated into my runtime
I'd be surprised if any of your Node projects had fewer than 100 total deps, of which a large number will be maintained by a single person.
See Express, for example: 66 total deps, with 26 of them relying on a single maintainer.
https://npmgraph.js.org/?q=express
But even in the case of the official aws-sdk they recently deprecated v2. I now need to update all my not-so-old Node projects to work with the newer version. Probably wouldn't have happened if I had used Bun's S3 client.
This approach does not scale. We should make NPM better.
BTW I'm not saying we should kill NPM. What I'm saying is we should reduce our dependence on random packages.
Bun doesn't need to add everything into the core engine. E.g., when using .NET you still add plenty of official Microsoft dependencies from NuGet.
- Trust could be opt-in by default
- Dependency installation could be made fully reproducible
I'd rather ship oven/bun through Docker and have a 90 MB container vs using Node.
From my helicopter perspective, it adds extra complexity and size, which could maybe be ideal for a separate fork/project?
Our primary use case is browser-based uploads. You don't want people uploading anything and everything, like the wordpress upload folder. And it's timed, so you don't have to worry about someone recycling the URL.
https://github.com/nikeee/lean-s3
Pre-signing is about 30 times faster than the AWS SDK and is not async.
You can read about why it looks like it does here: https://github.com/nikeee/lean-s3/blob/main/DESIGN_DECISIONS...
But either one can only work with S3. His library works with many other backends. Get it? I'm saying he should consider integrating with goofys!
It gets slower as the instance gets faster? I'm looking at ops/sec and time/op. How am I misreading this?
[0] https://github.com/good-lly/s3mini/blob/30a751cc866855f783a1... [1] https://github.com/good-lly/s3mini/blob/30a751cc866855f783a1...
What I would also love to see is a simple, single-binary S3 server alternative to MinIO. Maybe with a small built-in UI, similar to the DuckDB UI.
Garage[1] lacks a web UI but I believe it meets your requirements. It's an S3 implementation that compiles to a single static binary, and it's specifically designed for use cases where nodes do not necessarily have identical hardware (i.e. different CPUs, different RAM, different storage sizes, etc.). Overall, Garage is my go-to solution for object storage at "home server scale" and for quickly setting up a real S3 server.
There seems to be an unofficial Web UI[2] for Garage, but you're no longer running a single binary if you use this. Not as convenient as a built-in web UI.
But has anyone done a price comparison of edge computing vs, say, your boring Hetzner VPS?
What do you mean with a webapp?
> to me it makes sense to have an s3 client on my computer, but less so client side on a webapp
The relevant audience in this situation is not the average IT person, but a person who might mistake this for client-side web app functionality.
If you think that something might run in the browser, then “no browser support!” is not complicated jargon that you won’t understand.