
Tiny C Compiler

https://bellard.org/tcc/
137•guerrilla•4h ago•60 comments

Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

https://github.com/localgpt-app/localgpt
17•yi_wang•1h ago•3 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
220•valyala•9h ago•41 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
127•surprisetalk•8h ago•135 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
154•mellosouls•11h ago•312 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
893•klaussilveira•1d ago•272 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
49•gnufx•7h ago•51 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
145•vinhnx•12h ago•16 comments

Show HN: Craftplan – Elixir-based micro-ERP for small-scale manufacturers

https://puemos.github.io/craftplan/
13•deofoo•4d ago•1 comment

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
170•AlexeyBrin•14h ago•30 comments

FDA intends to take action against non-FDA-approved GLP-1 drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
82•randycupertino•4h ago•154 comments

First Proof

https://arxiv.org/abs/2602.05192
110•samasblack•11h ago•69 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
278•jesperordrup•19h ago•90 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
61•momciloo•8h ago•11 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
91•thelok•10h ago•20 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
31•mbitsnbites•3d ago•2 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
103•zdw•3d ago•52 comments

IBM Beam Spring: The Ultimate Retro Keyboard

https://www.rs-online.com/designspark/ibm-beam-spring-the-ultimate-retro-keyboard
3•rbanffy•4d ago•0 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
558•theblazehen•3d ago•206 comments

Eigen: Building a Workspace

https://reindernijhoff.net/2025/10/eigen-building-a-workspace/
8•todsacerdoti•4d ago•2 comments

Selection rather than prediction

https://voratiq.com/blog/selection-rather-than-prediction/
28•languid-photic•4d ago•9 comments

Microsoft account bugs locked me out of Notepad – Are thin clients ruining PCs?

https://www.windowscentral.com/microsoft/windows-11/windows-locked-me-out-of-notepad-is-the-thin-...
106•josephcsible•6h ago•127 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
263•1vuio0pswjnm7•15h ago•434 comments

I write games in C (yes, C) (2016)

https://jonathanwhiting.com/writing/blog/games_in_c/
175•valyala•8h ago•166 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
114•onurkanbkrc•13h ago•5 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
141•videotopia•4d ago•47 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
133•speckx•4d ago•209 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
222•limoce•4d ago•124 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
297•isitcontent•1d ago•39 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
578•todsacerdoti•1d ago•279 comments

Show HN: S3mini – Tiny and fast S3-compatible client, no-deps, edge-ready

https://github.com/good-lly/s3mini
261•neon_me•8mo ago

Comments

hsbauauvhabzb•8mo ago
I found the words used to describe this jarring - to me it makes sense to have an S3 client on my computer, but less so client-side in a webapp. On further reading, it makes sense, but highlighting what problem this package solves in the first few lines of the readme would be valuable, for people like me at least
willwade•8mo ago
I have a strong suspicion this was written with help from an LLM. The heavy use of emojis and hyper-confident language is the giveaway. Proof: my own repos look like this after they've had the touch of Cursor / Windsurf etc. Still, that doesn't take away from whether the code is useful or good.
gchamonlive•8mo ago
> to me it makes sense to have an s3 client on my computer, but less so client side on a webapp

What do you mean with a webapp?

neon_me•8mo ago
He expected it to be an S3 client for a desktop/local machine.
gchamonlive•8mo ago
It's a TypeScript client, it seems. While you can bundle it in a webapp, TypeScript applications go beyond just web applications, which is why I was confused.
neon_me•8mo ago
tbh - English is not my mother language, so I do get help with copy and typos ... but if it feels uncomfy, please feel free to open a PR - I want it to be as reasonable as possible
JimDabell•8mo ago
I think “for node and edge platforms” and “No browser support!” make this pretty clear? Those are in the title and first paragraph.
hsbauauvhabzb•8mo ago
I think if you asked the average IT person what those buzzwords mean, you'd find the answer unclear…
JimDabell•8mo ago
I was responding to this:

> to me it makes sense to have an s3 client on my computer, but less so client side on a webapp

The relevant audience in this situation is not the average IT person, but a person who might mistake this for client-side web app functionality.

If you think that something might run in the browser, then “no browser support!” is not complicated jargon that you won’t understand.

dev_l1x_be•8mo ago
for Node.

These are nice projects. I had a few rounds with Rust S3 libraries, and having a simple low- or no-dependency client is much needed. The problem is that you start to support certain features (async, HTTP/2, etc.) and your nice no-dep project starts to grow.

terhechte•8mo ago
I had the same issue recently and used https://crates.io/crates/rusty-s3
maxmcd•8mo ago
also: https://crates.io/crates/object_store
pier25•8mo ago
for JS

> It runs on Node, Bun, Cloudflare Workers, and other edge platforms

spott•8mo ago
But not in the browser… because it depends on node.js apis.
pier25•8mo ago
Cloudflare Workers don't use any Node APIs afaik
kentonv•8mo ago
Cloudflare Workers now has extensive Node API compatibility.
pier25•8mo ago
huh TIL!
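
For anyone else catching up: Workers opt into this with a compatibility flag in the project config. A minimal wrangler.toml sketch (the date here is illustrative):

    # wrangler.toml -- enable Node.js API compatibility on Workers
    compatibility_date = "2024-09-23"
    compatibility_flags = ["nodejs_compat"]
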
crabmusket•8mo ago
There's always https://github.com/mhart/aws4fetch/
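
For a sense of how small that is, a minimal aws4fetch sketch (credentials and bucket URL are placeholders):

    import { AwsClient } from "aws4fetch";

    // aws4fetch signs plain fetch() requests with SigV4.
    const aws = new AwsClient({
      accessKeyId: "AKIA...",       // placeholder
      secretAccessKey: "secret...", // placeholder
    });

    // A signed GET against any S3-compatible endpoint:
    const res = await aws.fetch(
      "https://my-bucket.s3.us-east-1.amazonaws.com/hello.txt",
    );
    console.log(res.status, await res.text());
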
everfrustrated•8mo ago
Presumably smaller and quicker because it's not doing any checksumming
neon_me•8mo ago
does it make sense or should that be optional?
tom1337•8mo ago
Checksumming does make sense because it ensures that the file you've transferred is complete and what was expected. If the checksum of the file you've downloaded differs from the one the server gave you, you should not process the file further and should throw an error (the worst case would probably be a man-in-the-middle attack; a not-so-bad case would be packet loss, I guess).
neon_me•8mo ago
yes, you are right!

On the other hand, S3 uses checksums only to verify the expected upload (on the write from client -> server) ... and surprisingly you can do that in parallel after the upload - by comparing the MD5 hash of the blob to the ETag (*with some caveats)

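A minimal sketch of that post-upload check, assuming a single-part, non-SSE-KMS upload where the ETag is the hex MD5 of the body (the helper is illustrative, not part of s3mini):

    import { createHash } from "node:crypto";

    // Verify a single-part upload by comparing the object's MD5 to the
    // ETag the server returned. Caveat (as noted above): the ETag is a
    // plain MD5 only for single-part, non-SSE-KMS uploads.
    function verifyUpload(body: Uint8Array, etag: string): boolean {
      const md5 = createHash("md5").update(body).digest("hex");
      return etag.replace(/"/g, "") === md5; // S3 quotes the ETag value
    }
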
supriyo-biswas•8mo ago
> checksumming does make sense because it ensures that the file you've transferred is complete and what was expected.

TCP has a checksum for packet loss, and TLS protects against MITM.

I've always found this aspect of S3's design questionable. Sending both a content-md5 AND a x-amz-content-sha256 header and taking up gobs of compute in the process, sheesh...

It's also part of the reason why running minio in its single node single drive mode is a resource hog.

dboreham•8mo ago
It's well known (apparently not?) that applications can't rely on TCP checksums.
alwyn•8mo ago
In my view one reason is to ensure integrity down the line. You want the checksum of a file to still be the same when you download it maybe years later. If it isn't, you get warned about it. Without the checksum, how will you know for sure? Keep your own database of checksums? :)
supriyo-biswas•8mo ago
If we're talking about bitrot protection, I'm pretty sure S3 would use some form of checksum (such as crc32 or xxhash) on each internal block to facilitate the Reed-Solomon process.

If it's about verifying whether it's the same file, you can use the ETag header, which is computed server-side by S3. Although I don't like this design, as it ossifies the checksum algorithm.

everfrustrated•8mo ago
You may be interested in this https://aws.amazon.com/blogs/aws/introducing-default-data-in...
lacop•8mo ago
I got some empirical data on this!

Effingo file copy service does application-layer strong checksums and detects about 4.5 corruptions per exabyte transferred (figure 9, section 6.2 in [1]).

This is on top of TCP checksums, transport layer checksums/encryption (gRPC), ECC RAM and other layers along the way.

Many of these could be traced back to a "broken" machine that was eventually taken out.

[1] https://dl.acm.org/doi/abs/10.1145/3651890.3672262

vbezhenar•8mo ago
TLS ensures that the stream was not altered. Any further checksums are redundant.
tom1337•8mo ago
That's true, but wouldn't it still be required if you have an internal S3 service which is used by internal services and does not have HTTPS (as it is not exposed to the public)? I get that the best practice would be to also use HTTPS there, but I'd guess that's not the norm?
vbezhenar•8mo ago
Theoretically TCP packets have checksums, but they're fairly weak. So for HTTP, additional checksums make sense. Although I'm not sure if there are any internal AWS S3 deployments working over HTTP, or why they would complicate their protocol for everyone to help such a niche use case.

I'm sure that they have reasons for this whole request-signature scheme over a traditional "Authorization: Bearer $token" header, but I never understood it.

formerly_proven•8mo ago
Because a bearer token is a bearer token to do any request, while a pre-signed request allows you to hand out the capability to perform _only that specific request_.
degamad•8mo ago
Bearer tokens have a defined scope, which could be used to limit functionality in a similar way to pre-signed requests.

However, the s3 pre-signed requests functionality was launched in 2011, but the Bearer token RFC 6750 wasn't standardised until 2012...

easton•8mo ago
AWS has a video about it somewhere, but in general, it’s because S3 was designed in a world where not all browsers/clients had HTTPS and it was a reasonably expensive operation to do the encryption (like, IE6 world). SigV4 (and its predecessors) are cheap and easy once you understand the code.

https://youtube.com/watch?v=tPr1AgGkvc4, about 10 minutes in I think.

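For the curious, the expensive-looking part of SigV4 is just a short HMAC chain. A sketch of the signing-key derivation (function names are illustrative):

    import { createHmac } from "node:crypto";

    const hmac = (key: Buffer | string, data: string): Buffer =>
      createHmac("sha256", key).update(data).digest();

    // SigV4 derives a per-day, per-region, per-service signing key:
    function signingKey(secret: string, date: string, region: string): Buffer {
      const kDate = hmac("AWS4" + secret, date); // date like "20260207"
      const kRegion = hmac(kDate, region);
      const kService = hmac(kRegion, "s3");
      return hmac(kService, "aws4_request");
    }

    // The request signature is then hmac(signingKey(...), stringToSign),
    // hex-encoded -- cheap enough to run anywhere.
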
huntaub•8mo ago
This is actually not the case. The TLS stream ensures that the packets transferred between your machine and S3 are not corrupted, but that doesn't protect against bit-flips which could (though, obviously, shouldn't) occur from within S3 itself. The benefit of an end-to-end checksum like this is that the S3 system can store it directly next to the data, validate it when it reads the data back (making sure that nothing has changed since your original PutObject), and then give it back to you on request (so that you can also validate it in your client). It's the only way for your client to have bullet-proof certainty of integrity the entire time that the data is in the system.
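
A rough sketch of that end-to-end flow using S3's full-object checksum headers (endpoint is a placeholder; request signing omitted):

    import { createHash } from "node:crypto";

    // Compute the checksum client-side and send it with the upload; S3
    // verifies it on write, stores it next to the data, and returns it
    // on later reads so the client can re-verify.
    async function putWithChecksum(body: Uint8Array): Promise<Response> {
      const sha256 = createHash("sha256").update(body).digest("base64");
      return fetch("https://my-bucket.example.com/key", { // placeholder URL
        method: "PUT",
        headers: { "x-amz-checksum-sha256": sha256 },
        body,
      });
    }
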
Spooky23•8mo ago
Not always. Lots of companies intercept and potentially modify TLS traffic between network boundaries.
0x1ceb00da•8mo ago
You need the checksum only if the file is big and you're downloading it to disk, or if you're paranoid that some malware with root access might be altering the contents of your memory.
arbll•8mo ago
I mean, if malware is root and altering your memory, it's not like you're in a position where this check is meaningful, haha
lazide•8mo ago
Or you really care about the data and are aware of the statistical inevitability of a bit flip somewhere along the line if you’re operating long enough.
nodesocket•8mo ago
Somewhat related, I just came across s5cmd[1] which is mainly focused on performance and fast upload/download and sync of s3 buckets.

> 32x faster than s3cmd and 12x faster than aws-cli. For downloads, s5cmd can saturate a 40Gbps link (~4.3 GB/s), whereas s3cmd and aws-cli can only reach 85 MB/s and 375 MB/s respectively.

[1] https://github.com/peak/s5cmd

rsync•8mo ago
s5cmd is built into the rsync.net platform. See:

https://news.ycombinator.com/item?id=44248372

uncircle•8mo ago
I prefer s5cmd as well because it has a better CLI interface than s3cmd, especially if you need to talk with non-AWS S3-compatible servers. It does few things and does them well, whereas s3cmd is a tool with a billion options, configuration files, badly documented env variables, and its default mode of operation assumes you are talking with AWS.
bradknowles•7mo ago
For the time I worked at AWS, pretty much everyone inside the company used s5cmd for its speed.

I think that speaks pretty highly of it.

tommoor•8mo ago
Interesting project, though it's a little amusing that you announced this before actually confirming it works with AWS?
neon_me•8mo ago
Personally, I don't like AWS that much. I tried to set it up, but found it "terribly tedious", dropped the idea, and instead focused on other platforms.

Right now, I am testing/configuring Ceph ... but it's open source! Every talented weirdo with free time is welcome to contribute!

leansensei•8mo ago
Also try out Garage.
zikani_03•8mo ago
Good to see this mentioned. We are considering running it for some things internally, along with Harbor. The advertised small resource footprint is compelling.

What's your experience running it?

yard2010•8mo ago
Tangentially related: Bun has a built-in S3-compatible client. Bun is a gift; if you're using npm, consider making the switch.
neon_me•8mo ago
Is there a way to wrap their S3 client for use in HonoJS/CF Workers?
oakesm9•8mo ago
No. It's implemented in native code (Zig) inside bun itself and just exposed to developers as a JavaScript API.

Source code: https://github.com/oven-sh/bun/tree/6ebad50543bf2c4107d4b4c2...

neon_me•8mo ago
10/10 Loving it (and how fast it is!) - it's just not the use case that fits my needs.

I want maximum ability to "move" my projects among services/vendors/providers

throawayonthe•8mo ago
I assume you can just use it [0] in your project and then build (using Bun) for CF Workers [1]

[0] https://bun.sh/docs/api/s3

[1] https://hono.dev/docs/getting-started/cloudflare-workers (https://bun.sh/docs/api/workers ?)

ChocolateGod•8mo ago
I tried to go the route of using Bun for everything (Bun.serve, Bun.s3, etc.), but was forced to switch back to Node.js proper and Express/aws-sdk due to Bun not fully implementing Node's APIs.
biorach•8mo ago
What were the most significant missing bits?
eknkc•8mo ago
The worst thing is issues without any visibility.

The other day I was toying with the MCP server (https://github.com/modelcontextprotocol/typescript-sdk). I default to Bun these days, and the HTTP-based server simply did not register in Claude or any other client. No error logs, nothing.

After fiddling with my code, I simply tried Node and it just worked.

zackify•8mo ago
It definitely works in Bun just fine. I have a production MCP server with auth built and running under Bun.

Now, if you convert the request/response types to the native Bun server, it can be finicky.

But it works fine using Express under Bun with the official protocol implementation for TypeScript.

I'm actually writing a book about this too and will be using Bun for it: https://leanpub.com/creatingmcpserverswithoauth

tengbretson•8mo ago
Not sure about the specific underlying apis, but as of my last attempt, Bun still doesn't support PDF.js (pdfjs-dist), ssh2, or playwright.
ChocolateGod•8mo ago
localAddress is unsupported on sockets, meaning you cannot specify an outgoing interface, which is useful if you have multiple network cards.
pier25•8mo ago
Providing built-in APIs so you don't have to rely on npm is one of the most interesting aspects of Bun, IMO.
greener_grass•8mo ago
Can someone explain the advantage of this?

If I want S3 access, I can just use NPM

If I don't want S3 access, I don't want it integrated into my runtime

pier25•8mo ago
Would you rather use an officially maintained solution or some random package by a random author who might abandon the project (or worse)?
greener_grass•8mo ago
The S3 packages on NPM are maintained by AWS
pier25•8mo ago
Indeed, but I was making a general point.

I'd be surprised if any of your Node projects had fewer than 100 total deps, of which a large number will be maintained by a single person.

See Express, for example: 66 total deps, with 26 of them relying on a single maintainer.

https://npmgraph.js.org/?q=express

But even in the case of the official aws-sdk, they recently deprecated v2. I now need to update all my not-so-old Node projects to work with the newer version. That probably wouldn't have happened if I had used Bun's S3 client.
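
For context, the v2 -> v3 change looks roughly like this (bucket/key are placeholders):

    // aws-sdk v2 (now deprecated):
    //   const s3 = new AWS.S3();
    //   await s3.getObject({ Bucket: "b", Key: "k" }).promise();

    // @aws-sdk/client-s3 (v3) uses modular command objects instead:
    import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

    const client = new S3Client({ region: "us-east-1" });
    const obj = await client.send(
      new GetObjectCommand({ Bucket: "b", Key: "k" }),
    );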

greener_grass•8mo ago
So let's put every package under the sun into the client?

This approach does not scale. We should make NPM better.

pier25•8mo ago
How do you make NPM better?

BTW I'm not saying we should kill NPM. What I'm saying is we should reduce our dependence on random packages.

Bun doesn't need to add everything into the core engine. Eg: when using .NET you still add plenty of official Microsoft dependencies from Nuget.

greener_grass•8mo ago
- NPM could migrate to reproducible builds of artefacts

- Trust could be opt-in by default

- Dependency installation could be made fully reproducible

zackify•8mo ago
I came here to say the same thing.

I'd rather ship oven/bun through Docker and have a 90 MB container vs using Node.

akouri•8mo ago
This is awesome! Been waiting for something like this to replace the bloated SDK Amazon provides. Important question: is there a pathway to getting signed URLs?
neon_me•8mo ago
For now, unfortunately, no - signed URLs are not supported. They weren't my focus (use case), but if you find a simple/minimalistic way to implement them, I can help you integrate it.

From my helicopter perspective, it adds extra complexity and size, which could maybe be ideal for a separate fork/project?

mannyv•8mo ago
Signed URLs are great because they let you give third parties access to a file without them having to authenticate against AWS.

Our primary use case is browser-based uploads. You don't want people uploading anything and everything, like the wordpress upload folder. And it's timed, so you don't have to worry about someone recycling the URL.

jmogly•8mo ago
I use presigned urls as part of a federation layer on top of an s3 bucket. Users make authenticated requests to my api which checks their permissions (if they have access to read/write to the specified slice of the s3 bucket), my api sends a presigned url back to allow read/write/delete to that specific portion of the bucket.
ecshafer•8mo ago
You can just use S3 via REST calls if you don't like their SDK.
nikeee•8mo ago
I've built an S3 client with goals similar to TFA's, but it supports pre-signing:

https://github.com/nikeee/lean-s3

Pre-signing is about 30 times faster than the AWS SDK and is not async.

You can read about why it looks like it does here: https://github.com/nikeee/lean-s3/blob/main/DESIGN_DECISIONS...

e1g•8mo ago
FYI, you can add browser support by using noble-hashes[1] for SHA256/HMAC - it's a well-done library, and gives you performance that is indistinguishable from native crypto on any scale relevant to S3 operations. We use it for our in-house S3 client.

[1] https://github.com/paulmillr/noble-hashes

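A sketch of the HMAC step with noble-hashes: it is synchronous, so it fits a sync presigning API (secret and date are placeholders; assuming noble-hashes v1 import paths):

    import { hmac } from "@noble/hashes/hmac";
    import { sha256 } from "@noble/hashes/sha256";

    // Sync, browser-friendly HMAC-SHA256 for the SigV4 key derivation.
    const enc = new TextEncoder();
    const kDate = hmac(
      sha256,
      enc.encode("AWS4" + "secret..."), // placeholder secret
      enc.encode("20260207"),           // placeholder date
    );
    // kDate is a Uint8Array; chain further hmac() calls for
    // region / service / "aws4_request".
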
continuational•8mo ago
SHA256 and HMAC are widely available in the browser APIs: https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypt...
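
For comparison, a minimal SubtleCrypto version of the same HMAC; note that both steps return Promises:

    const enc = new TextEncoder();
    const key = await crypto.subtle.importKey(
      "raw",
      enc.encode("AWS4" + "secret..."), // placeholder secret
      { name: "HMAC", hash: "SHA-256" },
      false,
      ["sign"],
    );
    const mac = await crypto.subtle.sign("HMAC", key, enc.encode("20260207"));
    // mac is an ArrayBuffer; there is no synchronous path to it.
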
e1g•8mo ago
SubtleCrypto is async, and the author specifically said they want their API to be sync.
shortformblog•8mo ago
This is good to have. A few months ago I was testing an S3 alternative but running into issues getting it to work. It turned out that AWS had made changes to the tool that had the effect of blocking non-first-party clients. Just sheer chance on my end, but I imagine that was infuriating for folks who have to rely on that client. There is an obvious need for a compatible client like this that AWS doesn't manage.
_1•8mo ago
Same as this https://github.com/minio/minio ?
carlio•8mo ago
minio is an S3-compatible object store; the linked s3mini is just a client for S3-compatible stores.
arbll•8mo ago
No this is an S3-compatible client, minio is an S3-compatible backend
prmoustache•8mo ago
The minio project provides both.
EGreg•8mo ago
You know what would be really awesome? A FUSE-based drop-in replacement for mapping a folder to a bucket, like goofys. Maybe a Node.js process could watch files and back them up, or even better, back the folder without actually taking up space on the local machine (except for a cache).

https://github.com/kahing/goofys

arbll•8mo ago
This seems completely unrelated to the goal of OP's library?
EGreg•8mo ago
It seems to be related to what a lot of people want, and it's low-hanging fruit now that he has this library!
TuningYourCode•8mo ago
You mean like https://github.com/s3fs-fuse/s3fs-fuse ? It's so old that even Debian has precompiled packages ;)
EGreg•8mo ago
I was talking about goofys because it is not POSIX compliant, so it's much faster than s3fs-fuse.

But either one can only work with S3. His library works with many other backends. Get it? I'm saying he should consider integrating with goofys!

cosmotic•8mo ago
> https://raw.githubusercontent.com/good-lly/s3mini/dev/perfor...

It gets slower as the instance gets faster? I'm looking at ops/sec and time/op. How am I misreading this?

xrendan•8mo ago
I read that as the size of the file it's transferring, so each operation would be bigger and therefore slower
math-ias•8mo ago
It measures PutObject[0] performance across different object sizes (1, 8, 100MiB)[1]. Seems to be an odd screenshot of text in the terminal.

[0] https://github.com/good-lly/s3mini/blob/30a751cc866855f783a1... [1] https://github.com/good-lly/s3mini/blob/30a751cc866855f783a1...

cosmotic•8mo ago
Oh, I see my mistake. Those are payload sizes, not instance sizes, in the heading for each table.
arianvanp•8mo ago
libcurl also has AWS auth with --aws-sigv4, which gives you a fully compatible S3 client without installing anything! (You probably already have curl installed.)
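
A sketch of what that looks like (region, bucket, and credentials are placeholders):

    # Signed GET against an S3-compatible endpoint with plain curl:
    curl --aws-sigv4 "aws:amz:us-east-1:s3" \
         --user "$ACCESS_KEY:$SECRET_KEY" \
         "https://my-bucket.s3.us-east-1.amazonaws.com/path/to/object"
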
impulser_•8mo ago
Yeah, but that will not work on Cloudflare, Vercel, or any other serverless environment, because at most you only have access to Node APIs.
leerob•8mo ago
Should work on Vercel, you have access to full Node.js APIs in functions.
busymom0•8mo ago
Does this allow generating signed URLs for uploads with size limit and name check?
brendanashworth•8mo ago
How does this compare to obstore? [1]

[1] https://developmentseed.org/obstore/latest/

linotype•8mo ago
This looks slick.

What I would also love to see is a simple, single-binary S3 server alternative to Minio. Maybe with a small built-in UI, similar to the DuckDB UI.

koito17•8mo ago
> What I would also love to see is a simple, single binary S3 server alternative to Minio

Garage[1] lacks a web UI but I believe it meets your requirements. It's an S3 implementation that compiles to a single static binary, and it's specifically designed for use cases where nodes do not necessarily have identical hardware (i.e. different CPUs, different RAM, different storage sizes, etc.). Overall, Garage is my go-to solution for object storage at "home server scale" and for quickly setting up a real S3 server.

There seems to be an unofficial Web UI[2] for Garage, but you're no longer running a single binary if you use this. Not as convenient as a built-in web UI.

[1] https://garagehq.deuxfleurs.fr/

[2] https://github.com/khairul169/garage-webui

dzonga•8mo ago
This looks dope.

But has anyone done a price comparison of edge computing vs, say, your boring Hetzner VPS?