
Open Chaos: A self-evolving open-source project

https://www.openchaos.dev/
232•stefanvdw1•5h ago•43 comments

Show HN: I used Claude Code to discover connections between 100 books

https://trails.pieterma.es/
48•pmaze•4h ago•10 comments

AI is a business model stress test

https://dri.es/ai-is-a-business-model-stress-test
86•amarsahinovic•4h ago•123 comments

A Eulogy for Dark Sky, a Data Visualization Masterpiece (2023)

https://nightingaledvs.com/dark-sky-weather-data-viz/
318•skadamat•9h ago•140 comments

Finding and Fixing Ghostty's Largest Memory Leak

https://mitchellh.com/writing/ghostty-memory-leak-fix
52•thorel•2h ago•8 comments

Rats caught on camera hunting flying bats

https://scienceclock.com/rats-caught-on-camera-hunting-flying-bats-for-the-first-time/
38•akg130522•2h ago•6 comments

Worst of Breed Software

https://worstofbreed.net/
9•facundo_olano•55m ago•1 comment

ASCII-Driven Development

https://medium.com/@calufa/ascii-driven-development-850f66661351
53•_hfqa•2d ago•30 comments

I replaced Windows with Linux and everything's going great

https://www.theverge.com/tech/858910/linux-diary-gaming-desktop
387•rorylawless•6h ago•319 comments

ChatGPT Health is a marketplace, guess who is the product?

https://consciousdigital.org/chatgpt-health-is-a-marketplace-guess-who-is-the-product/
171•yoaviram•2d ago•189 comments

New information extracted from Snowden PDFs through metadata version analysis

https://libroot.org/posts/going-through-snowden-documents-part-4/
239•libroot•10h ago•109 comments

Bichon: A lightweight, high-performance Rust email archiver with WebUI

https://github.com/rustmailer/bichon
35•rendx•2h ago•15 comments

Side-by-side comparison of how AI models answer moral dilemmas

https://civai.org/p/ai-values
38•jesenator•1d ago•29 comments

Org Mode Syntax Is One of the Most Reasonable Markup Languages to Use for Text

https://karl-voit.at/2017/09/23/orgmode-as-markup-only/
194•adityaathalye•12h ago•153 comments

UpCodes (YC S17) is hiring PMs, SWEs to automate construction compliance

https://up.codes/careers?utm_source=HN
1•Old_Thrashbarg•4h ago

Bindless Oriented Graphics Programming

https://alextardif.com/BindlessProgramming.html
19•ibobev•3d ago•0 comments

Distributed Denial of Secrets

https://ddosecrets.com/
37•sabakhoj•2d ago•9 comments

Drones that recharge directly on transmission lines

https://www.ycombinator.com/companies/voltair
122•alphabetatango•4h ago•90 comments

NASA announces unprecedented return of sick ISS astronaut and crew

https://www.livescience.com/space/space-exploration/nasa-cancels-spacewalk-and-considers-early-cr...
62•bookofjoe•7h ago•51 comments

How we made v0 an effective coding agent

https://vercel.com/blog/how-we-made-v0-an-effective-coding-agent
22•MaxLeiter•2d ago•7 comments

UK government exempting itself from cyber law inspires little confidence

https://www.theregister.com/2026/01/10/csr_bill_analysis/
267•DyslexicAtheist•7h ago•52 comments

“Erdos problem #728 was solved more or less autonomously by AI”

https://mathstodon.xyz/@tao/115855840223258103
587•cod1r•23h ago•331 comments

Httpz – Zero-Allocation HTTP/1.1 Parser for OxCaml

https://github.com/avsm/httpz
62•noelwelsh•3d ago•16 comments

GPU memory snapshots: sub-second startup (2025)

https://modal.com/blog/gpu-mem-snapshots
13•jxmorris12•2d ago•4 comments

Sinclair C5

https://en.wikipedia.org/wiki/Sinclair_C5
15•jszymborski•1h ago•3 comments

O-Ring Automation

https://www.nber.org/papers/w34639
22•jandrewrogers•5d ago•9 comments

Allow me to introduce, the Citroen C15

https://eupolicy.social/@jmaris/115860595238097654
636•colinprince•10h ago•436 comments

Changes to Android Open Source Project

https://source.android.com/
272•TechTechTech•3d ago•175 comments

Good Judgment Open

https://www.gjopen.com
11•kaycebasques•2d ago•2 comments

Reverse Engineering the Epson FilmScan 200 for Classic Mac

https://ronangaillard.github.io/posts/reverse-engineering-epson-filmscan-200/
86•j_leboulanger•1w ago•7 comments

Achieving lower latencies with S3 object storage

https://spiraldb.com/post/so-you-want-to-use-object-storage
31•znpy•8mo ago

Comments

jmull•8mo ago
> Roughly speaking, the latency of systems like object storage tend to have a lognormal distribution

I would dig into that. This might (or might not) be something you can do something about more directly.

That's not really an "organic" pattern, so I'd guess some retry/routing/robustness mechanism is not working the way it should. And, it might (or might not) be one you have control over and can fix.

To dig in, I might look at what's going on at the packet/ack level.

nkmnz•8mo ago
I don't know what you mean by the word "organic", but I think lognormal distributions are very common and intuitive: whenever the true generative mechanism is “lots of tiny, independent percentage effects piling up,” you’ll see a log‑normal pattern.
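
A quick simulation of that generative mechanism (illustrative numbers only, nothing measured from a real system): multiplying a base latency by many small, independent percentage effects makes the log of the result a sum of many small independent terms, which the central limit theorem pushes toward a normal distribution, i.e. the latency itself toward log-normal.

```python
import math
import random

random.seed(0)

def sample_latency(base_ms=10.0, n_effects=50):
    # Each hop/queue/cache layer nudges latency by a tiny percentage.
    latency = base_ms
    for _ in range(n_effects):
        latency *= 1.0 + random.uniform(-0.05, 0.05)
    return latency

samples = [sample_latency() for _ in range(10_000)]
logs = [math.log(s) for s in samples]
mu = sum(logs) / len(logs)
sigma = math.sqrt(sum((x - mu) ** 2 for x in logs) / len(logs))
# log(latency) comes out approximately normal(mu, sigma),
# so latency itself is approximately log-normal.
```
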
jmull•8mo ago
You can think of a network generally as a bunch of uniform nodes with uniform connections each with a random chance of failure, as a useful first approximation.

But that’s not what they really are.

If you’re optimizing or troubleshooting it’s usually better to look at what’s actually happening. Certainly before implementing a fix. You really want to understand what you’re fixing, or you’re kind of doing a rain dance.

pyfon•8mo ago
How do you do that for an abstract service like S3? I see how you could do that for your own machines.
anorwell•8mo ago
The article posts a table of latency distributions, but the latencies are simulated based on the assumption that they are lognormal. I would be interested in an article comparing the simulation to actual measurements.

The assumption that latencies are lognormal is a useful approximation but not really true. In reality you will see a lot of multi-modality (e.g. cache hits vs misses, internal timeouts). Requests for the same key can have correlated latency.
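
As a toy illustration of that multi-modality (numbers invented, not S3 measurements): mixing a fast "cache hit" mode with a slow "cache miss / internal timeout" mode produces a distribution no single log-normal can describe.

```python
import math
import random

random.seed(1)

def sample_latency_ms(hit_rate=0.8):
    # Fast path (e.g. cache hit) vs slow path (e.g. miss or internal retry).
    if random.random() < hit_rate:
        return random.lognormvariate(math.log(15), 0.2)
    return random.lognormvariate(math.log(120), 0.3)

samples = sorted(sample_latency_ms() for _ in range(10_000))
p50, p99 = samples[5_000], samples[9_900]
# p50 sits in the fast mode and p99 in the slow mode; the gap between
# them is far wider than a single log-normal fit would predict.
```
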

MasterIdiot•8mo ago
I think the distribution he uses is pretty close to the paper he links "Exploiting Cloud Object Storage for High-Performance Analytics" https://www.durner.dev/app/media/papers/anyblob-vldb23.pdf
tossandthrow•8mo ago
The hedging strategies all seem to assume that latency for an object is an independent variable.

However, I would assume dependency?

E.g. if a node holding a copy of the object is down and traffic needs to be re-routed to a slower node, then regardless of how many requests I send, the latency will still be high?

(I am genuinely curious whether this is the case)

n_u•8mo ago
It’s not addressed directly, but I do think the article implies you hope your request latencies are not correlated. It provides a strategy for helping to achieve that:

> Try different endpoints. Depending on your setup, you may be able to hit different servers serving the same data. The less infrastructure they share with each other, the more likely it is that their latency won’t correlate.
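
The simplest form of that hedging can be sketched like so (assumptions: `fetch()` is a made-up stand-in for an S3 GET, the latency distribution is simulated, and the hedge deadline would in practice be your observed p95 rather than a constant):

```python
import concurrent.futures as cf
import random
import time

pool = cf.ThreadPoolExecutor(max_workers=8)

def fetch(key):
    # Hypothetical stand-in for an S3 GET with heavy-tailed latency.
    time.sleep(random.lognormvariate(-4, 1))
    return f"data:{key}"

def hedged_fetch(key, hedge_after_s=0.05):
    # Fire the request; if it blows past the hedge deadline, fire a
    # duplicate (ideally against a different endpoint, so the two
    # latencies are less correlated) and take whichever lands first.
    futures = [pool.submit(fetch, key)]
    done, _ = cf.wait(futures, timeout=hedge_after_s)
    if not done:
        futures.append(pool.submit(fetch, key))
        done, _ = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
    return done.pop().result()

result = hedged_fetch("my-object")
```
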

addisonj•8mo ago
S3's scale is quite massive, with each object spread across a large number of nodes via erasure coding.

So while you could get unlucky and be routed to the same bad node / bad rack, the reality is that it is quite unlikely.

And while the testing here is simulated, this is a technique that is used with success.

Source: working on these sort of systems

jmpman•8mo ago
Lots of areas left for exploration.
up2isomorphism•8mo ago
S3 is a bad choice if you need low latency to begin with.
mannyv•8mo ago
They have both SSD and platter-based storage now, so that's not a true statement anymore.
up2isomorphism•8mo ago
The problem of S3 latency was never about HDD or SSD to begin with.

This is a big problem with the so-called modern “data pipeline”: public cloud providers will say anything and a lot of people will believe it.

mannyv•8mo ago
No, sorry.
sgarland•8mo ago
Network-based storage is a bad choice if you need low latency, period. You’re not going to beat data locality.
UltraSane•8mo ago
It is kind of crazy how much work is done to mitigate the very high latency of S3 when we have NVMe SSDs with access latencies of microseconds.
addisonj•8mo ago
Yeah, engineering high-scale distributed data systems on top of the cloud providers is a very weird thing at times.

But the reality is that as large enterprises move to the cloud, yet still need lots of different data systems, it is really hard not to play the cloud game. Buying bare metal and Direct Connect with AWS seems a reasonable solution... But it will add years to your timeline to sell to any large companies.

So instead, you work within the constraints the CSPs have, and in AWS that means guaranteeing durability cross-zone, and at scale that means either huge cross-AZ network costs or offloading it to S3.

You would think this massive cloud would remove constraints, and in some ways that is true, but in others you are even more constrained because you don't directly own any of it and are at the whims of the unit costs of 30 AWS teams.

But it is also kind of fun

UltraSane•8mo ago
If cross-AZ bandwidth were more reasonably priced, it would enable a lot of design options, like running something like MinIO on nothing but directly attached NVMe instance store volumes.
jen20•8mo ago
The very first sentence of this article contains an error:

> Over the past 19 years (S3 was launched on March 14th 2006, as the first public AWS service), object storage has become the gold standard for storing large amounts of data in the cloud.

While it’s true that S3 is the gold standard, it was not the first AWS service, which was in fact SQS in 2004.

hermanradtke•8mo ago
I thought S3 was first as well.

This is the source Wikipedia uses: https://web.archive.org/web/20041217191947/http://aws.typepa...

adam_gs•8mo ago
author here - took that quote from this[1] blog post by an AWS VP/distinguished engineer, the use of "public service" might have some loosely defined meaning in this context.

[1] https://www.allthingsdistributed.com/2025/03/in-s3-simplicit...

jen20•8mo ago
Interesting source - looks like it means “GA” service, rather than “public” per se. The SQS beta was also available to the public.
n_u•8mo ago
What I’ve always been curious about is if you can help the S3 query optimizer* in any way to use specialized optimizations. For example if you indicate the data is immutable[1] does the lack of a write path allow further optimization under the hood? Replicas could in theory serve requests without coordination.

*I’m using “query optimizer” rather broadly here. I know S3 isn’t a DBMS.

[1] https://aws.amazon.com/blogs/storage/protecting-data-with-am...