The assumption that latencies are lognormal is a useful approximation but not really true. In reality you will see a lot of multi-modality (e.g. cache hits vs misses, internal timeouts). Requests for the same key can have correlated latency.
However, I would assume there is some dependence between requests?
E.g. if a node holding a copy of the object is down and traffic needs to be re-routed to a slower node, then regardless of how many requests I send, the latency will still be high?
(I am genuinely curious if this is the case.)
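A rough sketch of both points, with made-up numbers: latencies drawn from a two-mode mixture (fast cache hits, slow misses), comparing a single request against a hedged pair, once with independent replicas and once fully correlated (the re-routed-to-the-same-slow-node case).

```python
# Illustrative only: the 90/10 hit/miss split and the ~5 ms / ~80 ms modes
# are invented for the sketch.
import random

def one_latency(rng: random.Random) -> float:
    if rng.random() < 0.9:
        return rng.lognormvariate(1.6, 0.3)   # ~5 ms: cache hit
    return rng.lognormvariate(4.4, 0.3)       # ~80 ms: cache miss / slow path

def pctl(xs, p):
    xs = sorted(xs)
    return xs[int(p / 100 * (len(xs) - 1))]

rng = random.Random(42)
N = 100_000
single = [one_latency(rng) for _ in range(N)]
# Hedged read against two independent replicas: keep the faster answer.
hedged_indep = [min(one_latency(rng), one_latency(rng)) for _ in range(N)]
# Fully correlated case (both requests hit the same slow node): the two
# samples are identical, so hedging changes nothing.
hedged_corr = [min(x, x) for x in (one_latency(rng) for _ in range(N))]

for name, xs in (("single", single),
                 ("hedged, independent", hedged_indep),
                 ("hedged, correlated", hedged_corr)):
    print(f"{name:20s} p50={pctl(xs, 50):6.1f} ms   p99={pctl(xs, 99):6.1f} ms")
```

With independent replicas the p99 collapses toward the fast mode; with perfectly correlated latency, hedging buys nothing, which is why the advice quoted below about endpoints sharing as little infrastructure as possible matters.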
> Try different endpoints. Depending on your setup, you may be able to hit different servers serving the same data. The less infrastructure they share with each other, the more likely it is that their latency won’t correlate.
So while you could get unlucky and be routed to the same bad node / bad rack, in reality it is quite unlikely.
And while the testing here is simulated, this is a technique that is used with success.
Source: working on these sorts of systems
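For what it's worth, a minimal sketch of that fan-out/hedged read using only the Python standard library; the replica URLs are hypothetical:

```python
# Fan the same GET out to several replicas and return whichever answers first.
import concurrent.futures as cf
import urllib.request

ENDPOINTS = [
    "https://az1.example.internal/object/key123",  # placeholder replicas
    "https://az2.example.internal/object/key123",
    "https://az3.example.internal/object/key123",
]

def fetch(url: str) -> bytes:
    with urllib.request.urlopen(url, timeout=2.0) as resp:
        return resp.read()

def hedged_get(urls) -> bytes:
    pool = cf.ThreadPoolExecutor(max_workers=len(urls))
    futures = [pool.submit(fetch, u) for u in urls]
    try:
        for fut in cf.as_completed(futures):
            try:
                return fut.result()   # first replica to answer wins
            except Exception:
                continue              # that replica failed; wait for another
        raise RuntimeError("all replicas failed")
    finally:
        # Don't block on the stragglers (cancel_futures needs Python 3.9+).
        pool.shutdown(wait=False, cancel_futures=True)
```

The obvious trade-off is load: naive fan-out multiplies backend traffic, which is why hedging is often done by sending the backup request only after the first attempt has exceeded, say, the p95 latency.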
But the reality is that as large enterprises move to the cloud while still needing lots of different data systems, it is really hard not to play the cloud game. Buying bare metal and using Direct Connect to AWS seems like a reasonable solution... but it will add years to your timeline to sell to any large company.
So instead, you work within the constraints the CSPs impose, and in AWS that means guaranteeing durability across zones, which at scale means either huge cross-AZ network costs or offloading it to S3.
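To put rough numbers on that, a back-of-the-envelope using the commonly cited ~$0.01/GB per direction for cross-AZ traffic (so ~$0.02/GB counted on both sides; check current pricing) and an invented ingest rate:

```python
# Back-of-the-envelope cross-AZ replication cost; all inputs are assumptions.
write_mb_per_s = 100      # hypothetical ingest rate
extra_replicas = 2        # copies shipped to the other two AZs
usd_per_gb = 0.02         # ~$0.01/GB each direction, billed on both sides

gb_per_month = write_mb_per_s / 1024 * extra_replicas * 86_400 * 30
print(f"~{gb_per_month:,.0f} GB/month cross-AZ, "
      f"~${gb_per_month * usd_per_gb:,.0f}/month in replication traffic alone")
```

That works out to roughly $10k/month for a modest 100 MB/s of writes, which is the pressure toward letting S3 (where cross-AZ durability is bundled into the service) handle replication instead.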
You would think this massive cloud would remove constraints, and in some ways that is true, but in others you are even more constrained because you don't directly own any of it and are at the whims of the unit costs of 30 AWS teams.
But it is also kind of fun
> Over the past 19 years (S3 was launched on March 14th 2006, as the first public AWS service), object storage has become the gold standard for storing large amounts of data in the cloud.
While it’s true that S3 is the gold standard, it was not the first AWS service; that was SQS, launched in 2004.
This is the source Wikipedia uses: https://web.archive.org/web/20041217191947/http://aws.typepa...
*I’m using “query optimizer” rather broadly here. I know S3 isn’t a DBMS.
jmull•4h ago
I would dig into that. This might (or might not) be something you can address more directly.
That's not really an "organic" pattern, so I'd guess some retry/routing/robustness mechanism is not working the way it should. And, it might (or might not) be one you have control over and can fix.
To dig in, I might look at what's going on at the packet/ack level.
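As a lighter-weight first step before a full packet capture, one sketch (hostname and path are placeholders) is to time each phase of a single request and see where the slow ones actually spend their time:

```python
# Time DNS, TCP connect, TLS handshake, and time-to-first-byte for one request.
import socket, ssl, time

host, path = "example.com", "/"

t0 = time.perf_counter()
ip, port = socket.getaddrinfo(host, 443, family=socket.AF_INET,
                              proto=socket.IPPROTO_TCP)[0][4]
t_dns = time.perf_counter()

sock = socket.create_connection((ip, port), timeout=5)
t_conn = time.perf_counter()

tls = ssl.create_default_context().wrap_socket(sock, server_hostname=host)
t_tls = time.perf_counter()

request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
tls.sendall(request.encode())
tls.recv(1)                      # block until the first response byte arrives
t_ttfb = time.perf_counter()
tls.close()

for label, a, b in (("dns", t0, t_dns), ("tcp connect", t_dns, t_conn),
                    ("tls handshake", t_conn, t_tls), ("first byte", t_tls, t_ttfb)):
    print(f"{label:14s} {(b - a) * 1000:7.1f} ms")
```

If the slowness shows up in the connect or handshake phases, retransmissions on a specific path become a likely suspect, and a targeted capture (e.g. tcpdump filtered to that host and port) is the natural next step.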
jmull•19m ago
But that’s not what they really are.
If you’re optimizing or troubleshooting, it’s usually better to look at what’s actually happening. Certainly before implementing a fix. You really want to understand what you’re fixing, or you’re kind of doing a rain dance.