I was using timers, and I was getting insanely different times for the same code, going anywhere from 0ms to 20ms without any obvious changes to the environment or anything.
I was banging my head against it for hours, until I realized that async code is weird. Async code isn’t directly “run”, it’s “scheduled” and the calling thread can yield until we get the result. By trying to do microbenchmarks, I wasn’t really testing “my code”, I was testing the .NET scheduler.
It was my first glimpse of why benchmarking is deceptively hard. I think about it every time I have to write performance tests.
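Not the original .NET code, but a rough C++ analogue of the same trap (the function names are placeholders): timing an async call measures thread-pool/scheduler overhead on top of the work itself.

```cpp
#include <chrono>
#include <cstdio>
#include <future>

static int work() { return 42; }  // trivially cheap "code under test"

int main() {
    using clock = std::chrono::steady_clock;
    using ns = std::chrono::nanoseconds;

    auto t0 = clock::now();
    int direct = work();                              // runs inline: nanoseconds
    auto t1 = clock::now();

    auto t2 = clock::now();
    auto fut = std::async(std::launch::async, work);  // handed off to another thread
    int scheduled = fut.get();                        // blocks until the scheduler gets to it
    auto t3 = clock::now();

    std::printf("direct:    %lld ns\n", (long long)std::chrono::duration_cast<ns>(t1 - t0).count());
    std::printf("scheduled: %lld ns\n", (long long)std::chrono::duration_cast<ns>(t3 - t2).count());
    return (direct + scheduled) & 0;  // use the results so the calls aren't optimized away
}
```

The "scheduled" number jumps around by orders of magnitude from run to run, exactly the 0ms-to-20ms behaviour described above.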
The other thing is that L1/L2 switches provide this functionality of timestamping traffic as it crosses the switch, which is the true test of e2e latency without any clock drift etc.
Also, fast code is actually really, really hard; you just have to create the right test harness once
...unless you're FAANG and can amortize the costs across your gigafleet
The number one mistake I see people make is measuring one time and taking the results at face value. If you do nothing else, measure three times and you will at least have a feeling for the variability of your data. If you want to compare two versions of your code with confidence there is usually no way around proper statistical analysis.
Which brings me to the second mistake. When measuring runtime, taking the mean is not a good idea. Runtime measurements usually skew heavily towards a theoretical minimum which is a hard lower bound. The distribution is heavily lopsided with a long tail. If your objective is to compare two versions of some code, the minimum is a much better measure than the mean.
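A minimal sketch of both points in C++ (the workload and iteration counts are placeholders): run the code under test many times, then compare min and mean; the long tail drags the mean up while the min sits near the hard lower bound.

```cpp
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <numeric>
#include <vector>

static void code_under_test() {
    volatile std::uint64_t sink = 0;
    for (int i = 0; i < 10000; ++i) sink += i;  // stand-in workload
}

int main() {
    using clock = std::chrono::steady_clock;
    std::vector<double> samples_ns;

    for (int run = 0; run < 100; ++run) {  // never measure just once
        auto t0 = clock::now();
        code_under_test();
        auto t1 = clock::now();
        samples_ns.push_back(std::chrono::duration<double, std::nano>(t1 - t0).count());
    }

    double mn   = *std::min_element(samples_ns.begin(), samples_ns.end());
    double mean = std::accumulate(samples_ns.begin(), samples_ns.end(), 0.0) / samples_ns.size();

    // The tail (scheduling, cache misses, interrupts) inflates the mean;
    // the min is close to the theoretical lower bound and far more stable across runs.
    std::printf("min:  %.0f ns\nmean: %.0f ns\n", mn, mean);
}
```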
You'll see this in any properly active online system. Back at a previous job we had to drill into teams that mean() was never an acceptable latency measurement. For that reason, the telemetry agent we used provided out-of-the-box p50 (median), p90, p95, p99, and max values for every timer measurement window.
The difference between p99 and max was an incredibly useful indicator of poor tail latency cases. After all, every one of those max figures was an occurrence of someone or something experiencing the long wait.
These days, if I had the pleasure of dealing with systems where individual nodes handled thousands of messages per second, I'd add p999 to the mix.
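A sketch of that kind of per-window reporting in C++ (nearest-rank percentiles; the actual telemetry agent's definition may differ, and the sample data is fabricated for illustration):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>

// Nearest-rank percentile on a sorted copy; fine for end-of-window reporting,
// too slow to call per sample on a hot path.
static double percentile(std::vector<double> samples, double p) {
    std::sort(samples.begin(), samples.end());
    std::size_t idx = static_cast<std::size_t>(p / 100.0 * (samples.size() - 1) + 0.5);
    return samples[idx];
}

static void report_window(const std::vector<double>& latencies_us) {
    if (latencies_us.empty()) return;
    std::printf("p50=%.1f p90=%.1f p95=%.1f p99=%.1f p99.9=%.1f max=%.1f (us)\n",
                percentile(latencies_us, 50), percentile(latencies_us, 90),
                percentile(latencies_us, 95), percentile(latencies_us, 99),
                percentile(latencies_us, 99.9),
                *std::max_element(latencies_us.begin(), latencies_us.end()));
}

int main() {
    std::vector<double> window;
    for (int i = 1; i <= 1000; ++i)
        window.push_back(i % 100 == 0 ? 900.0 : 50.0);  // fake samples with a long tail
    report_window(window);
}
```

The gap between p99/p99.9 and max in the output is exactly the "someone experienced the long wait" signal described above.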
The article states the opposite.
> Writing fast algorithmic trading system code is hard. Measuring it properly is even harder.
* mean/med/p99/p999/p9999/max over day, minute, second, 10ms
* software timestamps from the rdtsc counter for interval measurements - am17 says why below (see the sketch after this list)
* all of that not just on a timer - but also for each event - order triggered for send, cancel sent, etc - for ease of correlation to markouts.
* hw timestamps off some sort of port replicator that has under 3ns jitter - and a way to correlate to above.
* network card timestamps for similar - Solarflare cards (AMD now) support start-of-frame to start-of-Ethernet-frame measurements.
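For the rdtsc item above, a minimal x86-64 sketch for GCC/Clang; the TSC frequency constant is a placeholder that has to be calibrated per machine, and the measured interval is just a comment stand-in.

```cpp
#include <cstdint>
#include <cstdio>
#include <x86intrin.h>  // __rdtscp

static const double kTscGhz = 3.0;  // placeholder: calibrate against CLOCK_MONOTONIC on the target box

int main() {
    unsigned aux = 0;
    std::uint64_t start = __rdtscp(&aux);  // waits for prior instructions to retire before reading the TSC

    // ... the interval being measured, e.g. building and serializing an order ...

    std::uint64_t end = __rdtscp(&aux);
    std::printf("interval: %llu ticks (~%.0f ns at %.1f GHz)\n",
                (unsigned long long)(end - start), (end - start) / kTscGhz, kTscGhz);
}
```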
nine_k•9h ago
And measuring is hard. This is why consistently fast code is hard.
In any case, adding some crude performance testing to your CI/CD suite, and signaling a problem if a test runs much longer than it used to, is very helpful for quickly catching bad performance regressions.
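A crude sketch of that kind of CI guard in C++ (the baseline file name and the 1.5x threshold are arbitrary choices, not from the comment):

```cpp
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <fstream>

static void perf_test_body() {
    volatile long sink = 0;
    for (long i = 0; i < 1000000; ++i) sink += i;  // stand-in for the real scenario
}

int main() {
    using clock = std::chrono::steady_clock;

    double best_ms = 1e300;
    for (int i = 0; i < 10; ++i) {  // take the min over several runs, per the comments above
        auto t0 = clock::now();
        perf_test_body();
        auto t1 = clock::now();
        best_ms = std::min(best_ms, std::chrono::duration<double, std::milli>(t1 - t0).count());
    }

    double baseline_ms = 0.0;
    std::ifstream in("perf_baseline_ms.txt");
    if (!(in >> baseline_ms)) {
        std::ofstream("perf_baseline_ms.txt") << best_ms;  // first run: record a baseline
        return 0;
    }
    if (best_ms > baseline_ms * 1.5) {
        std::fprintf(stderr, "PERF REGRESSION: %.2f ms vs baseline %.2f ms\n", best_ms, baseline_ms);
        return 1;  // non-zero exit fails the CI job
    }
    return 0;
}
```

It won't catch small regressions, but it is cheap and flags the embarrassing 2x ones before they ship.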