I personally find the original code just as readable as the static lookup table, if not more so, and I don't need to count out the elements if I want to know how often it will retry.
But more importantly, changing the max retries is trivial in the original code and tedious in the static lookup table, especially for bigger changes.
Also, this is something you most likely want to make configurable to adapt to certain scenarios, which is not possible with a static table.
There are more reasons against it, but those are the main ones that jump out at me right away.
for range ExponentialBackoff(ctx) {
	err := request(ctx)
	if err == nil {
		return nil
	}
}
And if one of the call sites needs to configure the backoff, then you've got something like:

for range ExponentialBackoff(ctx, MaxAttempts(20), BaseDelay(5)) {
	err := request(ctx)
	if err == nil {
		return nil
	}
}
func getNewBackoff(oldBackoff int) int {
	newBackoff := oldBackoff * BACKOFF_CONSTANT
	if newBackoff > BACKOFF_CEILING {
		newBackoff = BACKOFF_CEILING
	}
	return newBackoff
}
If you bit-align the ceiling value, you can replace the conditional with a bitwise AND mask. If you use 2 as your exponent, you can replace the multiplication with a left shift.
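For illustration, a minimal Go sketch of both tricks in the style of Ethernet's truncated binary exponential backoff, where the window is a power of two (all names here are invented):

import (
	"math/rand"
	"time"
)

const maxWindowShift = 6 // bit-aligned ceiling: window tops out at 1<<6 = 64 base units

func backoffDelay(attempt int, base time.Duration) time.Duration {
	shift := attempt
	if shift > maxWindowShift {
		shift = maxWindowShift
	}
	window := 1 << shift              // doubling is a left shift, not a multiplication
	slot := rand.Int() & (window - 1) // AND mask bounds the draw without a range check
	return base * time.Duration(slot+1)
}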
It’s also a good idea to introduce jitter: don’t set the new backoff to the old backoff times a constant, but add a random component so the new value is randomized around the desired target. This helps prevent pile-ons when a bunch of clients all notice at about the same time that the server became unresponsive; if they all have the same backoff algorithm, they’re all going to hit the server again at about the same time. It’s good to spread that out to avoid overloading the server during restarts/cold starts/whatever.
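A minimal sketch of that jitter, reusing the hypothetical names from the snippet above (needs math/rand; assumes the target stays positive):

func getNewBackoffJittered(oldBackoff int) int {
	target := oldBackoff * BACKOFF_CONSTANT
	// uniform over [0.5*target, 1.5*target): randomized around the desired
	// new value instead of landing every client on exactly target
	newBackoff := target/2 + rand.Intn(target)
	if newBackoff > BACKOFF_CEILING {
		newBackoff = BACKOFF_CEILING
	}
	return newBackoff
}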
The best results I ever got were when I used server-side hinting to avoid pile-ons when the retries from the clients were all clustered around the same time.
When my server got into an overloaded state, the auth path for clients short-circuited and delivered an overloaded message with a randomized suggested retry value. The server can’t make the clients honor it (we could in my case because we wrote the clients), but if the clients do honor it, you can spread reconnects out over a period instead of making them all point-in-time, and dampen or prevent oscillating overloads.
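A sketch of what that hinting can look like on the wire, assuming plain HTTP; the handler and the serverOverloaded check are hypothetical, not the poster's actual system:

import (
	"math/rand"
	"net/http"
	"strconv"
)

func authHandler(w http.ResponseWriter, r *http.Request) {
	if serverOverloaded() { // hypothetical load check
		// randomize the hint so reconnects spread over a 30-90s window
		// instead of clustering at a single point in time
		w.Header().Set("Retry-After", strconv.Itoa(30+rand.Intn(61)))
		http.Error(w, "overloaded, retry later", http.StatusServiceUnavailable)
		return
	}
	// ... normal auth path ...
}

Clients that honor the hint sleep for the suggested duration before reconnecting; as noted, the server can't force compliance, but cooperating clients flatten the reconnect spike.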
I'm not sure how my example fares on a simplicity scale, but it feels a little more readable to me:
Retry(5, BackOff(time.Second, Linear(1.4), func(_ context.Context) { /* do stuff */ }))
Example cribbed from the documentation: https://pkg.go.dev/git.sr.ht/~mariusor/ssm#StrategyFn

The only thing (IMO) explicit retries are good for is to make client state more deterministic on the client, but there are many ways of doing that other than hard-coding retry intervals. For example, use asynchronous patterns for networking APIs and allow querying the state of API calls in progress.
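As a rough sketch of that asynchronous pattern (all types and names here are invented for illustration): the call returns a handle immediately, and the caller polls its state rather than wrapping a blocking call in hard-coded retries.

import (
	"context"
	"sync"
)

type CallState int

const (
	InProgress CallState = iota
	Succeeded
	Failed
)

type Handle struct {
	mu    sync.Mutex
	state CallState
	err   error
}

// State reports whether the call is still in flight, and its error if it failed.
func (h *Handle) State() (CallState, error) {
	h.mu.Lock()
	defer h.mu.Unlock()
	return h.state, h.err
}

func StartRequest(ctx context.Context) *Handle {
	h := &Handle{state: InProgress}
	go func() {
		err := request(ctx) // request as in the earlier snippets
		h.mu.Lock()
		defer h.mu.Unlock()
		if err != nil {
			h.state, h.err = Failed, err
		} else {
			h.state = Succeeded
		}
	}()
	return h
}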
Might be worth looking at. The current implementation is slightly awkward, but it also uses a random delay.
Watch/iPhone communication isn't very robust. It's much better in more recent Watches.
> Lots of people don’t want to read about AI.
> I respect that.
> But I’m currently steeped in the world of AI, for better or for worse, and I want to blog about it. So I’ve split this blog in half.
> The normal blog, which you are reading, is now AI-free.
> There’s another, at /ai that contains only AI posts. It has its own RSS feed.
Thank you. The non-AI one definitely goes into my RSS reader.
import (
	"context"
	"fmt"
	"math"
	"math/rand"
	"time"
)

func do(ctx context.Context) error {
	for attempt := 1; ; attempt++ {
		if request(ctx) == nil { // request returns nil on success (defined elsewhere)
			return nil
		}
		if attempt == 10 {
			return fmt.Errorf("failed after %d attempts", attempt)
		}
		// exponential backoff with +/-25% jitter, capped at 60 seconds
		// (min is the Go 1.21+ builtin)
		delay := time.Duration(float64(time.Second) *
			min(60, math.Pow(2, float64(attempt-1))*(0.75+rand.Float64()*0.5)))
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(delay):
		}
	}
}
I think the problem with the original isn’t using a formula, it’s over-parameterization. Declaring constants right next to where they’re used once isn’t giving you anything but the opportunity for naming issues and rigidity in how you combine them. If they aren’t passed in or repeated, then I wouldn’t pull them out; if they are, then I’d look for minimal and natural boundary points, like just min and max, or maybe an attempt → delay function. (Though sometimes your text editor and docs are good enough to help, I don’t like making those feel required.)
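A sketch of that attempt → delay boundary (names invented for illustration): the loop stays generic, and everything worth configuring lives in one small function the caller passes in.

import (
	"context"
	"time"
)

func retry(ctx context.Context, attempts int, delay func(attempt int) time.Duration, fn func(context.Context) error) error {
	var err error
	for a := 1; a <= attempts; a++ {
		if err = fn(ctx); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(delay(a)): // caller decides the schedule
		}
	}
	return err
}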
I like how Python (and probably many other languages) allows for using the name of a positional arg. So you could call the exact same function but write it as retry(count=3, min_ms=1_000, max_ms=2_500) if you feel like it needs more clarity. Now you’ve got documented values that are tied to the function signature, their names have been decided for you, and they’re inline!
Using magic numbers all over the place is the mark of a novice who doesn't understand that the meaning behind the numbers, clear in his own mind, will not magically transfer to other people reading the code. Advising this approach is an interesting call.
Specifically here, we've got these at the beginning of the function:
const (
	maxAttempts = 10
	baseDelay   = 1 * time.Second
	maxDelay    = 60 * time.Second
)
If I went out of my way to look at this function, these are probably the values I'm interested in changing, and look, they're right here, at the top. Neat. It's obvious how to adjust the values to get what I want. I can stop reading the function right there.

Without these declarations, how would I know that the "10" midway through the function is the number of attempts? I'd have to read and understand the entire logic first. Great. Now multiply this effort by the number of programmers on the team and the number of magic numbers.
I thought, based on the title, that it would be about different approaches to backoff.
"This code, or something like it, probably looks really familiar:"
Better leave that company, to be honest.
this() and that() or other() and fail();
So this all seems very reasonable, and clear, and yet people who read the code hated it. I've tried many times to do it in JS; sadly, it does not work:
a && b || do { console.log (err) }
But this all does not make it easier to read; it only makes it very likely that you'll outsmart yourself.
Some code is algorithmically more beautiful than other code, more declarative perhaps. But moving away from the common understanding of trivial stuff does not always benefit the group or the project.