I wrote this after seeing cases where instances were technically “up” but clearly not serving traffic correctly.
The article explores how client-side and server-side load balancing differ in failure detection speed, consistency, and operational complexity.
I’d love input from people who’ve operated service meshes, Envoy/HAProxy setups, or large distributed fleets — particularly around edge cases and scaling tradeoffs.
Noumenon72•20m ago
Thanks for writing something that's accessible to someone who's only used Nginx server-side load balancing and didn't know client-side load balancing existed at higher scale.
AuthAuth•1h ago
It seems like passive is the best option here, but can someone explain why one real request must fail? If the load balancer is monitoring for failed requests and sees one, can't it just forward the original request to another backend?
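(For context, passive health checking means the balancer only learns a backend is bad by watching real traffic fail. A minimal sketch of consecutive-failure ejection — thresholds and addresses are illustrative, not from the article:)

```python
# Passive health checking: no probe traffic, so some real requests must
# fail before the backend is ejected. Threshold is illustrative.
from collections import defaultdict

FAILURE_THRESHOLD = 3
failures = defaultdict(int)
healthy = {"10.0.0.1", "10.0.0.2"}

def record_result(backend: str, ok: bool) -> None:
    if ok:
        failures[backend] = 0          # any success resets the count
    else:
        failures[backend] += 1
        if failures[backend] >= FAILURE_THRESHOLD:
            healthy.discard(backend)   # eject: future requests skip it

for _ in range(3):
    record_result("10.0.0.2", ok=False)  # three real requests had to fail first
print(sorted(healthy))  # ['10.0.0.1']
```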
jayd16•1h ago
Not every request is idempotent, and it's not known when or why a request has failed. GETs are OK (in theory), but you can't retry a POST without risking side effects.
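(A sketch of what a proxy-side retry policy along these lines might look like — the names and attempt cap are illustrative; the method classification follows RFC 9110 semantics:)

```python
# Only replay methods defined as idempotent; a failed POST may already
# have had side effects (double charge, duplicate upload), so never retry it.
IDEMPOTENT = {"GET", "HEAD", "PUT", "DELETE", "OPTIONS"}

def should_retry(method: str, attempts: int, max_attempts: int = 2) -> bool:
    return method.upper() in IDEMPOTENT and attempts < max_attempts

print(should_retry("GET", 1))   # True: safe to send to another backend
print(should_retry("POST", 1))  # False: the failure may have had side effects
```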
cormacrelf•1h ago
For GET /, sure, and some mature load balancers can do this. For POST /upload_video, no. You'd have to store all in-flight requests, either in memory or on disk, in case you need to replay the entire thing against a different backend. Not a very good tradeoff.
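(A sketch of that tradeoff: to replay a request elsewhere, the proxy has to hold the whole body. Real proxies typically cap the buffer and disable retries beyond it; the cap and names here are illustrative:)

```python
# Buffer a request body for possible replay, up to a cap. Past the cap,
# stream it through and give up on retrying (e.g. a large video upload).
MAX_BUFFER = 1 << 20  # 1 MiB, illustrative

def buffer_for_replay(body_chunks):
    """Return the buffered body, or None if it exceeds the cap (no replay)."""
    buf = bytearray()
    for chunk in body_chunks:
        buf.extend(chunk)
        if len(buf) > MAX_BUFFER:
            return None  # too large to hold: this request is not retryable
    return bytes(buf)

print(buffer_for_replay([b"small", b"form"]))         # b'smallform' -- replayable
print(buffer_for_replay([b"x" * (2 << 20)]) is None)  # True -- too large to hold
```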
dotwaffle•14m ago
I've never quite understood why there couldn't be a standardised "reverse" HTTP connection, from server to load balancer, over which requests are balanced — standardised so that some kind of health signalling could be included for easy/safe draining of connections.