- Know the global state of your GPU cluster via the client.
- Target the most struggling GPU instances specifically since the client decides which one to hit.
You offer a free tier, which means anyone can get an account and try exactly that (e.g. one "harmless, mostly inactive" free account whose only purpose is retrieving GPU cluster status, plus a bunch of burner accounts to overload the struggling instances).
I may be completely wrong, but this sounds like a DDoS served on a silver platter to me.
It would indeed be very strange to hope that your random users coordinate with your client-side load balancer. You wouldn't even have to send real traffic: you could just manipulate Redis directly to force all the real traffic onto a single node. DoSing Redis itself is also pretty easy.
So no, I don't think they run these clients themselves. If the code runs out there, it's open to inspection.
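To make the "manipulate Redis directly" point concrete, here is a minimal sketch of what that could look like, assuming the router reads per-instance load from keys that any client can write. The key names and the 0-1 load scale are made up for illustration, not from the post:

```python
# Illustrative only: if routing state lives in Redis keys that clients can
# write, a single malicious client can skew it. Key names are hypothetical.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def poison_load_state(victim: str, instances: list[str]) -> None:
    """Report every instance as saturated except the victim, so
    least-loaded routing funnels all real traffic onto one node."""
    for inst in instances:
        load = 0.0 if inst == victim else 1.0  # 0 = idle, 1 = saturated
        r.set(f"gpu:load:{inst}", load)

poison_load_state("gpu-node-03", ["gpu-node-01", "gpu-node-02", "gpu-node-03"])
```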
I would probably have approached this by implementing a fix for the misbehaving part of k8s, though since there isn't a default LoadBalancer implementation in k8s, I can't really speculate further as to the root cause of the initial problem. But most CNI plugins or cloud providers that implement an LB do have a way to take feedback from an external metric. I'd be curious why doing it that way wasn't considered, at least.
https://brooker.co.za/blog/2012/01/17/two-random.html https://medium.com/the-intuition-project/load-balancing-the-...
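For reference, the two-random-choices idea from the first link is tiny to implement. A sketch, assuming you already have some per-instance load metric (`get_load` is a placeholder for whatever source you use):

```python
import random

def pick_instance(instances: list[str], get_load) -> str:
    """Power of two random choices: sample two instances at random
    and take the less loaded one. Needs at least two instances."""
    a, b = random.sample(instances, 2)
    return a if get_load(a) <= get_load(b) else b
```

It also avoids the stampede you get when every client chases the same global minimum at once.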
lneiman•2mo ago
I built a scrappy client-side router using Redis and Lua to track real-time GPU load. It boosted utilization by ~40% and improved latencies.
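Roughly, the selection step looks like this (a simplified sketch, not the exact production code; the key layout and the reserve/release protocol here are just illustrative):

```python
# Simplified sketch: atomically pick the least-loaded instance and bump its
# in-flight counter with a Lua script, so two clients can't both grab the
# same "idle" node. Key names ("gpu:inflight:*") are illustrative.
import redis

r = redis.Redis(decode_responses=True)

PICK_LEAST_LOADED = r.register_script("""
local best_key, best_load
for _, key in ipairs(KEYS) do
    local load = tonumber(redis.call('GET', key) or '0')
    if best_load == nil or load < best_load then
        best_key, best_load = key, load
    end
end
redis.call('INCR', best_key)  -- reserve a slot on the winner
return best_key
""")

def route(instances: list[str]) -> str:
    """Return the name of the instance the next request should go to."""
    keys = [f"gpu:inflight:{name}" for name in instances]
    chosen = PICK_LEAST_LOADED(keys=keys, args=[])
    return chosen.removeprefix("gpu:inflight:")

def release(instance: str) -> None:
    """Call once the request completes so the counter drops again."""
    r.decr(f"gpu:inflight:{instance}")
```

In practice you also want TTLs or periodic reconciliation so a crashed client doesn't leak counters.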
Happy to hear feedback on the implementation or thoughts on better ways to do this!
pbrumm•2mo ago