Traditional platforms still charge per app, even if each app barely touches the CPU.
We redesigned scheduling: CPU is dynamically shared across your workloads based on real demand. You pay once for the resource, rather than multiple times for idle capacity.
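To make "shared based on real demand" concrete, here is a minimal sketch of one way such a split can work: max-min fair (water-filling) allocation of a single purchased CPU pool. This is an illustrative algorithm, not necessarily the one our scheduler uses; the function name, millicore units, and example numbers are all assumptions.

```python
# Hypothetical sketch: max-min fair (water-filling) split of one CPU pool.
# Not the platform's actual algorithm -- purely illustrative.

def fair_share(pool_millicores: int, demands: dict[str, int]) -> dict[str, int]:
    """Give each workload what it asks for when possible; when the pool is
    oversubscribed, cap heavy workloads so light ones are fully served."""
    alloc = {w: 0 for w in demands}
    remaining = pool_millicores
    active = {w: d for w, d in demands.items() if d > 0}
    while active and remaining > 0:
        share = remaining // len(active)
        if share == 0:
            break
        satisfied = [w for w, d in active.items() if d <= share]
        if not satisfied:
            # Nobody fits under an equal split: divide the rest evenly.
            for w in active:
                alloc[w] += share
                remaining -= share
            break
        for w in satisfied:
            alloc[w] += active[w]
            remaining -= active[w]
            del active[w]
        # Loop again: leftover CPU is redistributed to still-hungry workloads.
    return alloc

# One 4000m plan, three workloads: idle capacity flows to the busy one.
print(fair_share(4000, {"api": 500, "worker": 6000, "cron": 100}))
# -> {'api': 500, 'worker': 3400, 'cron': 100}
```

The point of the example: "api" and "cron" get exactly what they use, and the 3400m they are not using goes to "worker" instead of being billed twice.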
I'm happy to discuss technical details, scheduler design, and the challenges we encountered along the way.
ktaraszk•2h ago
2. We track CPU usage in real time across all workloads and maintain a global usage map.
3. Idle CPU from any app/node becomes available for re-purchase by other workloads in the same resource plan.
4. CPU limits can be adjusted on the fly without restarts, enabling real-time response to changing load.
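The steps above can be sketched as a single rebalancing pass. Everything here is an assumption for illustration: `apply_limit` stands in for whatever mechanism updates a live limit without a restart (e.g. a cgroup `cpu.max` write or an in-place Kubernetes resize), and `FLOOR` and the 90% busy threshold are invented tuning values.

```python
# Illustrative sketch of steps 2-4: sample usage into a global map, treat
# headroom (limit - usage) as reclaimable, and grant it to workloads pressing
# against their limit. apply_limit is an assumed hook, not a real API.

FLOOR = 100  # millicores every workload keeps even when idle (assumed policy)

def rebalance(usage: dict[str, int], limits: dict[str, int],
              apply_limit=lambda w, m: None) -> dict[str, int]:
    # Step 2: the "global usage map" is usage[] sampled across all workloads.
    idle = sum(limits[w] - usage[w] - FLOOR
               for w in limits if usage[w] + FLOOR < limits[w])
    hungry = [w for w in limits if usage[w] >= limits[w] * 0.9]  # assumed threshold
    new_limits = dict(limits)
    # Step 3: shrink idle workloads down toward usage + FLOOR.
    for w in limits:
        if usage[w] + FLOOR < limits[w]:
            new_limits[w] = usage[w] + FLOOR
    # Step 4: grant the reclaimed millicores to busy workloads -- no restarts.
    if hungry:
        grant = idle // len(hungry)
        for w in hungry:
            new_limits[w] += grant
    for w, m in new_limits.items():
        apply_limit(w, m)  # push the new limit to the live workload
    return new_limits

print(rebalance({"api": 50, "worker": 950}, {"api": 1000, "worker": 1000}))
# -> {'api': 150, 'worker': 1850}
```

Total allocated CPU stays constant; only its distribution changes, which is what lets one plan cover both workloads.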
If anyone wants to dive into topics like threshold algorithms, node assignment heuristics, or Kubernetes API interactions - I'm happy to dig into that.
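As a taste of the threshold side, one common pattern is hysteresis: scale a limit up immediately when utilization crosses a high watermark, but scale down only after utilization stays below a low watermark for several intervals, so short dips don't cause limit churn. The watermarks, multipliers, and hold count below are invented for illustration and are not our production values.

```python
# Hypothetical hysteresis controller for per-workload CPU limits.
# HIGH/LOW/HOLD and the 1.5x / 0.75x factors are assumed, not production values.

HIGH, LOW, HOLD = 0.85, 0.40, 3

class LimitController:
    def __init__(self, limit_millicores: int):
        self.limit = limit_millicores
        self.cool = 0  # consecutive low-utilization samples seen

    def observe(self, usage_millicores: int) -> int:
        util = usage_millicores / self.limit
        if util >= HIGH:
            self.cool = 0
            self.limit = int(self.limit * 1.5)   # scale up immediately
        elif util <= LOW:
            self.cool += 1
            if self.cool >= HOLD:                # scale down only after HOLD samples
                self.cool = 0
                self.limit = max(int(self.limit * 0.75), 100)
        else:
            self.cool = 0                        # mid-band: hold steady
        return self.limit
```

The asymmetry (fast up, slow down) is the interesting design choice: being slow to grant CPU hurts latency, while being slow to reclaim it only costs a bit of pool headroom.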