> No polling. No heuristics. Just fast, kernel-level idle detection.
Isn't the `scaletozero-agent` daemon effectively polling eBPF map counters...?
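The userspace side of that pattern is typically just a loop over `bpf_map_lookup_elem` on a pinned map, something like this (a minimal sketch with libbpf; the pin path and counter layout are my guesses, not anything from the post):

```c
// Minimal sketch: a userspace loop reading a pinned eBPF counter map.
// The map path and key/value layout are assumptions, not the post's code.
#include <bpf/bpf.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    // Open a map the eBPF program pinned under bpffs (hypothetical path).
    int map_fd = bpf_obj_get("/sys/fs/bpf/pkt_count");
    if (map_fd < 0) { perror("bpf_obj_get"); return 1; }

    uint32_t key = 0;
    uint64_t prev = 0, now = 0;
    for (;;) {
        if (bpf_map_lookup_elem(map_fd, &key, &now) == 0 && now == prev) {
            // No packets since the last check: the instance looks idle.
            printf("idle\n");
        }
        prev = now;
        sleep(1);  // ...which is exactly the polling the question points at.
    }
}
```

So the claim is less "no polling" and more "polling a cheap kernel counter instead of the workload itself", which is still a real win.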
Does this include the RAM for the VM? For auto-idle systems like this, where to park the RAM tends to be a significant concern. If you don't "retire" the RAM too, the idling savings are limited to CPU cycles; but if you do, the overhead of moving RAM around can easily wreck any latency budget you may have.
Curious how you are dealing with it.
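For what it's worth, one way to "retire" guest RAM without a full snapshot-to-disk round trip is to have the VMM ask the kernel to reclaim the pages backing the guest mapping. A minimal sketch assuming a madvise-based approach (nothing in the post says this is what they do):

```c
// Hypothetical sketch: reclaim the pages backing a guest-memory mapping
// while the VM sleeps. MADV_PAGEOUT (Linux 5.4+) pushes the pages out to
// swap/zswap; MADV_COLD merely deprioritizes them. Faulting the pages
// back in on wake is exactly the latency cost in question.
#include <sys/mman.h>

int park_guest_ram(void *guest_mem, size_t len) {
    return madvise(guest_mem, len, MADV_PAGEOUT);
}
```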
If you don't even want to pay for that, though, scheduling unikernels on something like EC2 gets you your full VM, is cheaper, has more resources than Lambda, and doesn't have Lambda's various limitations (no GPU, execution timeouts, and so on).
I guess it depends on the workload: if you are snapshotting an already-loaded Python program, the time savings are huge, but for a program with fast startup, restoring a snapshot is probably slower than just starting it.
> Waking up instantly on real traffic without breaking clients
is this for new TCP connections? Or also for connections opened prior to sleep?
You got everything right. The advantages are:

- For the end user: not paying, or paying less
- For the hypervisor owner: a sleeping instance uses no CPU, so it reduces the load on the hypervisor
Other than that, it's still possible to oversubscribe, but you're right, we need to trump the scheduler. Another cool thing is that in the worst-case scenario, where a hypervisor gets full and is over capacity, sleeping instances are great candidates for eviction.
mjb•5mo ago
> But Firecracker comes with a few limitations, specifically around PCI passthrough and GPU virtualization, which prevented Firecracker from working with GPU Instances
Worth mentioning that Firecracker supports PCI passthrough as of 1.13.0. But that doesn't diminish the value of Cloud Hypervisor - it's really good to have multiple options in this space with different design goals (including QEMU, which has the most features).
> We use the sk_buff.mark field — a kernel-level metadata flag on packets — to tag health check traffic.
Clever!
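For anyone curious what that looks like on the counting side: a tc-attached eBPF program can read `skb->mark` directly, so marked health checks can be passed through without being counted. A sketch (the mark value, map name, and program shape are illustrative guesses, not the post's actual code):

```c
// Sketch of a tc ingress eBPF program that ignores marked health checks
// when counting "real" traffic. The mark value 0x1 and the map name are
// assumptions for illustration only.
#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

#define HEALTH_CHECK_MARK 0x1

struct {
    __uint(type, BPF_MAP_TYPE_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u64);
} pkt_count SEC(".maps");

SEC("tc")
int count_real_traffic(struct __sk_buff *skb) {
    if (skb->mark == HEALTH_CHECK_MARK)
        return TC_ACT_OK;  // health check: let it pass, but don't count it

    __u32 key = 0;
    __u64 *count = bpf_map_lookup_elem(&pkt_count, &key);
    if (count)
        __sync_fetch_and_add(count, 1);
    return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";
```

The mark itself would be set upstream, e.g. by an iptables/nftables `MARK` rule or `SO_MARK` on the health checker's socket.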
> Light Sleep, which reduces cold starts to around 200ms for CPU workloads.
If you're restoring on the same box, I suspect 200ms is significantly above the best you can do (unless your images are huge). Do you know what you're spending those 200ms doing? Is it just creating the VMM process and setting up KVM? Device and networking setup? I assume you're mmapping the snapshot of memory and loading it on demand, but I wouldn't expect anywhere near 200ms of page faults to handle a simple request.
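For reference, the on-demand loading I'm assuming is just a private file-backed mapping of the snapshot, where pages are read from disk only when the guest first touches them; a sketch, with path and size made up:

```c
// Sketch of lazily mapping a memory snapshot: MAP_PRIVATE over the
// snapshot file means pages are faulted in from disk on first guest
// access (copy-on-write) rather than read up front.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

void *map_snapshot(const char *path, size_t mem_size) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return MAP_FAILED;
    void *guest_mem = mmap(NULL, mem_size, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE, fd, 0);
    close(fd);  // the mapping keeps its own reference to the file
    return guest_mem;
}
```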
tuananh•5mo ago
I'm curious why it's taking so long to add support for different runtimes. I imagine it would be the same for all of them?
> where we clone and restore a snapshot of Postgres on every database connection
This is interesting. Were there any challenges while working on this?
deivid•5mo ago
I assume that every runtime must be forked to add such a signal right before calling into user code.
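If that's right, the per-runtime patch could be as small as a stop-the-world marker just before the user entrypoint. A purely hypothetical sketch of what I mean:

```c
// Purely hypothetical sketch of the kind of hook being guessed at:
// the patched runtime signals "initialization done" right before
// entering user code, so the platform can snapshot at that point.
// SIGSTOP is one option; a write to a known vsock/fd is another.
#include <signal.h>

extern int user_main(int argc, char **argv);  // the user's entrypoint

int main(int argc, char **argv) {
    /* ... runtime/interpreter initialization happens here ... */

    raise(SIGSTOP);  // parked; the snapshotter resumes us with SIGCONT

    return user_main(argc, argv);  // restored snapshots resume from here
}
```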
tuananh•5mo ago