How the economics of multitenancy work

https://www.blacksmith.sh/blog/the-economics-of-operating-a-ci-cloud
79•tsaifu•3h ago

Comments

shadowgovt•2h ago
Interesting writeup. I wonder what this looks like from the customer side; one downside I've observed with some serverless setups in the past is that they can introduce up-front latency as the system spins up capacity to handle your spike. I know the CI consensus seems to be that latency matters little in a process that's going to take a long time to run to completion anyway... but I'm also a developer of CI, and that latency is painful during a tight-loop development cycle.

(The good news is that if the spikes are regular, a sufficiently-advanced serverless can "prime the pump" and prep-and-launch instances into surplus compute before the spike since historical data suggests the spike is coming).

aayushshah15•2h ago
> one downside I've observed with some serverless in the past is that it can introduce up-front latency delays as the system spins up support to handle your spike

[cofounder of blacksmith here]

This is exactly one of the symptoms of running CI on traditional hyperscalers we're setting out to solve. The fundamental requirement for CI is that each job requires its own fresh VM (which is unlike traditional serverless workloads like lambdas). To provision an EC2 instance for a CI job:

- you're contending against general on-demand production workloads (which have a particular demand curve based on, say, the time of day). This can typically imply high variance in instance provisioning times.

- since AWS/GCP/Azure hand spare capacity out as spot instances with a guaranteed pre-emption warning, you're also waiting for those pre-emption windows to expire before a VM can be handed to you!
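
A minimal sketch of what that provisioning wait looks like from the caller's side, using boto3; the region, AMI ID, and instance type below are placeholders, not anything Blacksmith actually uses:

```python
import time
import boto3  # assumes AWS credentials are already configured in the environment

ec2 = boto3.client("ec2", region_name="us-east-1")

start = time.monotonic()

# Request a single on-demand instance for one CI job (placeholder AMI/type).
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical runner image
    InstanceType="c5.2xlarge",
    MinCount=1,
    MaxCount=1,
)
instance_id = resp["Instances"][0]["InstanceId"]

# Block until EC2 reports the instance as running; this wait is the variable,
# demand-dependent part of the job's startup latency.
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

print(f"instance {instance_id} running after {time.monotonic() - start:.1f}s")
```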

shadowgovt•1h ago
Excellent! I did some work in the past on prediction of behavior given past data, and I can tell you two things we learned:

- there are low-frequency and high-frequency effects (so you can make predictions based on last week, for example, but those predictions fall flat if the company rushes launches at the EOQ or takes the last couple of weeks in December off).

- you can try to capture those low-frequency effects, but in practice we consistently found that end-user comprehension beat a high-fidelity model, and users were just not going to learn an idea like "you can generate any wave by summing two other waves." The feedback we got was that users consistently preferred the predictive model to be a very dumb "the next four weeks look like the past four weeks" plus an explicit slider to flag "Christmas is coming: we anticipate our load to be 10% of normal" (which can simultaneously tune the prediction for Christmas and drop Christmas as an outlier when making future predictions). When they set the slider wrong they'd get wrong predictions, but they were wrong predictions that were "their fault" and that they understood; they preferred wrong predictions they could understand to less-wrong predictions that required thinking about Fourier analysis to understand.
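
A toy version of that "dumb" model, assuming weekly load totals and an explicit user-set multiplier; the names and numbers here are illustrative, not from any real system:

```python
from statistics import mean

def predict_next_weeks(history, weeks_ahead=4, window=4,
                       user_multiplier=1.0, flagged_outliers=frozenset()):
    """Predict the next `weeks_ahead` weekly loads as the mean of the last
    `window` non-flagged weeks, scaled by an explicit user-set multiplier
    (the "Christmas is coming" slider)."""
    # Drop weeks the user explicitly flagged (e.g. holiday weeks) so they
    # don't pollute the baseline used for future predictions.
    usable = [load for week, load in history if week not in flagged_outliers]
    baseline = mean(usable[-window:])
    return [baseline * user_multiplier] * weeks_ahead

# history: (week_label, total_load) pairs
history = [("w49", 950), ("w50", 1010), ("w51", 980), ("w52", 1020)]
print(predict_next_weeks(history, user_multiplier=0.1))  # "expect 10% of normal"
```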

0xbadcafebee•2h ago
tl;dr for this particular case it's bin packing

other business cases have economics where multitenancy has (almost) nothing to do with "efficient computing", and more to do with other efficiencies, like human costs, organizational costs, and (like the other post linked in the article) functional efficiencies
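
For the bin-packing framing, a minimal first-fit-decreasing sketch (job sizes in vCPUs and host capacity are made up):

```python
def first_fit_decreasing(job_vcpus, host_capacity):
    """Pack jobs onto as few hosts as possible: sort descending, then place
    each job on the first host with room, opening a new host if none fits."""
    hosts = []  # each host is a list of job sizes
    for job in sorted(job_vcpus, reverse=True):
        for host in hosts:
            if sum(host) + job <= host_capacity:
                host.append(job)
                break
        else:
            hosts.append([job])
    return hosts

jobs = [8, 4, 4, 2, 2, 2, 1, 1]  # vCPUs requested per CI job
print(first_fit_decreasing(jobs, host_capacity=16))
# [[8, 4, 4], [2, 2, 2, 1, 1]] -> 2 hosts instead of 8 single-job VMs
```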

zdc1•1h ago
One thing I'd love to see is dynamic CPU allocation, or otherwise something similar to Jenkins' concept of a flyweight runner. Certain pipelines can spend minutes to hours using zero CPU, just polling for completion (e.g. CloudFormation, hosted E2E tests, etc.). In these cases I'd be charged for 2 vCPUs but use almost nothing.

Otherwise, customers are stuck with the same sizing/packing/utilisation problems. And imagine being the CI vendor in this world: you know which pipeline steps use what resources on average (and at the p99), and with that information you could oversubscribe customer jobs so that you sell 20 vCPUs but schedule them on 10 physical vCPUs. 200% utilisation, baby!
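
A rough sketch of that oversubscription arithmetic from the vendor's side, assuming you've measured nominal and p99 vCPU usage per step (all figures invented):

```python
def oversubscription_plan(steps, headroom=1.2):
    """Given per-step (nominal_vcpus, p99_vcpus) measurements, estimate how many
    physical vCPUs are needed to co-schedule them all with some headroom,
    versus the nominal vCPUs being sold."""
    sold = sum(nominal for nominal, _ in steps)
    needed = sum(p99 for _, p99 in steps) * headroom
    return sold, needed, sold / needed

# (nominal vCPUs sold, observed p99 vCPU usage) per concurrently running step
steps = [(2, 0.1), (2, 0.2), (2, 1.8), (2, 0.1), (2, 0.3),
         (2, 0.1), (2, 1.5), (2, 0.2), (2, 0.1), (2, 0.6)]
sold, needed, ratio = oversubscription_plan(steps)
print(f"sold {sold} vCPUs, need ~{needed:.1f} physical -> {ratio:.1f}x oversubscribed")
```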

arccy•1h ago
i think cloudflare workers does this
coolcase•22m ago
Was thinking about this exact thing today. Where I work, taking X services out of their own scaling sets and packing them together into a Kubernetes cluster (or similar tech) should "smooth out" the spikes relatively, reducing wastage and also the need to scale. This is on cloud, so there's no fixed-hardware concern, but even then it helps with reserved instances, discounts, and keeping costs down generally. This was intuition, but I might do the maths on it now, inspired by this.
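
One way to put numbers on that: compare the sum of each service's individual peak against the peak of their combined load. The spiky loads below are simulated, so the figures are purely illustrative:

```python
import random

random.seed(0)

# Simulate hourly CPU demand for a service: a low base load with occasional spikes.
def spiky_load(hours=24 * 7, base=2.0, spike=20.0, spike_prob=0.05):
    return [base + (spike if random.random() < spike_prob else 0.0) for _ in range(hours)]

services = [spiky_load() for _ in range(10)]

sum_of_peaks = sum(max(s) for s in services)   # capacity if each service scales alone
combined = [sum(hour) for hour in zip(*services)]
peak_of_sum = max(combined)                    # capacity if packed into one shared cluster

print(f"separate scaling sets: {sum_of_peaks:.0f} CPUs; shared cluster: {peak_of_sum:.0f} CPUs")
```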
Havoc•18m ago
Surprised they're doing fixed leases. I would have thought a fixed base with a layer of spot-priced VMs for peaks would be more efficient on cost.
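
A back-of-the-envelope comparison of the two strategies, with entirely made-up prices and a made-up daily load curve:

```python
# Hourly demand in VMs over a day (made up): steady base with a daytime peak.
demand = [10] * 8 + [25] * 8 + [10] * 8

RESERVED_RATE = 0.05   # $/VM-hour, hypothetical fixed-lease price
SPOT_RATE = 0.03       # $/VM-hour, hypothetical spot price

# Option A: lease enough fixed capacity to cover the peak all day.
fixed_only = max(demand) * len(demand) * RESERVED_RATE

# Option B: fixed base sized to the trough, spot VMs layered on for the peak.
base = min(demand)
base_plus_spot = (base * len(demand) * RESERVED_RATE
                  + sum(max(d - base, 0) for d in demand) * SPOT_RATE)

print(f"fixed fleet: ${fixed_only:.2f}/day  vs  base+spot: ${base_plus_spot:.2f}/day")
```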
mlhpdx•11m ago
Everyone doing multi-tenant SaaS wants cost to be a sub-linear function of usage. This model of large unit capacity divided by small work units is an example of how to get there. The tough bit is that it’s stepwise at low volumes, and becomes linear at large scale, so it’s only magic during the growth phase — which is pretty solid for a growth phase company showing numbers for the next raise.
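
A tiny illustration of that cost curve, assuming capacity is bought in large fixed units (all numbers arbitrary):

```python
import math

UNIT_CAPACITY = 1000   # work units one leased machine can absorb
UNIT_COST = 500        # cost per leased machine

def cost(usage):
    """Stepwise cost: you pay for whole machines, however full they are."""
    return max(1, math.ceil(usage / UNIT_CAPACITY)) * UNIT_COST

# Per-unit cost falls while you grow into each step, then flattens out (linear) at scale.
for usage in [100, 600, 1000, 5000, 50000]:
    print(f"usage={usage:>6}  cost={cost(usage):>6}  cost/unit={cost(usage) / usage:.2f}")
```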