other business cases have economics where multitenancy has (almost) nothing to do with "efficient computing", and more to do with other efficiencies, like human costs, organizational costs, and (like the other post linked in the article) functional efficiencies
Otherwise, the customers are stuck with the same sizing/packing/utilisation problems. And imagine being the CI vendor in this world: you know which pipeline steps use what resources on average (and at the p99), and with that information you could oversubscribe customer jobs so that you sell 20 vCPUs but schedule them on 10 physical vCPUs. 200% utilisation, baby!
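To put rough numbers on that oversubscription point, here is a toy calculation; the host size and the per-job p99 figure are made up for illustration.

```python
# Toy oversubscription math (all numbers invented for illustration).
# If each "8 vCPU" job only touches ~3 vCPUs at the p99, the vendor can pack
# far more jobs per host than the sold vCPU count suggests.

HOST_VCPUS = 64          # physical vCPUs on the host
SOLD_VCPUS_PER_JOB = 8   # what the customer pays for
P99_VCPUS_PER_JOB = 3    # what jobs actually use at the 99th percentile

jobs_per_host = HOST_VCPUS // P99_VCPUS_PER_JOB    # pack by observed usage
sold_vcpus = jobs_per_host * SOLD_VCPUS_PER_JOB    # vCPUs "sold" on this host

print(f"{jobs_per_host} jobs/host, {sold_vcpus} vCPUs sold on {HOST_VCPUS} physical "
      f"-> {sold_vcpus / HOST_VCPUS:.0%} 'utilisation'")
```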
But at the end of that project I realized that all this work could have been done on the CI agents if only they had more compute on them. My little cluster was still almost the size the build agent pool tended to be. If I could convince them to double or quadruple the instance size on the CI pipeline, I could turn these machines off entirely, which would be a lower total cost at 2x and only about a 30% increase at 4x, especially since some builds would go faster, resulting in less autoscaling.
So if one other team could also eliminate a similar service, it would be a huge win. I unfortunately did not get to finish that thought due to yet another round of layoffs.
To illustrate: a 128 GB RAM, 20-core server with a 10 Gbps NIC and some small SSD storage is probably going to cost you <$2000 USD for a year's rental.
If that works out to the same price as keeping compute at literally your peak requirement level round the clock, then something is very wrong somewhere. Maybe that issue is not in-house at Blacksmith - perhaps spot pricing is a joke... but something there doesn't check out.
Loads of companies do scaling with much less predictable patterns.
>risk of slowdowns
Yeah, you probably do want the scaling to be super conservative... but a -80% fluctuation is a comically large gap to not actively scale.
>To illustrate
A better view, I'd say, is: that chart looks like ~4.5 peak. So you're paying for 730 hours of peak capacity and using all of it for about 90 hours.
Given that they wrote a blog post about this topic, they probably have a good reason for doing it this way. It just doesn't really make sense to me based on the given info.
The m7i.4xlarge spot price on AWS right now is $0.39/hour, whereas renting the server is about half that per hour.
>whereas renting the server is about half that per hour.
If you're at capacity for only 90 out of 730 hours a month, then paying 2x for spot to cover those peaks is a slam dunk.
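A back-of-the-envelope version of that argument, using the prices quoted in this thread; the fleet sizes are made up, and the ~80% trough comes from the chart being discussed.

```python
# Always-on peak fleet vs. baseline fleet + spot burst (illustrative numbers).
HOURS_PER_MONTH = 730
PEAK_HOURS = 90              # hours per month actually spent at peak
SPOT = 0.39                  # $/hr, m7i.4xlarge spot price quoted above
DEDICATED = SPOT / 2         # rented bare metal is "about half that per hour"
PEAK_UNITS = 10              # peak fleet size in m7i.4xlarge-equivalents (made up)
BASELINE_UNITS = 2           # ~20% of peak, i.e. the -80% trough

# Option A: rent the whole peak fleet round the clock.
always_on = PEAK_UNITS * DEDICATED * HOURS_PER_MONTH

# Option B: rent only the baseline and cover the 90 peak hours with spot,
# even though spot costs ~2x per hour.
burst = (BASELINE_UNITS * DEDICATED * HOURS_PER_MONTH
         + (PEAK_UNITS - BASELINE_UNITS) * SPOT * PEAK_HOURS)

print(f"always-on: ${always_on:,.0f}/mo  vs  baseline + spot burst: ${burst:,.0f}/mo")
# roughly $1,420/mo vs $570/mo for the same peak capability
```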
Another thing to note is that we bootstrap the hosts, and tune them a decent amount, to support certain high-performance features, which takes time and makes control + fixed-term ownership desirable.
[disclaimer: i work at blacksmith]
It looks like everything old is new again.
Hyperscaler rack designs definitely blur this line further. In some ways I think Oxide is trying to reinvent the mainframe, in a world where the suppliers got too much leverage and started getting uppity.
There are different takes on it; that's just mine. I really appreciate and respect their work.
That’s a mainframe, sport. At their height they were modular and in at least IBM’s case they could run with damaged parts and were delivered with dark hardware that could replace damaged parts until a maintenance person could arrive, or be remotely enabled to increase throughput for a fee.
All of this cloud stuff, except the geographical redundancy parts, is recreating software that business had versions of forty years ago.
(While the rack is the unit of delivery overall, we can ship individual sleds and they’re operator replaceable, if say, one of your sleds dies, incidentally.)
The old machine was about desk height and 15 feet wide. The new machine was a 4u box running a Unisys mainframe emulator on NT 4, on a quad processor pentium pro. Pretty sad. But they did keep the giant line printer, at least while I was working there.
Occasionally, the new term is warranted, of course, but that's far less common than simply trying to appear different.
> Slaps cryptocurrency sticker in the 2019-2021 era
Gets $1 million in funding.
> In 2025, slaps AI-powered sticker
Gets $10 million in funding.
But it's still a CRUD app nonetheless.
I know it sounds like a really over-the-line example, but I am sure there are cases like this where the same thing gets some new terms and a lot more funding; that is, there is an incentive to put on new stickers.
The goal is not to appear different; the goal is probably profit, which they can get with better funding, I suppose, and they get better funding by slapping on stickers.
(I'm reading into this further than it needs to go for fun, primarily)
Resourcing dynamically is also difficult because you don't actually know upfront how many resources your CI needs.
Scaling on traffic and resource demand gives us an increased average utilization rate for the hardware we pay for, especially when peaks are short-lived (an hour out of 24, for example).
If I were them, I would be looking at renting from bargain-bin hosting providers like Hetzner or OVH to run this on. The great thing is that Hetzner also has a large pool of racked servers that you can tap into.
You are basically going to re-implement Hetzner at a smaller (and probably worse) scale by creating your own multitenant mini cloud for running these CI jobs.
Free advice: set up a giant Kubernetes cluster on Hetzner/OVH, use the gVisor runtime for isolation, submit CI workloads as k8s Jobs, give the Jobs different priority classes based on job urgency and/or some sort of credit system, and jobs will naturally be executed/preempted based on priority.
There you go, that is the product using nearly 100% existing and open source software.
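A minimal sketch of what submitting one of those jobs could look like with the official kubernetes Python client. It assumes the cluster already has a RuntimeClass named "gvisor" (runsc) and a PriorityClass named "ci-high"; the namespace, image, and resource numbers are placeholders.

```python
from kubernetes import client, config


def submit_ci_job(job_name: str, image: str, cmd: list[str]) -> None:
    config.load_kube_config()  # or config.load_incluster_config() in-cluster

    pod_spec = client.V1PodSpec(
        restart_policy="Never",
        runtime_class_name="gvisor",    # gVisor sandbox for isolation
        priority_class_name="ci-high",  # drives preemption between CI tiers
        containers=[
            client.V1Container(
                name="ci",
                image=image,
                command=cmd,
                resources=client.V1ResourceRequirements(
                    requests={"cpu": "4", "memory": "8Gi"},
                    limits={"cpu": "8", "memory": "16Gi"},
                ),
            )
        ],
    )

    job = client.V1Job(
        metadata=client.V1ObjectMeta(name=job_name),
        spec=client.V1JobSpec(
            backoff_limit=0,                  # CI jobs shouldn't silently retry
            ttl_seconds_after_finished=3600,  # garbage-collect finished jobs
            template=client.V1PodTemplateSpec(spec=pod_spec),
        ),
    )

    client.BatchV1Api().create_namespaced_job(namespace="ci", body=job)


submit_ci_job("build-1234", "ghcr.io/example/ci-runner:latest", ["make", "test"])
```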
I think we can rent Hetzner VMs on a per-hour basis, or maybe we can't, but I do know there are services (Linode, I guess?) that use a per-second model.
Combine that with, I think, automatic installation of act, and you pay per second of CI use.
Plus points if we can use CRIU to scale from lower-end machines to higher-end machines depending on the task, while continuing the task from where it left off.
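A very rough sketch of that CRIU idea, for the curious: checkpoint a running build on a small box, copy the image directory to a bigger box, and resume it there. The paths, host name, and process ID handling are placeholders, and real builds (open sockets, containers, differing kernels) need far more care than this.

```python
import os
import subprocess

IMAGES_DIR = "/tmp/ci-checkpoint"   # where CRIU writes the process images
BIG_BOX = "builder-xl.example.com"  # hypothetical larger machine


def checkpoint(pid: int) -> None:
    # Freeze the build process tree and dump its state to disk.
    os.makedirs(IMAGES_DIR, exist_ok=True)
    subprocess.run(
        ["sudo", "criu", "dump", "-t", str(pid), "-D", IMAGES_DIR, "--shell-job"],
        check=True,
    )


def ship_and_resume() -> None:
    # Copy the checkpoint images to the bigger machine and restore there.
    subprocess.run(
        ["rsync", "-a", IMAGES_DIR + "/", f"{BIG_BOX}:{IMAGES_DIR}/"], check=True
    )
    subprocess.run(
        ["ssh", BIG_BOX, "sudo", "criu", "restore", "-D", IMAGES_DIR, "--shell-job"],
        check=True,
    )
```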
Curious what you use and why. Larger datacenter CPUs have a steep entry price, but usually better economics. Also, don't trust the public pricing — it's totally broken in this industry, unfortunately.
- start a paused vm in google cloud
- run the build there (via a gcloud ssh command) and capture the output
- pause the vm after it is done
Takes about 4-5 minutes. Maybe a few dozen times per month. It's a nice fast machine with lots of CPU and memory. Would cost a small fortune to run 24x7. It would cost more than it costs to run our entire production environment. But a few hours of build time per month barely moves the needle.
Our build and tests max out those CPUs. We only pay for the minutes it is running. Without that it would take 2-3 times as long. And it would sometimes fail because some of our async tests time out if they take too long.
It's not the most elegant thing I've ever done but it works and hasn't failed me in the two years I've been using this setup.
But it's also a bit artificial, because bare metal is cheaper and it runs 24x7. The real underlying issue is the vastly inflated price of the virtual machines cloud providers rent out vs. the cost of the hardware that powers them. The physical servers pay for themselves within weeks of coming online. Everything after that is pure profit.
How much cheaper is this compared to GitHub Actions?
Also, why are you using gcloud? Would other competitors like AWS (or Hetzner, if we are talking about VPSes) also suit the case?
I would love it if you could write a blog post about it.
We use gcloud for convenience. Our production environment is there, so spinning up a VM is easy. Our builds also deploy there, so we need gcloud credentials in our gh actions anyway. It only runs for a few hours per month in total, so the cost isn't very high. A few dollars at most.
No time for blog posts but feel free to adapt my gh action: https://gist.github.com/jillesvangurp/cccf5f9d61f4b457a994dc...
It basically runs a script on the vm. Should be fairly easy to adapt. There's a bit of bash in there that waits for the machine to come up before it does the ssh command that runs the build script.
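For anyone who prefers Python over bash, here is a rough sketch of the same start -> wait for ssh -> build -> stop loop (not the gist itself); the instance name, zone, and build script are placeholders, and the "pause" described above could equally be implemented with gcloud's suspend/resume.

```python
import subprocess
import time

INSTANCE = "big-build-box"  # placeholder VM name
ZONE = "europe-west1-b"     # placeholder zone


def gcloud(*args: str, check: bool = True) -> subprocess.CompletedProcess:
    return subprocess.run(["gcloud", "compute", *args, f"--zone={ZONE}"], check=check)


def wait_for_ssh(retries: int = 30) -> None:
    # The VM reports RUNNING before sshd is ready, so poll with a no-op command.
    for _ in range(retries):
        if gcloud("ssh", INSTANCE, "--command=true", check=False).returncode == 0:
            return
        time.sleep(10)
    raise TimeoutError("VM never became reachable over ssh")


def build() -> None:
    gcloud("instances", "start", INSTANCE)
    try:
        wait_for_ssh()
        # Run the build script on the VM; output streams back over ssh.
        gcloud("ssh", INSTANCE, "--command=./build.sh")
    finally:
        gcloud("instances", "stop", INSTANCE)  # stop it again so it costs ~nothing


if __name__ == "__main__":
    build()
```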
We take on the small-fortune expense and multi-tenancy does the rest. The plan is to offer this as inexpensively as possible (far cheaper than deploying and running this type of compute yourself), so that it is a no-brainer to take advantage of.
shadowgovt•1mo ago
(The good news is that if the spikes are regular, a sufficiently advanced serverless platform can "prime the pump" and prep-and-launch instances into surplus compute before the spike, since historical data suggests the spike is coming.)
aayushshah15•1mo ago
[cofounder of blacksmith here]
This is exactly one of the symptoms of running CI on traditional hyperscalers we're setting out to solve. The fundamental requirement for CI is that each job requires its own fresh VM (which is unlike traditional serverless workloads like lambdas). To provision an EC2 instance for a CI job:
- you're contending against general on-demand production workloads (which have a particular demand curve based on, say, the time of day). This can typically imply high variance in instance provisioning times.
- since AWS/GCP/Azure deploy capacity out as spot instances with a guaranteed pre-emption warning, you're also waiting for the pre-emption windows to expire before a VM can be handed to you!
shadowgovt•1mo ago
- there are low-frequency and high-frequency effects (so you can make predictions based on last week, for example, but those predictions fall flat if the company rushes launches at the EOQ or takes the last couple of weeks in December off).
- you can try to capture those low-frequency effects, but in practice we consistently found that comprehension by end-users beat out a high-fidelity model, and users were just not going to learn an idea like "you can generate any wave by summing two other waves." The user feedback they got was that they consistently preferred the predictive model being a very dumb "The next four weeks look like the past four weeks" and an explicit slider to flag "Christmas is coming: we anticipate our load to be 10% of normal" (which can simultaneously tune the prediction for Christmas and drop Christmas as an outlier when making future predictions). When they set the slider wrong they'd get the wrong predictions, but they were wrong predictions that were "their fault" and they understood; they preferred wrong predictions they could understand to less-wrong predictions they had to think about Fourier analysis to understand.
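A toy version of that "the next four weeks look like the past four weeks" predictor with the manual slider, as described above; the numbers and the 0.10 Christmas factor are purely illustrative.

```python
from statistics import mean


def predict_next_weeks(history: list[float], adjustment: float = 1.0) -> list[float]:
    """Repeat the recent average, scaled by a user-set adjustment factor."""
    baseline = mean(history[-4:])
    return [baseline * adjustment for _ in range(4)]


def record_actuals(history: list[float], actuals: list[float], adjustment: float) -> None:
    """Store actuals normalised by the adjustment, so a flagged lull
    doesn't drag down future baselines (the outlier-dropping behaviour above)."""
    history.extend(a / adjustment for a in actuals)


history = [120.0, 130.0, 125.0, 135.0]               # jobs/week, made-up numbers
print(predict_next_weeks(history))                    # business as usual
print(predict_next_weeks(history, adjustment=0.10))   # "Christmas is coming" slider
```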
LunaSea•1mo ago
Is this different from Lambdas or ECS services because of the need to set up a VM / container, and because nested virtualisation / Docker-in-Docker is not supported?