Examples of Docker BuildKit cache mounts:
# yum + dnf
RUN --mount=type=cache,id=yum-cache,target=/var/cache/yum,sharing=shared \
    --mount=type=cache,id=dnf-cache,target=/var/cache/dnf,sharing=shared \
    dnf install -y gcc   # example install; downloaded packages persist in the cache mount
# apt
RUN --mount=type=cache,id=aptcache,target=/var/cache/apt,sharing=shared \
    apt-get update && apt-get install -y --no-install-recommends gcc   # example install
# pip
RUN --mount=type=cache,id=pip-cache,target=${apphome}/.cache/pip,sharing=shared \
    pip install -r requirements.txt   # example install; wheels are reused from the cache
# cargo w/ uid=1000
RUN --mount=type=cache,id=cargocache,target=${apphome}/.cargo/registry,uid=1000,sharing=shared \
    cargo build --release   # example build; registry cache owned by uid 1000
"Optimize cache usage in builds" https://docs.docker.com/build/cache/optimize/
JackSlateur•8mo ago
I am missing something here ..
dijksterhuis•8mo ago
> From a billing perspective, AWS does not charge for the EC2 instance itself when stopped, as there's no physical hardware being reserved; a stopped instance is just the configuration that will be used when the instance is started next. Note that you do pay for the root EBS volume though, as it's still consuming storage.
https://depot.dev/blog/faster-ec2-boot-time
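To make the quoted point concrete, a rough sketch of the pattern it enables: keep pre-configured builders stopped (paying only for their EBS volumes) and start one when a build arrives. The instance ID below is a placeholder.

# stop a pre-warmed builder when idle; only the EBS root volume is billed while stopped
aws ec2 stop-instances --instance-ids i-0abc123example
# start it again when work shows up and wait until it's running
aws ec2 start-instances --instance-ids i-0abc123example
aws ec2 wait instance-running --instance-ids i-0abc123example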
SOLAR_FIELDS•8mo ago
Though I would say a lot of organizations aren't operating their builds at a scale where they need to idle that many runners, or bring them up and down often enough, to need this level of dynamic autoscaling. As the article indicates, there's a fair amount of configuration and tweaking to set something like this up. Of course, for the author it makes total sense, because their entire product is based on being able to run other people's builds in a cost-effective way.
If cost savings are the concern, write a ten-line cron script that scales your runners down to a single one outside business hours or something. You'll spend way less time configuring that than trying to get dynamic autoscaling right. Heck, if your workloads are spiky and short enough, this kind of dynamic scaling isn't even that much better than just keeping the runners on all the time: this organization got their EC2 boot time down to 4 seconds, but only by optimizing the heck out of it. I'll tell you, in a vanilla configuration with the classic AMIs that AWS offers, the cold boot time is closer to 40 seconds.
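A minimal sketch of that cron approach, assuming the runners sit in an EC2 Auto Scaling group; the group name, schedule, and capacities are illustrative:

# /etc/cron.d/runner-schedule: scale the runner fleet down after hours, back up in the morning
0 20 * * 1-5  root  aws autoscaling set-desired-capacity --auto-scaling-group-name ci-runners --desired-capacity 1
0 7  * * 1-5  root  aws autoscaling set-desired-capacity --auto-scaling-group-name ci-runners --desired-capacity 8

(EC2 Auto Scaling scheduled actions can do the same thing natively, without a cron host.)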
JackSlateur•8mo ago
Even with their current tech, EC2 supports autoscaling, so they could run a fleet of instances where nodes are created and deleted based on overall usage.
(Of course, one could also stop using EC2 instances directly and jump to k8s or even ECS ..)
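For illustration, such a fleet could be an Auto Scaling group with a target-tracking policy; the group name, launch template, sizes, subnet IDs, and CPU target below are all placeholders:

# create the runner fleet; scales between 0 and 20 instances
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name ci-runners \
    --launch-template LaunchTemplateName=ci-runner-template,Version='$Latest' \
    --min-size 0 --max-size 20 \
    --vpc-zone-identifier "subnet-aaa,subnet-bbb"
# scale on average CPU as a rough proxy for build load
aws autoscaling put-scaling-policy \
    --auto-scaling-group-name ci-runners \
    --policy-name cpu-target \
    --policy-type TargetTrackingScaling \
    --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":60.0}'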
mdaniel•8mo ago
jacobwg•8mo ago
Eventually you realize, IMO, that doing the bin packing yourself is just competing with AWS: that’s what they do when you launch a non-metal EC2 instance, and it’s best to let them do what they’re good at. Hence why we’ve focused on optimizing that launch type rather than trying to take over the virtualization.
There are other security and performance reasons too: AWS is better at workload isolation than we can be, both because its isolation boundary is very strong and because preventing noisy neighbors is genuinely difficult. Especially with things like disk, the strategies for ensuring fair access to the physical hardware (rate-limiting I/O) themselves carry CPU overhead that slows everything down and prevents perfect bin packing.
nand_gate•8mo ago
Patting themselves on the back for 'fixing' a self-created problem... EC2 is the wrong abstraction for this use case imo
jacobwg•8mo ago