It encodes the most common types of application backend deployments into simple patterns and makes it pretty easy to build and deploy full application stacks (Route 53 -> CloudFront -> S3 (static FE) -> ALB + SGs + TGs -> ECS cluster (backend APIs) -> DBs).
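For anyone who hasn't tried it, the basic flow looks roughly like this (commands from memory; check copilot --help for the current flags):

    # one-time: create the app and an environment
    copilot app init my-app
    copilot env init --name prod
    copilot env deploy --name prod

    # describe a service from a Dockerfile, then ship it
    copilot svc init --name api --svc-type "Load Balanced Web Service"
    copilot svc deploy --name api --env prod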
With the Copilot CLI, I find the experience on AWS significantly better and, in some ways, more well-rounded than on GCP. GCP's Firebase tooling and CLI come close, but Firebase doesn't have the same level of extensibility: Copilot lets you plug in both CDK and CloudFormation as extension points, so you can manage a good chunk of your AWS infra with a single, easy-to-use CLI.
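To make the extension point concrete: you can drop a plain CloudFormation template under copilot/<service>/addons/ and Copilot deploys it alongside the service, surfacing the stack's outputs to the container as environment variables. Roughly, from memory of the docs (the DynamoDB table is just an illustrative example):

    # copilot/api/addons/orders-table.yml -- names are illustrative
    Parameters:
      App:
        Type: String
      Env:
        Type: String
      Name:
        Type: String
    Resources:
      OrdersTable:
        Type: AWS::DynamoDB::Table
        Properties:
          BillingMode: PAY_PER_REQUEST
          AttributeDefinitions:
            - AttributeName: pk
              AttributeType: S
          KeySchema:
            - AttributeName: pk
              KeyType: HASH
    Outputs:
      OrdersTableName:  # output values are exposed to the service as env vars
        Value: !Ref OrdersTable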
For simple apps, I prefer Firebase on GCP. For more complex apps, I think Copilot on AWS is really, really good. One caveat: ECS is much slower to roll nodes over to new versions compared to Cloud Run. Best I could achieve on it was ~180s for a Blue/Green rollover whereas Cloud Run does this in seconds.
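The rollover time is somewhat tunable in the service manifest; the knobs that mattered for me were the ALB health check cadence and the deregistration delay (keys as I remember them from the manifest docs, values illustrative):

    # copilot/api/manifest.yml (excerpt)
    http:
      path: '/'
      healthcheck:
        interval: 10s         # probe new tasks more often
        healthy_threshold: 2  # fewer consecutive passes before 'healthy'
      deregistration_delay: 30s  # how long the ALB drains old tasks (default 60s)

Even with these turned down, task startup and connection draining keep it well behind Cloud Run.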
TL;DR: if you are not an enterprise and you are on AWS, your life will assuredly be better with Copilot than with CDK, CloudFormation, or almost any other solution for deploying to AWS.
(not affiliated with or paid by Fly.io)
> TIL, that looks pretty nice
It is very nice and is 1000% the easiest way to work on AWS, IME.

Software sales people are there for their goals first, and your goals second.
If there's a more optimal way for them to reach their goals in the long run, unbeknownst to you, it will happen.
Why do you think so? This is a very incurious read of it; the space AWS operates in is highly competitive.
Turning off all compute resources (EC2, Lambda, Fargate, etc.) seems obvious, but what about systems managing state like S3, EBS, and DynamoDB? Should buckets, volumes, and tables be deleted?
"Tunable spending limit" has consequences that can create other, equally real, problems.
Best effort: turn off all compute resources, drop dynamically-adjustable persistent resources to their minimums (e.g. DynamoDB read and write capacity of 1 on every table), and leave EBS volumes and S3 alone (sketched below). Even then, a user might find their business effectively offline while still racking up a massive AWS bill.
Hard cutoff: very close to deleting an AWS account. Beyond stopping compute and dropping adjustable resources to their minimums, this means deleting S3 buckets, DynamoDB tables, EBS volumes and snapshots, and everything else that racks up cost by the hour.
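For concreteness, a minimal sketch of the "best effort" option with boto3, covering EC2, DynamoDB, and Lambda only (pagination, error handling, and permissions elided; ECS/Fargate services would similarly need their desired counts set to zero):

    import boto3

    ec2 = boto3.client("ec2")
    dynamodb = boto3.client("dynamodb")
    lambda_ = boto3.client("lambda")

    # Stop (not terminate) every running EC2 instance.
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)

    # Drop every provisioned-capacity table to the 1/1 floor.
    # (On-demand tables report zero provisioned units and are skipped.)
    for name in dynamodb.list_tables()["TableNames"]:
        table = dynamodb.describe_table(TableName=name)["Table"]
        throughput = table.get("ProvisionedThroughput", {})
        if (throughput.get("ReadCapacityUnits", 0) > 1
                or throughput.get("WriteCapacityUnits", 0) > 1):
            dynamodb.update_table(
                TableName=name,
                ProvisionedThroughput={"ReadCapacityUnits": 1, "WriteCapacityUnits": 1},
            )

    # Block Lambda invocations by reserving zero concurrency.
    for fn in lambda_.list_functions()["Functions"]:
        lambda_.put_function_concurrency(
            FunctionName=fn["FunctionName"], ReservedConcurrentExecutions=0
        )

    # EBS volumes and S3 buckets are deliberately left alone.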
The best effort approach sounds reasonable to me. The hard cutoff solution sounds worse than the problem it purports to solve.
Agreed that AWS is poorly incentivized to fix the problem.
- student, learning how to use AWS: set a maximum spend limit and hard cutoff (see the sketch after this list)
- small business, running a website: DDoS protection and compute limits; pause compute and alert the user if spend goes over, giving them the option to raise the limit and/or resume
Etc etc
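AWS already has half of this in AWS Budgets: you get alerted at a spend threshold, and Budget Actions can even stop EC2/RDS instances or attach a restrictive IAM policy, but there's no true account-wide hard cap. A sketch of the alert half with boto3 (account ID and email are placeholders):

    import boto3

    budgets = boto3.client("budgets")

    budgets.create_budget(
        AccountId="123456789012",  # placeholder
        Budget={
            "BudgetName": "monthly-cap",
            "BudgetLimit": {"Amount": "50", "Unit": "USD"},
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
        },
        NotificationsWithSubscribers=[
            {
                "Notification": {
                    "NotificationType": "ACTUAL",
                    "ComparisonOperator": "GREATER_THAN",
                    "Threshold": 80.0,  # percent of the budget
                    "ThresholdType": "PERCENTAGE",
                },
                "Subscribers": [
                    {"SubscriptionType": "EMAIL", "Address": "owner@example.com"}
                ],
            }
        ],
    )

Anything beyond the alert (actually pausing compute) you'd have to wire up yourself, e.g. SNS -> Lambda.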
The cloud vendors are capable of solving this problem, they just refuse to.
Unlimited spending by customers is precisely what they want.
A simple way: each created resource (or each rule that creates resources) gets a toggle to stop/delete the resource when the spending limit is reached. I would use this by leaving it off on backups and enabling it on non-production-critical resources.
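Until a provider ships that toggle natively, the nearest approximation is a tag plus a script. A sketch with an assumed cost-cutoff tag (the tag name and stop-only behavior are my own convention, not an AWS feature):

    import boto3

    tagging = boto3.client("resourcegroupstaggingapi")
    ec2 = boto3.client("ec2")

    # Find everything the owner opted in to cutting off.
    paginator = tagging.get_paginator("get_resources")
    arns = [
        r["ResourceARN"]
        for page in paginator.paginate(
            TagFilters=[{"Key": "cost-cutoff", "Values": ["true"]}]
        )
        for r in page["ResourceTagMappingList"]
    ]

    # Stop the EC2 instances among them; other resource types need
    # their own service-specific stop/delete call.
    instance_ids = [arn.rsplit("/", 1)[1] for arn in arns if ":instance/" in arn]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)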
This is a basic problem that every adult needs to know how to solve, like "how can I make sure I don't run out of money to pay for food and shelter when I'm buying toys?" You set aside money to pay for the important things first, and what remains sets your limit for discretionary spending.
Can you explain why this can't work as a first step until you figure out the rest?
Try Azure... "shared responsibility" becomes shared data when another tenant's keys leak into yours. Then go to GCP, where your incident ticket gets answered by a documentation bot... You will crawl back to AWS praying for a good old-fashioned throttling error.
I got tired of Vercel's cold starts and the upsell button on every feature.
But the support I get from AWS is world class compared to GCP. My AWS account team is always proactively reaching out, not just for potential security risks but also with cost optimization advice.
Now the next generation, who haven't been indoctrinated into the "cloud is the way to go, also microservices" mentality, are arriving at a time when the cloud providers aren't flooding everyone with free credits, courses, and conferences to push their solutions. To them it looks like another legacy stack.
Or maybe not legacy, but it definitely has a rugged, everyday feel as opposed to an air of "future tech" around it.