However, what's missing from this article and the discussion so far is their revenue. If they pay $4/day and make $2/day in revenue, that's bad. They pay $300k/day but make ~$2,250k/day in revenue. I don't know what the ratio is supposed to be, but at first blush that doesn't actually seem too bad. I'll let the more qualified take over; I'm struggling to find out how big a percentage of their total expenses this is.
In reality, the cloud creeps into your systems in all sorts of ways. Your permissions use cloud identities, your firewalls are based on security group referencing, your cross-region connectivity relies on cloud networking products, you're managing secrets and secret rotation using cloud secrets management, your observability is based on cloud metrics, your partners have whitelisted static IP ranges that belong to the cloud provider, your database upgrades are automated by the cloud provider, your VM images are built specifically for your cloud provider, your auditing is based on the cloud provider's logs, half the items in your security compliance audit reference things that are solved by your cloud provider, your applications are running on a container scheduler managed by your cloud provider, your serverless systems are strongly coupled distributed monoliths dependent on events on cloud-specific event buses, your disaster recovery plans depend on your cloud provider's backup or region failover capabilities, etc. Not to mention that when you have several hundred systems, you're not going to be moving them all at the same time. They still need to be able to communicate during the transition period (extra fun when your service-to-service authentication is dependent on your cloud) without any downtime.
It's not just a matter of dropping a server binary onto a VM from a different provider. If I think about how long it would take my org to move fully off of _a_ cloud (just to a different cloud with somewhat similar capabilities), 3 years doesn't sound unrealistic.
If you can't run it locally, don't use it unless you have absolutely no choice.
I think it can still be a big deal depending on the overall system architecture, where all the data stores live, how many services you run, and what constraints you're dealing with.
For example, when you are straddling two cloud providers, more often than not you will have to replace internal calls with external calls, at least during the migration stage. That has important impacts on reliability and performance. In some scenarios, that performance impact is not acceptable and might require re-architecting services.
IMO, the value proposition these days is rather to avoid maintenance, i.e. help with keeping up with all the latest patches on your infrastructure.
In fact, those of my clients who insist on relying on the cloud tend to spend far more with me for systems of similar complexity. I love taking their money, but I'd frankly rather help them save it, because longer term it's better.
Because, in fact, the cloud is not just someone else's computer.
My experience in helping people do cloud migrations is that companies also often quickly lose oversight of which cloud services they are even still running. Sometimes systems that should've been shut down years ago are still hanging around, or parts of them anyway, like S3 buckets etc. Most companies that use cloud systems under-provision their devops because they think they don't need much for a cloud system (in fact, they typically need more to do it well).
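As a rough illustration of how you can get some of that oversight back, here's a minimal sketch using boto3 that flags S3 buckets nobody seems to have written to in years. The two-year threshold is arbitrary, it only samples one page of objects per bucket, and it glosses over cross-region details; treat it as a smell test, not an inventory system.

```python
# Minimal sketch: flag S3 buckets that look abandoned. Assumes AWS credentials
# are already configured; the staleness threshold is an arbitrary choice.
from datetime import datetime, timedelta, timezone

import boto3

STALE_AFTER = timedelta(days=2 * 365)

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    # One page of keys is enough for a rough signal (keys come back in name
    # order, so "newest" here is approximate, not the true last write).
    page = s3.list_objects_v2(Bucket=name, MaxKeys=1000)
    contents = page.get("Contents", [])
    if not contents:
        print(f"{name}: empty, created {bucket['CreationDate']:%Y-%m-%d}")
        continue
    newest = max(obj["LastModified"] for obj in contents)
    if datetime.now(timezone.utc) - newest > STALE_AFTER:
        print(f"{name}: nothing recent on the first page (newest {newest:%Y-%m-%d})")
```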
$300k/day is very much crazy high for their revenue.
Repeat that across all the cloud services your app uses and you're in for a major rewrite.
The cloud offers many, many "good enough" solutions for common problems in development, and it is oh so easy to use yet another cloud API for a few cents more.
And that's not even touching all the auxiliary services you now need to provide, like monitoring, backups, metrics, etc.
Usually the best strategy is to start the move from the most expensive parts, but if that part is entangled with cloud services it could take forever.
Seriously the CTO should be fired.
It's extremely rare to come across people using cloud services where you can't cut their costs by at least 30%, often more than half, sometimes as much as 90%+, and usually it results in reduced devops spend as well.
The interesting thing is that the hardest part of the sales process is to convince people that cloud isn't cheap, because there's a near religious belief that cloud providers must be cost effective in some companies.
For some of them, a colo facility would be cheaper, but that's highly dependent on where you want to host it (e.g. I'm in London - putting things in a colo in London is really hard to make cost effective vs. renting servers somewhere with lower land costs; data centre operators are real-estate plays)
However, you can usually make managed hosting/colo even cheaper by sprinkling some cloud in. E.g. a "trick" that can work amazingly well is to set up the bare minimum to let you spin up what you need to handle traffic spikes in a cloud environment, and then set up monitoring for your load balancer so that you start scaling into the cloud environment once load hits a certain level, but use only the managed hosting below that level.
That way, you almost never end up actually spinning up cloud instances, but you gain the ability to run the managed hosting environment far closer to the wire than you could otherwise safely do, and drive your cost per request down accordingly.
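A minimal sketch of that control loop is below. The metrics lookup and the scaling call are placeholders for whatever load balancer metrics API and cloud SDK you actually run, and the thresholds are made up; the point is the hysteresis between scale-out and scale-in, so the burst capacity isn't constantly flapping.

```python
# Sketch of "burst into the cloud above a load threshold". The two helper
# functions are placeholders, not real APIs.
import time

SCALE_OUT_AT = 0.80  # start adding cloud capacity at 80% utilisation
SCALE_IN_AT = 0.50   # drop it again once the managed hosts can cope alone


def get_load_balancer_utilisation() -> float:
    """Placeholder: current utilisation of the managed-hosting pool (0.0-1.0)."""
    raise NotImplementedError


def set_cloud_burst_capacity(instances: int) -> None:
    """Placeholder: request this many burst instances from the cloud (0 = none)."""
    raise NotImplementedError


def control_loop(poll_seconds: int = 30) -> None:
    burst_instances = 0
    while True:
        load = get_load_balancer_utilisation()
        if load > SCALE_OUT_AT:
            burst_instances += 1
        elif load < SCALE_IN_AT and burst_instances > 0:
            burst_instances -= 1
        set_cloud_burst_capacity(burst_instances)
        time.sleep(poll_seconds)
```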
This is literally my bread and butter, and we make more money off customers who insist on staying on cloud providers, because they consistently need more help.
How large the setup is before that point varies a lot by organisational maturity and what exactly they do, but that typically corresponds to hosting costs from a couple of thousand a month and up.
But the biggest cost differential is the hosting itself. It's not unusual for a managed setup to cost half or less what the equivalent cloud environment does, for similar levels of resiliency, including the fully loaded cost of devops resources.
The most important thing tends to be to make it possible to run services in isolation so devs can test individual services on their own machines. Then the need to spin up larger environments for integration tests etc. becomes much more time-limited.
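One way to get there, sketched below with made-up names, is to keep cloud dependencies behind a small interface, so a service runs against an in-memory fake on a laptop and only talks to the real cloud service in production. The same seam is also what makes a later provider migration less painful, since only the concrete implementations need to change.

```python
# Sketch: hide cloud dependencies behind a thin interface so a single service
# can run and be tested entirely on a developer machine. Names are illustrative.
from typing import Protocol


class ObjectStore(Protocol):
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...


class InMemoryStore:
    """What devs use locally and in unit tests; no credentials or network needed."""

    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._data[key] = data

    def get(self, key: str) -> bytes:
        return self._data[key]


class ReportService:
    """The service only sees the interface, never a specific cloud SDK."""

    def __init__(self, store: ObjectStore) -> None:
        self._store = store

    def save_report(self, name: str, body: str) -> None:
        self._store.put(f"reports/{name}", body.encode())


if __name__ == "__main__":
    service = ReportService(InMemoryStore())
    service.save_report("monthly", "all good")
```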
The most important part, though, tends to be to ensure you have a system to keep track of what is running and who is currently using it.
And yes, that's absolute peanuts for any business. But in my view, spending almost $1k annually for a GitLab server for a team of ten is ridiculous.
We could accomplish exactly the same thing on-prem with hardware we already own. It would take a couple of engineer-hours per year. As long as it's under 40 hours of maintenance per year, we come out ahead. And over the last two years, I've had to spend a total of maybe 10 hours with hands on this box.
I just don't get the thinking that leads businesses to put unnecessary crap in the cloud. Just save your money; on-prem is cheaper for pretty much everyone who isn't doing a SaaS or who has fewer than a few hundred employees.
Also, my experience with GitLab isn't that the thing is hurp-durping because of scaling to ten million, it's that Ruby gonna Ruby.
I usually tell people that if they don't want something on-prem, then at least pick a cheaper provider.
But as long as it's a VM and you don't build in reliance on other AWS services, at least you can move off it as you scale. The real problem comes when you start depending on a single cloud.
Not only do you lose the ability to do an easy move, but you also lose almost all negotiating leverage as your bill increases.
> $70 a month times 12 is $840
> As long as it’s under 40 hours of maintenance (??)
The thing is, amortised, you're not likely to spend more than minutes a year maintaining the on-prem hardware (or you could rent managed servers and spend nothing on it), and as they pointed out, they already spend hours maintaining the EC2 instance.
I wouldn't typically recommend on-prem hardware for just a single server - if you already have a rack and a proper setup, sure - otherwise just rent a cheap server somewhere, and ensure you have backups.
Meanwhile, your devs put more man-hours into wrangling cloud quirks than your actual hardware ops teams ever would, and you spend way more than getting a bunch of servers (even including the cost of financing them) would ever cost.
Some WebSockets with CRDT processes running.
What else?
Emotional manipulation at its finest.
No wonder nothing feels special anymore. Everything's automated, abstracted, owned by someone else. If you make things by hand, you have to prove their value; meanwhile, platforms like AWS are printing money.
They want us to own nothing. And if “heaven” is real, why aren’t they there already?
It may take longer to start, but owning all your servers is cheaper in the long run.
At the very least, don't lock yourself into one provider. Spread out any services you use.
More than one company has screwed itself over permanently by tying itself to a single service. At the end of the day, it's the extremely poor decision to follow the trend and lock yourself into a single company that dooms you.
You would think people would have learned the lesson to avoid lock-in by now. It is absolutely Darwinian at this point.