Yes, as long as the $2 trillion American corporation beholden to shareholders to maximize profits doesn't try to milk its captive customers, you'll be fine. Shouldn't be a problem.
Whether that means that Amazon won't try to squeeze profits is a different question.
Brushing off lock-in is a short-term luxury.
Embracing k8s is a monumental decision for a company and from that point forward it becomes the galactic center of everything you do. Any new solution you introduce needs to fit into the k8s ecosystem you will have created. It insists upon itself, and it insists upon people's time. It is also a lock-in of its own kind, and an insidious one too. You are not going to escape lock-in; the only thing you can do is accept it, or perform an appropriate set of mental gymnastics to convince yourself that it isn't one.
Many of us are technologists, and part of our role is to understand that cost and impact. After 10 years, hopefully it is evident by now, and we in the industry have learned, that k8s is not something that should be taken on lightly (spoiler: we haven't learned squat).
It absolutely is possible to go managed well - forget serverless itself - I mean managed services, where you work with what the cloud provider gives you. The point of managed services isn't just cost, it is reducing the amount of time and effort your humans are spending. Part of using managed services is actually understanding what they do rather than going in blindly and then acting surprised when they do something else. Small functions, Lambda. Containers, ECS Fargate. Databases, RDS. The service costs and boogeymen often trotted out are irrelevant in the face of human time: if your humans are having to maintain and manage something, that is wasted time, and they are not delivering actual things of value.
I'd assume the majority of people working with k8s know what serverless is and, more generally, where Functions as a Service fit.
The rest of the post just seems to be full of strawman arguments.
who is this kubernetes engineer villain? It sounds like a bad coworker at a company with a toxic culture, or a serverless advocate complaining at a bar after a bad meeting.
> k8s is great for container orchestration and complex workloads, while serverless shines for event-driven, auto-scaling applications.
> But will a k8s engineer ever admit that?
Of course. I manage k8s clusters in aws with eks. We use karpenter for autoscaling. A lot of our system is argo workflows, but we've also got a dozen or so services running.
We also have some large step functions written by a team that chose to use lambda because aws can handle that kind of scaling much better than we would have wanted to in k8s.
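On our side the step function piece is barely any code - roughly the shape of the sketch below, though the function name, state machine name and role ARN are placeholders, not our actual setup:

```python
import json

import boto3

sfn = boto3.client("stepfunctions")

# Minimal Amazon States Language definition: a single task state that
# invokes a Lambda function and passes the state machine input through.
definition = {
    "StartAt": "ProcessEvent",
    "States": {
        "ProcessEvent": {
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke",
            "Parameters": {
                "FunctionName": "process-event",  # placeholder function name
                "Payload.$": "$",
            },
            "End": True,
        }
    },
}

sfn.create_state_machine(
    name="process-event-pipeline",  # placeholder name
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/sfn-exec-role",  # placeholder role
)
```

The orchestration and scaling are AWS's problem after that, which is exactly the work we didn't want to rebuild in k8s.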
No need to worry about them, they’ll easily get a job at Amazon running the infrastructure that you will use instead of running the infrastructure you would have built for them.
K8s and Lambda serve different scopes and use cases. You can adopt a Lambda-style architecture using tools like Fargate. But if a company has already committed to k8s, and this direction has been approved by engineering leadership, then pushing a completely different serverless model without alignment is a recipe for friction.
IMHO, the author seems to genuinely want to try something new, and that’s great. But they may have overlooked how their company’s architecture and team dynamics were already structured. What comes across in the post isn’t just a technical argument — it reads like venting frustration after failing to get buy-in.
I’ve worked with “Lambda-style” architectures myself. And yes, while there are limitations (layer size, deployment package limits, cold starts), the real charm of serverless is how close it feels to the old CGI-bin days: write your code, upload it, and let it run. But of course, that comes with new challenges: observability, startup latency, vendor lock-in, etc...
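To illustrate the charm: behind API Gateway's proxy integration, a whole deployable "service" can be as small as the sketch below (the handler signature and response shape are the standard Lambda contract; the logic itself is made up):

```python
import json


def handler(event, context):
    """The entire deployable unit: read the request, do the work, return a response."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Zip it, upload it, wire it to a route - that really is the whole deployment story, until the challenges mentioned above show up.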
On the other side, the engineer in this story could have been more constructive. There’s clearly a desire from the dev team to experiment with newer tools. Sometimes, the dev just wants to try out that “cool shiny thing” in a staging environment — and that should be welcomed, not immediately shut down.
The biggest problem I see here is culture. The author wanted to innovate, but did it by diminishing the current status quo. The engineer felt attacked, and the conversation devolved into ego clashes. When DevOps loses the trust of developers, it creates long-term instability and resentment within teams.
Interestingly, K8S itself was born from that very tension. If you read Beautiful Code or the original Borg paper (which inspired it), you’ll see it was designed to abstract complexity away from developers — not dump it on their heads in YAML format.
At the end of the day, this shouldn’t be a religious debate. Good architecture comes from understanding context, constraints, and cooperation, not just cool tech.
Author is making a moot argument that doesn't resonate. The real struggle is about steady-state load versus spiky load. The best place to run steady-state load is on-prem (it's cheapest). The best place to run spiky workloads is in the cloud (cheapest way of eliminating exhausted capacity risk). Then you have crazy cloud egress networking costs throwing a wrench into things. Then you have C-suite folks concerned about appearances and trading off stability (functional teams) versus agility (feature teams) with very strong arguments for treating infrastructure teams not as feature teams ("platform teams") but as functional teams (the "Kubernetes team" or the "serverless team").
And yes, there would be a "serverless" team, because somebody has to debug why DynamoDB is so expensive (why is there a table scan here...?!) and cost-optimize provisioned throughput, and somebody has to look at those ECS Fargate steady-state costs and wonder if managing something like auto-patching Amazon Linux is really that hard considering the cost savings. At the end of the day, infrastructure teams are cost centers, and knowing how to reduce costs while supporting developer agility is the whole game.
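Concretely, the thing that team ends up hunting for looks something like the sketch below (table and key names invented, pagination ignored for brevity):

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("orders")  # hypothetical table

# The expensive way: Scan reads (and bills read capacity for) every item
# in the table, and the filtering only happens afterwards.
all_items = table.scan()["Items"]
mine_scanned = [item for item in all_items if item.get("customer_id") == "c-123"]

# The cheap way: Query reads only the items under one partition key,
# so you pay for roughly what you actually use.
mine_queried = table.query(
    KeyConditionExpression=Key("customer_id").eq("c-123")
)["Items"]
```

Same result set, wildly different bill once the table grows - which is how that team pays for itself.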
Kubernetes and AWS are both complex, but one of them frontloads all the complexity because it's free software written by infrastructure dorks, and one of them backloads all of it because it's a business whose model involves minimizing barriers to entry so that they can spring all the real costs on you once you're locked in. That doesn't mean either one is a better or worse technical solution to whatever specific problem you have, but it does make it really easy to make the wrong choice if you don't know what you're getting into.
As for the last point, I don't discourage serverless solutions because they make less work for me, I do it because they make more. The moment the developers decide they want any kind of consistency across deployments, I'm stuck writing or rewriting a bunch of Terraform and CI/CD pipelines for people who didn't think very hard about what they were doing the first time. They got a PoC working in half an hour clicking around the AWS console, fell in love, and then handed it to someone else to figure out esoterica like "TLS termination" and "logs" and "not making all your S3 buckets public by accident."
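For the record, the bucket one is a few lines once somebody actually owns it - something like the sketch below, with the bucket name obviously made up:

```python
import boto3

s3 = boto3.client("s3")

# Shut off every flavour of public access on the bucket the PoC created.
s3.put_public_access_block(
    Bucket="poc-bucket-clicked-together-in-the-console",  # illustrative name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```

The code isn't the hard part; knowing it needs to exist, and folding it into the Terraform and pipelines so it stays true, is.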
From my experience, Kubernetes is the most complex, with the most foot-guns and the most churn.
I'd argue that between AWS serverless and AWS EKS Fargate, the initial complexity is about the same. But serverless is a lot harder to scale cost-efficiently without accidentally going wild with function or SNS loops.
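The usual guard against those loops is boring: carry a hop count on the message and refuse to republish past a limit. A rough sketch, with the topic env var and attribute name invented for illustration:

```python
import os

import boto3

sns = boto3.client("sns")
MAX_HOPS = 3  # illustrative limit


def handler(event, context):
    for record in event["Records"]:
        attrs = record["Sns"].get("MessageAttributes", {})
        hops = int(attrs.get("hops", {}).get("Value", "0"))

        # Stop republishing once the message has bounced too many times,
        # instead of letting Lambda -> SNS -> Lambda spin up a surprise bill.
        if hops >= MAX_HOPS:
            print(f"dropping message after {hops} hops")
            continue

        sns.publish(
            TopicArn=os.environ["NEXT_TOPIC_ARN"],  # hypothetical env var
            Message=record["Sns"]["Message"],
            MessageAttributes={
                "hops": {"DataType": "Number", "StringValue": str(hops + 1)}
            },
        )
```

Nothing clever, but it's the kind of thing nobody adds until the first runaway invoice.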
We used Golang, optimised our queries and data structures, and rarely needed more than 2 of whatever the smallest ECS Fargate task size is; when we did, it scaled in and out without any issues.
I realise that isn't "at scale" for some, but it's probably a relatively common point for a lot of use cases.
We put some effort into maintenance, mostly ensuring we kept on an upgrade path but barely touched the infrastructure code other than that.
One thing we did do was limit the number of other AWS services we adopted and keep things fairly basic. I've seen plenty of other teams go down that rabbit hole.
Are there any open standards for "serverless" yet?
This argument lost me. If you’re running your own k8s install on top of servers, you’re doing it wrong. You don’t need highly specialized k8s engineers. Use your cloud provider’s k8s infrastructure, configure it once, put together a deploy script, and you never have to touch yaml files for typical deploys. You don’t need Lambda and the like to get the same benefits. And as a bonus, you avoid the premium costs of Lambda if you’re doing serious traffic (like a billion incoming API requests/day).
Every developer should be able to deploy at any time by running a single command to deploy the latest CI build. Here’s how: https://engineering.streak.com/p/implementing-bluegreen-depl...
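The script behind that single command can be as dumb as the sketch below - the deployment name and registry are placeholders, and it skips the blue/green handling the post goes into:

```python
import subprocess
import sys

# Usage: deploy.py <image-tag>, where the tag comes straight from the CI build.
tag = sys.argv[1]
image = f"123456789012.dkr.ecr.us-east-1.amazonaws.com/api:{tag}"  # placeholder registry

# Point the existing deployment at the freshly built image...
subprocess.run(
    ["kubectl", "set", "image", "deployment/api", f"api={image}"],
    check=True,
)

# ...and block until the rollout succeeds, or fail loudly so CI goes red.
subprocess.run(
    ["kubectl", "rollout", "status", "deployment/api", "--timeout=5m"],
    check=True,
)
```

Developers never touch YAML for a routine deploy; they just pass a tag.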
We operate in a self-serve fashion predominantly on kubernetes, and the product teams are perfectly capable of standing up new services and associated infrastructure.
This is enabled through a collection of opinionated terraform modules and helm charts that pave a golden path for our typical use cases (http server, queue processor, etc). If they want to try something different/new they're free to, and if successful we'll incorporate it back into the golden path.
As the author somewhat acknowledges, the answer isn't k8s or serverless, but both. Each has its place, but as a general rule of thumb, if it's going to run more than about 30% of the time it's probably more suitable for k8s, assuming your org has that capability.
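A rough way to sanity-check that kind of threshold is to compare Lambda's per-GB-second price against the effective per-GB-hour price of a node you'd be running anyway. The numbers below are illustrative list prices only - they vary by region and ignore request charges, k8s overhead and ops time - but they land in the same ballpark:

```python
# Illustrative, not authoritative: prices change and vary by region.
LAMBDA_PER_GB_SECOND = 0.0000166667  # roughly $0.06 per GB-hour of execution
NODE_HOURLY = 0.0416                 # e.g. a small 4 GB instance, on-demand
NODE_MEMORY_GB = 4.0

node_per_gb_hour = NODE_HOURLY / NODE_MEMORY_GB
lambda_per_gb_hour = LAMBDA_PER_GB_SECOND * 3600

# Duty cycle at which always-on compute and Lambda cost about the same per GB.
break_even = node_per_gb_hour / lambda_per_gb_hour
print(f"break-even duty cycle ~ {break_even:.0%}")  # ~17% with these numbers
```

The overheads push the practical crossover higher than the raw break-even, which is presumably why the rule of thumb sits nearer 30% than the bare arithmetic suggests.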
I think it's also worth noting that k8s isn't the esoteric beast it was ~5-8 years ago - the managed offerings from GCP/AWS and projects like ArgoCD make it trivial to operate and maintain reliable, secure clusters.
Strangely, there's no mention of Knative in this thread; there are a lot of tradeoffs in going full serverless, and the promised reduction in infra costs/wages doesn't always pan out.
It's a fairly mature CNCF project at this point and makes running your own serverless setup quite simple.
I doubt the fight between microservices and batch processing will end any decade soon, but it's easy enough to run both on the same infra, which, most importantly, you control.
Wouldn't call it the best of both worlds, but it's reasonable enough to offer you the option of both worlds.
I also don't see the scalability argument. Being able to own a whole CPU indefinitely means I can take better advantage of its memory architecture over time. Caches actually have meaning. Latency becomes something you can control. A t2.large running proper software and handling full load for 60 seconds could cost $10-20 if the same work were handled in AWS Lambda. The difference is truly absurd.
TCO-wise, serverless is probably the biggest liability in any cloud portfolio, just short of the alternative database engines and "lakes".
I feel like the whole article very much sounded like constructing a strawman and arguing against that. The way I see it, there can be advantages and disadvantages to either approach.
If you really find a good use case for serverless, then try it out, summarize the good and the bad and go from there. Maybe it's a good fit for the problem but not for the team, or vice versa. Maybe it's neither. Or maybe it works and then you can implement it more. Or maybe you need to value consistency over an otherwise optimal solution so you just stick with EC2.
Most of the deployments I've seen don't really need serverless, nor do they need Kubernetes. More often than not, Docker Swarm is more than enough from a utilitarian perspective and often something like Docker/Compose with some light Ansible server configuration is also enough. Kubernetes seems more like the right solution when you have strong familiarity and organizational support for it, much like with orgs that try to run as much of their infra as possible on a specific Linux distro.
It's good when you can pick tech that's suited for the job (the one you have now and in the near future, vs the scale you might need to be at in N years); the problems seem to start when multiple options that are good enough meet strong opinions.
I will admit that I do quite like containers for packaging and running software, especially since they're the opposite of vendor lock-in (OCI).
Seems like a shallow take. Prices could rise and reliability fall, but you’d still be married to them.