

Ask HN: Is Kubernetes still a big no-no for early stages in 2025?

27•herval•5mo ago
It's a commonly repeated comment that early-stage startups should avoid K8s at all costs. As someone who had to manage it on bare-metal infrastructure in the past, I get where that comes from - Kubernetes has historically been hard to set up, you'd need to spend a lot of time learning the concepts and how to write the YAML configs, etc.

However, hosted K8s options have improved significantly in recent years (all cloud providers have Kubernetes offerings that pretty much manage themselves), and I feel like with LLMs, it's become extremely easy to read & write deployment configs.
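For a sense of scale, the deployment configs in question can be quite small. A minimal sketch of a Deployment manifest (the name, image, and port here are illustrative):

```yaml
# Minimal Kubernetes Deployment: two replicas of a stateless web app.
# All names and the image are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: ghcr.io/example/web:1.0.0
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
```

Applying this with `kubectl apply -f deployment.yaml` gives rolling updates and restart-on-crash out of the box; a Service and an Ingress would typically sit in front of it.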

What are your thoughts on adopting K8s as infrastructure early on (say, when you have initial customer fit and a team of 5+ engineers) and standardizing around it? How early is too early? What pitfalls do you think still exist today?

Comments

delichon•5mo ago
Same question. I'm a one man band who wants to be scalable, but doesn't want to get married to a particular cloud. So Kubernetes appears to be the default recommendation. Are there better alternatives?
vorpalhex•5mo ago
Docker compose, docker swarm if you outgrow that.
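For comparison, the Compose starting point is roughly this (a sketch; the image, ports, and credentials are made up):

```yaml
# docker-compose.yml: app plus database, the typical single-host setup.
# Image names and the password are illustrative.
services:
  web:
    image: ghcr.io/example/web:1.0.0
    ports:
      - "8080:8080"
    depends_on:
      - db
    restart: unless-stopped
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

`docker compose up -d` runs it on one host; Swarm can consume a very similar file via `docker stack deploy` when you outgrow that.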
herval•5mo ago
I tried that in the past and found it extremely unreliable as a production environment. Documentation was also non-existent, and I'd have to manually handle clusters, set up my own observability and log stack, etc. Any cloud provider these days gives you all that out of the box for K8s, so I'm not sure the time one would invest in Swarm really makes sense?
vorpalhex•5mo ago
You will be married to a particular cloud.

Either you go all in on someone's setup or you get to do it all yourself.

That's true for any service. Either you drink the AWS/GCP/Azure Kool-Aid or you make your own. Whether it's k8s or Swarm or whatever doesn't matter.

JustExAWS•5mo ago
Out of the long list of things a five person startup needs to be worried about, “cloud lock in” doesn’t make the top 10 or even top 20.
vorpalhex•5mo ago
Dolla bills.

Cloud lock-in is price lock-in.

Last startup I was in 5x'd our runway by using dynamic spot pricing across multiple clouds.

Also sounds like the software has to support multiple clouds to sell to clients.

JustExAWS•5mo ago
This startup is probably just a web server though. Let's not pretend that multi-cloud doesn't have administration costs and network latency. When using one cloud, your data can literally be on the same server rack as your compute if you need it to be at best, or at worst in the same data center.

That also raises the question: if you need the cheapest costs possible, wouldn't you be better off at a colo, with your processing and data in the same data center, not spending extra on cloud costs?

dangus•5mo ago
It’s not really black and white as you describe it, there’s a huge spectrum between “do it all yourself” and “go all in on someone’s setup.”

Different cloud services have different levels of difficulty in migrating in or out of them.

You can also mix levels of abstraction for different layers of your product.

For example, you can host something on a bare EC2 instance fronted by nginx (let’s encrypt for certs) with an RDS database and that’s going to be far more portable to someone else’s cloud than deploying to Lambda behind an ALB using AWS certificate manager.

You still didn’t “do it all yourself” because RDS still took a solid chunk of your work away even though you did nginx and your deployment to EC2 on your own.

In the case of RDS it’s one of the more trivial services to move to another provider or move to bare metal since you’re just running a standard database and all your app needs is a connection string.

(I’m not claiming this is a real architecture that makes sense, just an example of how different layers can be chosen to be managed or unmanaged).

vorpalhex•5mo ago
Unless you start using a non-portable RDS feature. Or you set up RDS auth using the preferred AWS method which is non-portable. Or you come to depend on a performance feature of RDS hosting. Or..
dangus•5mo ago
1. Rare, and easy to avoid, even if using Aurora. Anything without "Aurora" in the name is a plain database engine with no Amazon customization (e.g., PostgreSQL).

2. Not correct, IAM authentication is not the preferred connection method, and it has a performance limit of 200 connections per second. It's intended for access by humans and not by your applications. In my experience I've never seen any organization set it up. The other authentication methods are not AWS specific (Kerberos/password auth). Easy to avoid.

3. Most performance features of RDS have some kind of non-AWS equivalent. AWS isn't reinventing the wheel as a database host.

more_corn•5mo ago
Heroku, Vercel, AWS ECS.

What’s your business? Do you have product market fit? Do you benefit enough from the three things K8S does well to pay the cost in increased complexity, reduced visibility, and increased toil? If you can’t immediately rattle off those three things, you don’t need it.

Don’t you want to just focus on the problem you’re solving for your customers and not the infrastructure that makes your app go? Every startup I’ve seen doing k8s should not have been. Every startup I’ve seen not using k8s didn’t need it. (Except one startup that moved from Beanstalk, which nobody should ever use, and they could have done better by moving to something like ECS.)

I’ve seen a startup lose their entire DevOps team and successfully go a year without it because their core app was on heroku. What’s that worth in dollars? What are those dollars worth in opportunity cost?

otterley•5mo ago
> I'm a one man band who wants to be scalable, but doesn't want to get married to a particular cloud.

(AWS employee, but opinions are my own)

Just pick a cloud provider and move on. All of the top-tier providers can scale. Choose the one you're most comfortable with and move on. Focus on the things that matter for your business like building the right product and getting customers. If you have regret later, it's going to be because you were so wildly successful that it is now an actual business risk and/or your customers demand it. But don't make this decision based on a problem you don't have and are unlikely to have in the next 12-24 months, if ever. Cloud agnosticism is rarely a functional or strategic requirement of any given business, and it's usually very expensive to implement--more than any savings you might achieve by pitting providers against each other (which you can't do anyway unless you are big enough to be a strategic customer).

delichon•5mo ago
I'm writing a public statement platform, and a core design priority is to avoid being Parlered, since human moderation is not an option. Responding to court orders is inevitable, but responding to cloud provider demands isn't quite, yet.
dangus•5mo ago
The thing is that avoiding cloud vendor “lock-in” won’t necessarily help you here. If it isn’t cloud provider demands it’ll be someone else involved in serving your content, could go all the way down to your ISP.
zekrioca•5mo ago
It seems to me the issue is not really setting up and building around K8s as an infrastructure orchestrator; after all, k8s sells itself as a cluster API, which is the de facto standard nowadays. The issue starts when you need to handle very specific use cases, e.g., security. This requires very low-level experience not only with K8s but with the whole stack (including OS + HW), plus knowledge of safe resource and application scheduling, which is hard to find talent for.

PS. Edit for clarity.

Ethee•5mo ago
The answer, as with everything, is going to be that it depends on your situation and use case. If you have a bunch of engineers who are already familiar with K8s, then using a different implementation just because others told you to doesn't make much sense. But if you're choosing K8s because you want a good future foundation, in dreams of 'scale', then you should stop and really consider what it is that you need from K8s. Most people I've seen whose infrastructure succeeds with K8s only moved to it out of necessity, usually away from some monolithic structure; they didn't start there. Build what you need, and only that much, not for some future need that might never come.
richwater•5mo ago
Those same cloud providers usually have simplified container deployment mechanisms as well. You don't need Kubernetes to deploy containers.
lillecarl•5mo ago
And then you can no longer deploy anywhere else, sounds perfect for the cloud provider!
gabrielpoca118•5mo ago
I've heard that a few times already, but I've never reached a point where my ECS setup combined with other AWS services wasn't enough. Even if you had everything in Kubernetes, wouldn’t it still be a pretty big deal to migrate?
lillecarl•5mo ago
As a deployment model I bet ECS, fargate and lambda are great. But at the scale of my projects (small) I like being able to run a copy of the full infrastructure (or as much as possible) locally and reuse as much as possible from "prod".

And regarding Kubernetes migrations, once you've made sure you have network and DNS connectivity cross-cluster, it's essentially just replacing the CSI and LoadBalancer controllers. For the actual data migration there's no magic bullet; it depends on what you run.

The USP for Kubernetes is that it's essentially the same no matter where you run it since everything conforms to the Kubernetes API spec.

If you don't want or need local development LARPing prod then anything goes.

gabrielpoca118•5mo ago
So for stuff like secrets management, buckets, API gateways and such, you deploy those services to k8s? And if you don’t mind, is maintaining those services cost-effective? I’m asking because I’m always weighing the trade-off of money versus time.
lillecarl•5mo ago
Kubernetes already has simple secrets, good enough for me.
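For reference, the built-in Secret object is just another manifest. A minimal sketch (the name and value are illustrative; stringData avoids hand-encoding base64):

```yaml
# Kubernetes Secret consumed by pods as environment variables.
# The name and connection string are illustrative.
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
stringData:
  DATABASE_URL: postgres://app:change-me@db:5432/app
```

A pod spec can then pull the whole Secret in as environment variables via `envFrom` with a `secretRef`. Note that plain Secrets are only base64-encoded at rest unless you enable encryption, which is part of why some teams graduate to external secret managers.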

I would provision buckets with Terraform/tofu; we just use ingress, so idk about API gateways.

The eye opener for me was "I can just do this in Kubernetes", which is pretty much always true (though not always right).

With Kubernetes + Prometheus + Grafana (and friends), cert-manager, CSI, an LB and some CNI, you have something resembling what I'd use from a $cloud provider.

Deploying K3s is really easy, it can definitely be a time-sink when you're learning but the knowledge transfers really well.

You also don't really need all Kubernetes features to use it, you can deploy K3s on a single VM and run your pods with hostnetworking and local path mounts, essentially turning it into a fancy docker-compose which you can grow with instead of throw out.
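The single-VM setup described above can be sketched as follows (names and image are illustrative; `local-path` is the storage class K3s ships with by default):

```yaml
# Single-node K3s as a "fancy docker-compose": host networking plus
# a PersistentVolumeClaim backed by K3s's local-path provisioner.
# All names and the image are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      hostNetwork: true        # bind ports directly on the VM
      containers:
        - name: app
          image: ghcr.io/example/app:1.0.0
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: app-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
```

With `hostNetwork: true` the container listens directly on the VM's ports, much like a docker-compose service with host port mappings, and you can later drop that in favour of Services and an Ingress as you grow.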

I value FOSS and being able to run "anywhere" with the same tools. K8s and Postgres gets me there, I haven't worked on any "web scale" projects though but I know both can scale pretty high.

herval•5mo ago
I've personally reached the multi-cloud situation a handful of times - it's particularly common if you're doing anything on-prem, or with customers in certain regions (e.g. China). If you're truly married to an AWS/GCP/whatever stack (Fargate, Lambda, etc.), it's practically impossible to migrate. The successful migrations / multi-cloud setups I've seen all used the cloud provider's specific features sparingly (e.g. S3 encapsulated in a single library, so migrating is simple).

But I agree if you're doing something simpler, sticking to a single provider is fine

vorpalhex•5mo ago
What does it provide you?

Maybe you need a cluster per client and k8s is the only option.

Maybe you literally only need a few docker services and swarm/ecs/etc are fine forever.

What is the problem that K8s solves for you?

herval•5mo ago
in my current case - the ability to deploy on-prem for some specific customers (on-prem meaning their own AWS or GCP account usually) + per-customer/multitenancy on the main product (ideally with segregated databases)

In general - scaling up a small number of microservices + their associated infra (redis/rabbitmq/etc)

vorpalhex•5mo ago
You are basically stuck with k8s but you will end up having to "roll your own" (bring your own components) if you intend for operations to be consistent across different clouds/prems/etc.

Ideally start with an existing kube stack and slowly make it your own.

Operationalizing across heterogeneous clusters will be an unfortunate source of excitement.

languagehacker•5mo ago
If you've already done the work figuring out how to knock out a basic web app deployment on Kubernetes for a project that you think will grow, then I say go for it. It's not much cheaper than buying a reasonable minimum amount of compute from a company like DigitalOcean.

For hobby projects that I don't really plan to scale, I've recently gotten back into non-containerized workloads running under systemd in an Ubuntu VM. It feels pretty freeing not to worry about all the cruft, but that will bite me if something ever does need to live on multiple servers.

ebiester•5mo ago
The biggest reason Kubernetes should be a big no-no is because you should have a much simpler architecture (monolith) that doesn't need K8s.
ruuda•5mo ago
It depends of course, but probably Kubernetes is a solution to problems that you don't have, while it creates new problems that you don't currently have.
tnjm•5mo ago
While I wouldn't dream of standing up k8s on a bare metal cluster without a devops team, I set up managed k8s using EKS several years ago for a client and... it just chugs along, self-healing, with essentially zero maintenance.

For my own projects I use a managed Northflank cluster on my own AWS account and likewise... just a fantastic experience. Everything that Heroku could and should have been. Yes the cluster is a bit pricey to stand up both in terms of EC2 compute and management layer costs, but once it's there, it's there. And the costs scale much more nicely than shoving side projects onto Heroku.

At this stage I consider managed k8s my default go-to unless it's something so lightweight I just want to push it to Vercel and forget about it.

signal11•5mo ago
k8s isn’t worth the time and money for many small teams, until they cross a complexity bar.

Even in some very non-startup enterprises, Cloud Foundry and Open Shift get adopted for a reason: some teams don’t need the overhead.

For startups there’s fly.io, render.com, and of course Heroku, but really — you can get from MVP to pretty decent scale on AWS or GCP with some scripts or Ansible.

Use k8s if you need it. It’s pretty well-proven. But it’s not something you need to have FOMO about.

Dedime•5mo ago
I'll add my opinion as a DevOps engineer, not a startup, so take it with a grain of salt.

* Kubernetes is great for a lot of things, and I think there's many use cases for it where it's the best option bar none

* Particularly once you start piling on requirements - we need logging, we need metrics, we need rolling redeployments, we need HTTPS, we need a reverse proxy, we need a load balancer, we need healthchecks. Many (not all!) of these things are what mature services want, and k8s provides a standardized way to handle them.

* K8s IS complex. I won't lie. You need someone who understands it. But I do enjoy it, and I think others do too.

* The next best alternative in my opinion (if you don't want vendor lock in) is docker-compose. It's easy to deploy locally or on a server

* If you use docker-compose, but you find yourself wanting more, migrating to k8s should be straightforward

So to answer your questions, I think you can adopt k8s whenever you feel like it, assuming you have the expertise and are willing to dedicate time to maintaining it. I use it in my home network with a 1 node "cluster". The biggest pitfalls are all related to vendor lock in - managed Redis, Azure Key Vault. Hyper specific config related to your managed k8s provider that might be tough to untangle. At the same time, you can just as easily start small with docker-compose and scale up later as needed.

time4tea•5mo ago
You can run Docker Swarm easy peasy. It's not that trendy, but anyone can manage it, and you can migrate to k8s later if you need to. Of course it doesn't do some of the things that k8s does, and that's why it's less complicated...

Edit: you can run a lot on one or two Hetzner servers for almost no money. Compare €60/month vs about $1000/month for a couple of replicated Fargate services.

strls•5mo ago
If PaaS or some "run container as a service" setup can work for your use case, I'd probably go with that. It takes care of many things K8s does without all the baggage. Also, you are not investing in anything that doesn't port easily to K8s in the future.

On the other hand, if you are thinking of using bare VMs, then you'd be better off with managed K8s. I think in 2025 it's a draw in terms of initial setup complexity, but managed K8s doesn't require constant babysitting in my experience, unlike VMs, and you are not sinking hours into a bespoke throwaway setup.

chistev•5mo ago
I have no experience with AWS and the rest; I always use PaaS offerings and they serve me well. I wonder in what cases I'd need Kubernetes and the like?
Nextgrid•5mo ago
If you need to go beyond what a single bare-metal server can offer, then consider it.

But don’t discount bare-metal first! I see a lot of K8s or other cluster managers being used to manage underpowered cloud VMs, and while I understand the need for an orchestrator if you’re managing dozens of VMs, I wonder - why do you need multiple VMs in the first place if their total performance can be achieved by a handful of bare-metal machines?

byrnedo•5mo ago
You could use https://github.com/skateco/skate and graduate to k8s later.

Disclosure: I’m the author of skate

adamcharnock•5mo ago
There are some excellent comments here, so I'll just add my particular flavour.

I think using Kubernetes effectively in 2025 is more about what you _don't_ use than what you _do_ use. As an early-stage startup you can get a long way with no RBAC, no network policies, no auto-scalers, and even no stateful workloads. You can use in-cluster metrics and logging before you need to turn to Prometheus, Loki, etc. Use something managed like AWS EKS.

Try to solve your problems first by taking away, and only if that isn't feasible then start adding. Plain old Deployments will get you a long way.

Now this next bit is going to sound like a pitch, and that's because it is – but when those free credits start running out, your bill starts reaching mid-four-figures, and you start thinking about your first DevOps hire, _call us_. Just for 30 minutes. We can migrate you out of your cloud infra and onto a nice spacious bare-metal k8s cluster, and we'll become your 24/7 on-call DevOps team. We'll get woken up in the night when things break, not you. And core-for-core it will cost a lot less than AWS.

The fact that we can do all that is a testament to how expensive AWS really is. K8s is a good choice now if you keep it simple; it positions you well for growth in the future, and for a cluster under a couple of hundred cores it is going to be pretty economical to run in the public cloud.

PS. Link in bio

lucideng•5mo ago
imo, really depends on what you're doing, what your team's skills are, growth trajectory, money, etc. if you need to scale a ton of compute up and down, k8s might be a good fit, but for most startups, it's using a sledgehammer to drive a finishing nail.

* how much downtime can be tolerated during a deploy or outage? load balancing and multi-region is more $$$.

* if you have a bunch of linux nerds and an efficient app -- an nginx web server + your app + a Postgres DB, with Ansible to manage a single VM and Cloudflare in front of it, might be a good option. Portainer in the VM is nice if you want to go with containers.

* if you have a bunch of desktop devs, containers and build pipelines with PaaS are a good option. many are resilient and have HTTPS built in.

* the smaller your infra/devops team, the more i would leverage team knowhow and encourage PaaS offerings.

* the smaller your budget, the more creative you need to be (ec2/storage accounts as part of hosting, singular monolithic VM has relatively flat costs, what free stuff do i have on my cloud provider, etc)

atmosx•5mo ago
I've seen it work. I’ve managed EKS clusters for small teams myself, so it’s definitely doable.

The real challenge isn’t setting up the EKS cluster, but configuring everything around it: RBAC, secrets management, infrastructure-as-code, and so on. That part takes experience. If you haven’t done it before, you're likely to make decisions that will eventually come back to haunt you — not in a catastrophic way, but enough to require a painful redesign later.

P.S. If your needs are simple, consider starting with Docker Swarm. It's surprisingly low-maintenance compared to Kubernetes, which has many moving parts and frequent deprecations from cloud providers. Feel free to drop me an email. I can share a custom Python tool I wrote a long time ago to automate the initial setup via the AWS API.

therealfiona•5mo ago
After 7 years, and me wanting to move off EKS since I got the job 4 years ago, we are moving to ECS (I rose to Lead recently, and my engineers also thought it was a great move, as they're sick of all the K8s BS).

The time sink required for the care and feeding just isn't worth it. I pretty much have to dedicate one engineer about 50% of the year to keeping the dang thing updated.

The folks who set it all up did a poor job. And it has been a mess to clean up. Not for lack of trying, but for lack of those same people being able to refine their work, getting pulled into the new hotness and letting the clusters rot.

Idk your workload, but mine is not even suited for K8s... The app doesn't like to scale. And if the leader node gets terminated by a scale-down, or an EC2 instance fails, processing stops while the leader is re-elected. Hopefully not onto another node that is going down in a few seconds... Most of the app teams stopped trying to scale their apps up and down because of this...

I would run on ECS if AWS was my cloud at a start up. Then if scaling was getting too crazy, move to EKS.

But for the love of God ... Keep your monitoring and logging separated from your apps. Give it its own ECS cluster, or buy a fully managed solution. It is hard to record downtime if your monitoring goes down during your K8s upgrade.

herval•5mo ago
do you have a sense of how it compares to GKE or the Azure k8s offering?
davnicwil•5mo ago
When it works, it works well. Just don't spend any innovation tokens messing with it. Consider it likely that you will end up spending a bunch of time discovering its corners if you don't already know them.

Same goes for all tech choices. If you already know it, you understand its pros and cons and it still seems like the simplest best option for the concrete thing you need to build right now, use it.

Otherwise, use whatever alternative tech fits that description instead!

xyzzy123•5mo ago
There aren't really any huge gotchas imho in 2025; just watch out that you don't get sidetracked delivering awesome developer infrastructure (preview environments! blue/green! pristine IaC! it's fun!) when there are actually more important things to be working on (there usually are).

At early stage the product should usually be a monolith and there are a LOT of simple ways to deploy & manage 1 thing.

Probably not an issue for you, but costs will also tend to bump up quite a lot: you will be ingesting way more logs and tons more metrics just for the cluster itself, and you may find yourself paying for more things to help manage & maintain your cluster(s). Security add-ons can quickly get expensive.

fragmede•5mo ago
Just go all in on Vercel unless you have a particular need for extra backend computing. You haven't given any details that would lead us to believe that's inappropriate, so I'd start there, add Neon, and iterate on the product that you're selling rather than over-engineering unrelated pieces. Unless, that is, infrastructure is where the product you're selling has efficiencies no one else has.
JustExAWS•5mo ago
Why would you need Kubernetes for a startup instead of, in the case of AWS, just using some EC2 instances, a load balancer, an autoscaling group, and a monolithic application? For your backend, just use a hosted version of MySQL/Postgres/Elasticsearch.

It’s simple, with no “cloud lock-in” (which is really overstated anyway). The only reason to use K8s is resume-driven development. Which honestly is not a bad idea in and of itself, because your startup is statistically going to fail and you might as well use the experience to get another job.

dangus•5mo ago
I would still personally prefer to containerize early because I think a new company should avoid the infrastructure dependency and deployment pitfalls that come with using plain traditional EC2.

I’d say aim to avoid using tech like Chef, Ansible, Packer, etc., which will otherwise be an inevitable need for someone putting applications on EC2 instances. If they need to be used, they should be used for extremely limited/basic things, like stuff you need to install in the base OS for management that can’t be done in a different way.

Of course this doesn’t mean use k8s for your applications, I think ideally you’re looking at ECS leveraging either EC2 or Fargate or another cloud’s equivalent.

JustExAWS•5mo ago
That’s actually a good point. It’s been 6 years since I have deployed to EC2. I have been doing Lambda or ECS/Fargate.

But when I did it was either:

1. Azure DevOps with agents on EC2

2. Octopus Deploy with agents (not my choice)

3. CodeDeploy with agents

All of those are horrible choices.

On the other hand, my Fargate deployments have all been:

1. Create Docker container

2. Push to ECR

3. Deploy a parameterized CloudFormation template to update an ECS cluster.

I’ve used a modified version of this template for 7 years. I’m not the author.

https://github.com/1Strategy/fargate-cloudformation-example/...
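The linked template is the real reference; as a rough illustration of what "deploy a parameterized CloudFormation template" means here, a fragment along these lines (resource and parameter names are made up, and things like IAM roles, networking, and the ECS Service resource are omitted):

```yaml
# Sketch of the parameterized-template idea: pass a new image tag and
# CloudFormation rolls the ECS task definition forward.
# Names are illustrative; an ExecutionRoleArn and a Service resource
# would be needed in a real template.
Parameters:
  ImageTag:
    Type: String
Resources:
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: web
      Cpu: "256"
      Memory: "512"
      NetworkMode: awsvpc
      RequiresCompatibilities: [FARGATE]
      ContainerDefinitions:
        - Name: web
          Image: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/web:${ImageTag}"
          PortMappings:
            - ContainerPort: 8080
```

Deploying the stack with a new `ImageTag` value creates a fresh task definition revision and lets ECS handle the rolling replacement.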

dangus•5mo ago
Exactly, I think any solution that is just like “build docker container, push docker container” is very much what you want as a startup.

“Avoid k8s” shouldn’t mean “avoid containers” because they have major velocity and reliability benefits for small teams with no ops engineers.

I totally agree with you that CodeDeploy in particular is miserable. I’d much rather overpay for compute using Fargate than be saddled with that.

tdsanchez•5mo ago
Best advice: if you have to rearchitect your system JUST to run it on Kubernetes, you probably don’t know enough about it yet.

I have worked with ~100 dev teams across industries modernizing infrastructure, and most dev teams are too siloed off from operations to effectively use K8s off the shelf.