I've built pretty scalable things using nothing but Python, Celery and Postgres (that usually started as asyncio queues and sqlite).
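To make that concrete, the "asyncio queues and sqlite" starting point can be as small as the sketch below. This is my own illustration of that shape (file name, table and job names are made up), not the commenter's actual code:

    import asyncio
    import sqlite3

    def init_db():
        # A single SQLite file is the only "database" at this stage.
        conn = sqlite3.connect("jobs.db")
        conn.execute("CREATE TABLE IF NOT EXISTS results (job TEXT, outcome TEXT)")
        return conn

    async def worker(queue, conn):
        while True:
            job = await queue.get()        # block until a producer enqueues work
            conn.execute("INSERT INTO results VALUES (?, ?)", (job, "done"))
            conn.commit()
            queue.task_done()

    async def main():
        queue = asyncio.Queue()
        conn = init_db()
        asyncio.create_task(worker(queue, conn))  # single in-process consumer
        for i in range(3):
            await queue.put(f"job-{i}")           # producers just enqueue
        await queue.join()                        # wait for the backlog to drain

    asyncio.run(main())

When that stops being enough, the queue becomes Celery and the SQLite file becomes Postgres, with the calling code barely changing shape.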
There are things we don’t want to do (talk to customers, investors, legal, etc.), so instead we do the fun things (fun for engineers).
It’s a convenient arrangement because we can easily convince ourselves and others that we’re actually being productive (we’re not, we’re just spinning our wheels).
Unless you actively push yourself to do the uncomfortable work every day, things will slowly deteriorate and you'll run into huge issues down the line that could've been avoided.
And that doesn't just apply to software.
I should get off HN, close the editor where I'm dicking about with HTMX, and actually close some fucking tickets today.
Right after I make another pot of coffee.
...
No. Now. Two tickets, then coffee.
Thank you for the kick up the arse.
Do programmers really find so much fun in creating accidental complexity?
I've certainly been guilty of that myself, but building a microservices architecture is not one of those cases.
FWIW, the alternative presented here for small web sites/apps seems infinitely more fun.
Immediate feedback, easy to create something visible and change things, etc.
Now if the problems to tackle are hard for me, for example, customer or company requirements that I feel I can't fulfill properly, or communication where I feel unable to clarify achievable goals, that's another story.
But even then, procrastination via accidental complexity is not really fun.
Maybe it's me getting old there.
But I'd say that doing work I'm able to complete, with tangible results, is certainly more fun than getting tangled in a mess of accidental complexity. I don't see how this is fun for engineers; maybe I'm not an engineer then.
Over-generalization, setting wrong priorities, that I can understand.
But setting up complex infra doesn't seem fun to me at all!
Normally the impetus to overcomplicate fades before devs become experienced enough to even build such complex infra by themselves.
Overengineered infra doesn't happen in a vacuum. There is always support from the entire company.
Or is it to satisfy the ideals of some CTO/VPE disconnected from the real world who wants architecture to be done a certain way?
I still remember doing systems design interviews a few years ago when microservices were in vogue, and my routine was probing whether they were OK with a simpler monolith or whether they wanted to go crazy on cloud-native, serverless and microservices shizzle.
It did backfire once with a cloud infrastructure company that had "microservices" plastered all over their marketing, even though the people interviewing me actually hated them. They still offered me an IC position (I told them to fuck off), but they really hated that I did the exercise with microservices.
Before that, it almost backfired when I initially proposed a monolith for a company that (unbeknownst to me) was microservice-heavy. Luckily I managed to read the room and pivot to microservices during the 1h systems design exercise.
That diagram is just AWS, a programming language, and a database. Plus Hadoop for some reason, I guess, and Riak/OpenStack, which are redundant.
It just seems like pretty standard stuff with a few seemingly small extra parts, which makes me think someone on the team was familiar with something like Ruby, so they used that instead of Java.
"Why is Redis talking to MongoDB?" It isn't.
"Why do you even use MongoDB?" Because that's the only database there, and NoSQL schemaless solutions are faster to get started with... because you don't have to specify a schema. It's not something I would ever choose, but there is a reason for it.
"Let's talk about scale" Let's not, because other than Hadoop, these are all valid solutions for projects that don't prioritize scale. Distributed systems aren't just about technology; they're also about data design choices that aren't that difficult to make and are useful for reasons other than performance.
"Your deployment strategy" Honestly, even 15 microservices and 8 databases (assuming it's really 2 databases across multiple envs) aren't that bad. If they're small and can be put on one single server, they can be reproduced for dev/testing purposes without all the networking cruft that DevOps can spend their time dealing with.
Once you have a service that has users and costs actual money, you don't need to make it a spaghetti of 100 software products, but you do need a bit of redundancy at each layer (backend, frontend, databases, background jobs) so that you don't end up in a catastrophic failure mode each time some piece of software decides to barf.
I mean it will happen regardless just from the side effects of complexity. With a simpler system you can at least save on maintenance and overhead.
I totally get the point it makes. I remember many years ago we announced SocketStream at a Hacker News meet-up and it went straight to #1. The traffic was incredible, but none of us were DevOps pros, so I ended up restarting the Node.js process manually via SSH from a pub in London every time it crashed.
If only I'd known about Upstart on Ubuntu, I'd have saved myself some trouble that night at least.
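For anyone who ends up in the same spot: the fix is a few lines of Upstart config with respawn, so the process comes back by itself. Roughly like this (the file name, paths and limits are made up for illustration):

    # /etc/init/socketstream.conf -- hypothetical name and paths
    description "keep the Node.js process alive"
    start on runlevel [2345]
    stop on runlevel [016]
    # restart the process whenever it exits, but give up if it
    # crashes 10 times within 5 seconds
    respawn
    respawn limit 10 5
    exec /usr/bin/node /srv/app/server.js

These days systemd plays the same role, but the idea is identical: let the init system do the restarting instead of a human with an SSH session.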
I think the other thing is worrying about SPOFs and knowing how to respond if services go down for any reason (e.g. the server runs out of disk space because log rotation hasn't been set up, there's a hardware failure of some kind, or the data center has an outage - I remember Linode would have a few in their London datacenter that always seemed to happen at the worst possible time).
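The log-rotation part in particular is only a few lines of standard logrotate config; something like this, with a hypothetical app path:

    # /etc/logrotate.d/myapp -- hypothetical path
    /var/log/myapp/*.log {
        daily
        # keep a week of compressed history
        rotate 7
        compress
        delaycompress
        missingok
        notifempty
        # truncate in place so the app doesn't need to reopen its log file
        copytruncate
    }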
If you're building a side project I can see the appeal of not going overboard and setting up a Kubernetes cluster from the get-go, but when it's something more serious and critical (like digital infrastructure supporting car services, e.g. remotely turning on the climate control in a car), then you design the system like your life depends on it.
Really, that's going way too far - you do NOT need Redis for caching. Just put it in Postgres. Why go to this much trouble to put people in their place for overengineering, then concede "maybe Redis for caching" when this is absolutely something you can do in Postgres? The author clearly cannot stop their own inner desire for overengineering.
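A rough sketch of what "cache it in Postgres" can look like: an UNLOGGED table with a TTL column, checked before doing the expensive work. The table name, columns and psycopg2 wiring here are my own illustration, not the author's:

    import psycopg2

    DDL = """
    CREATE UNLOGGED TABLE IF NOT EXISTS cache (
        key     text PRIMARY KEY,
        value   text NOT NULL,
        expires timestamptz NOT NULL
    );
    """

    def cache_get(cur, key):
        # Treat expired rows as misses; a periodic DELETE can clean them up.
        cur.execute("SELECT value FROM cache WHERE key = %s AND expires > now()", (key,))
        row = cur.fetchone()
        return row[0] if row else None

    def cache_set(cur, key, value, ttl_seconds=300):
        cur.execute(
            """INSERT INTO cache (key, value, expires)
               VALUES (%s, %s, now() + %s * interval '1 second')
               ON CONFLICT (key) DO UPDATE
                 SET value = EXCLUDED.value, expires = EXCLUDED.expires""",
            (key, value, ttl_seconds),
        )

    conn = psycopg2.connect("dbname=app")  # connection string is illustrative
    with conn, conn.cursor() as cur:       # commits on successful exit
        cur.execute(DDL)
        cache_set(cur, "greeting", "hello")
        print(cache_get(cur, "greeting"))

UNLOGGED skips the WAL, so it behaves a lot like a cache should: fast writes, contents gone after a crash, and one fewer moving part than running Redis on the side.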