There are infinite permutations in architecture, and we've collectively narrowed them down to setups that are cheap to deploy, scale automatically at low cost, and are easily replicable with a simple script.
We should be talking about how AI knows those scripts too and can synthesize adjustments. Dedicated Site Reliability Engineers and DevOps teams are great for maintaining convoluted legacy setups, but irrelevant for building the same thing from scratch nowadays.
A network call. Because nothing could be better for your code than putting the INTERNET into the middle of your application.
--
The "micro" of microservices has always been ridiculous.
If it can run on one machine then do it. Otherwise you have to deal with networking. Only do networking when you have to. Not as a hobby, unless your program really is a hobby.
Local function calls are infinitely more reliable. The main operational downside with a binary monolith is that a bug in one part of the program will crash the whole thing. Honestly, I still think Erlang got it right here with supervisor trees. Use "microservices". But let them all live on the same computer, in the same process. And add tooling to the runtime environment to allow individual "services" to fail or get replaced without taking down the rest of the system.
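A minimal Go sketch of that idea, assuming nothing beyond the standard library (the service names and restart policy are invented for illustration): each "service" is a goroutine, and a supervisor restarts any that panic without taking down its neighbors.

```go
package main

import (
	"fmt"
	"time"
)

// A "service" is just a named long-running function; failures surface
// as panics or returned errors.
type service struct {
	name string
	run  func() error
}

// supervise keeps a service running: if it panics or returns, it is
// restarted, so one faulty service can't take the process down.
func supervise(s service) {
	go func() {
		for {
			err := runSafely(s.run)
			fmt.Printf("%s exited (%v); restarting\n", s.name, err)
			time.Sleep(time.Second) // naive fixed backoff
		}
	}()
}

// runSafely converts a panic into an ordinary error.
func runSafely(f func() error) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("panic: %v", r)
		}
	}()
	return f()
}

func main() {
	supervise(service{name: "billing", run: func() error {
		time.Sleep(2 * time.Second)
		panic("simulated crash") // only billing dies; emailer keeps going
	}})
	supervise(service{name: "emailer", run: func() error {
		for {
			fmt.Println("emailer: healthy")
			time.Sleep(3 * time.Second)
		}
	}})
	select {} // block forever; the supervisor goroutines do the work
}
```

A real system would add backoff limits, restart budgets, and inter-service mailboxes, which is roughly what Erlang/OTP supervisors give you for free.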
Yes, networking is the bottleneck between processes, while a single machine is the bottleneck for serving end users.
You can run your monolith on multiple machines and round-robin end-user requests between them. Your state is in the DB anyway.
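A hedged sketch of that setup in Go (hostnames, ports, and the two-replica count are invented; in practice this is usually an off-the-shelf load balancer rather than hand-rolled code): a thin round-robin reverse proxy in front of identical copies of the monolith.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	// Two identical monolith replicas; state lives in the DB, so any
	// replica can serve any request.
	var backends []*url.URL
	for _, raw := range []string{"http://app1.internal:8080", "http://app2.internal:8080"} {
		u, err := url.Parse(raw)
		if err != nil {
			log.Fatal(err)
		}
		backends = append(backends, u)
	}

	var next uint64
	proxy := &httputil.ReverseProxy{
		Director: func(r *http.Request) {
			// Pick the next replica in round-robin order.
			b := backends[atomic.AddUint64(&next, 1)%uint64(len(backends))]
			r.URL.Scheme = b.Scheme
			r.URL.Host = b.Host
		},
	}

	log.Fatal(http.ListenAndServe(":8080", proxy))
}
```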
Gonna stop you right there.
Microservices have nothing to do with the hosting or operating architecture.
Per Martin Fowler, who formalized the term, microservices are:
“In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery”
You can have an entirely local application built on the “microservice architectural style.”
Saying they are “often an HTTP resource API” is beside the point.
The problem Twilio actually describes is that they messed up service granularity and their distributed-systems engineering process.
Twilio's experience was not a failure of the microservice architectural style. This was a failure to correctly define service boundaries based on business capabilities.
Their struggles with serialization, network hops, and complex queueing were symptoms of building a distributed monolith, which they finally made explicit with this move. They accidentally built a system with the overhead of distribution but the tight coupling of a single application. Now they are making their architectural foundations fit what they built, likely because they planned it poorly.
The true lesson is that correctly applying microservices requires insanely hard domain modeling, iteration, and meticulous attention to the "Distributed Systems Premium."
Given the quality and the structure, neither approach really matters much. The root problems are elsewhere.
If you must deploy every service because of a library change, you don't have services, you have a distributed monolith. The entire idea of a "shared library" which must be kept updated across your entire service fleet is antithetical to how you need to treat services.
It's likely there's a single source of truth you pull libraries or shared resources from. When team A wants to update the pointer for library-latest to 2.0 but the current reference of library-latest is still 1.0, everyone needs to migrate off of it, otherwise things will break due to backwards compatibility or whatever.
Likewise, if there's a -need- to remove a version for a vulnerability or what have you, then everyone needs to redeploy, sure, but the benefit of centralization likely outweighs the cost and complexity of tracking the patching and deployment process for each and every service.
I would say those systems -are- and likely would be classified as microservices, but from a cost and ease perspective they operate within a shared-services environment. I don't think it's fair to consider this style of design decision a distributed monolith.
By that level of logic, having a singular business entity vs 140 individual business entities for each service would mean it's a distributed monolith.
No, this misses one of the biggest benefits of services: you explicitly don't need everyone to upgrade library-latest to 2.0 at the same time. If you do find yourself in a situation where you can't upgrade a core library (e.g. SQLAlchemy or Spring), or the underlying Python/Java/Go/etc. runtime, without requiring updates to every service, you are back in the realm of a distributed monolith.
However, in the world we live in, people choose to point at latest to avoid manual work, trusting that other teams did due diligence when updating to the latest version.
You can point to a stable version in the model I described and still be distributed and a microservice, while depending on a shared service or repository.
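As a concrete illustration of that pinned-version model (module paths, library name, and versions are all hypothetical), each service carries its own manifest and upgrades on its own schedule:

```
// service-a/go.mod: service A has already moved to the new major version
module example.com/service-a

go 1.22

require github.com/acme/shared-lib/v2 v2.0.1

// service-b/go.mod: service B stays pinned until its team is ready
module example.com/service-b

go 1.22

require github.com/acme/shared-lib v1.0.4
```

The "pointer to latest" failure mode only appears when every service resolves the library through one mutable shared reference instead of a version it pins itself.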
Can you imagine if Google could only release a new API if all their customers simultaneously updated to that new API? You need loose coupling between services.
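A hedged sketch of what that loose coupling looks like at the interface level (the paths and payloads are invented): old and new versions of an endpoint served side by side, so existing consumers migrate on their own schedule.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()

	// v1 keeps serving existing consumers unchanged...
	mux.HandleFunc("/v1/users", func(w http.ResponseWriter, r *http.Request) {
		json.NewEncoder(w).Encode(map[string]string{"name": "Ada"})
	})

	// ...while v2 evolves the contract for callers that opt in; v1 is
	// only retired once its consumers have migrated.
	mux.HandleFunc("/v2/users", func(w http.ResponseWriter, r *http.Request) {
		json.NewEncoder(w).Encode(map[string]any{
			"name":  "Ada",
			"email": "ada@example.com",
		})
	})

	log.Fatal(http.ListenAndServe(":8080", mux))
}
```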
OP is correct: you are now in a weird hybrid, a monolith application that is deployed piecemeal but can't really be deployed that way because of tightly coupled dependencies.
Be ready for a blog post in ten years about how they broke apart the monolith into loosely coupled components because it was too difficult to ship things with a large team and actually have them land in production without getting reverted due to an unrelated issue.
The logical problem you’re running into is exactly why microservices are such a bad idea for most businesses. How many businesses can have entirely independent system components?
Almost all “microservice” systems in production are distributed monoliths. Real microservices are incredibly rare.
A mental model for true microservices is something akin to depending on the APIs of Netflix, Hulu, HBO Max and YouTube. They’ll have their own data models, their own versioning cycles and all that you consume is the public interface.
Do you depend on a cloud provider? Not a microservice. Do you depend on an ISP for Internet? Not a microservice. Depend on humans to do something? Not a microservice.
Textbook definitions and reality rarely coincide. Rather than taking a fundamentalist approach that leads nowhere, recognize that for all intents and purposes, what I described is a microservice, not a distributed monolith.
Categories and ontologies are real things in the world. If you create one thing and call it something else, that doesn't mean the definition of "something else" should change.
By your definition, it would be impossible to create a state based on coherent specifications, because most states don't align with the specification.
We know for a fact that's wrong via functional programming, state machines, and formal verification.
That is fundamentally incorrect. As presented in my other post, you can correctly use the shared repository as a dependency and refer to a stable version rather than a dynamic one, which is where the problem arises.
For example, a library with a security vulnerability would need to be upgraded everywhere regardless of how well you’ve designed your system.
In that example the monolith is much easier to work with.
The npm supply chain attacks were only an issue if you weren't using lock files. In fact they were a great example of why you shouldn't blindly upgrade to the latest packages as soon as they are available.
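For concreteness (the version range is invented): a range like ^4.2.0 is re-resolved to the newest matching release on a lockfile-less install, while a committed lockfile freezes the exact tree.

```sh
# No lockfile: "^4.2.0" in package.json re-resolves on every fresh
# install, so a freshly compromised 4.x release can slip in.
rm -f package-lock.json && npm install

# Committed lockfile: npm ci installs exactly the recorded tree and
# fails if package.json and package-lock.json disagree.
npm ci
```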
I'm referring to the all-hands-on-deck nature of responding to security issues, not the best practice. For many, the npm issue was an all-hands-on-deck event.
Microservices as a goal is mostly touted by people who don't know what the heck they're doing - the kind of people who tend to mistakenly believe blind adherence to one philosophy or the other will help them turn their shoddy work into something passable.
Engineer something that makes sense. If, once you're done, whatever you've built fits the description of "monolith" or "microservices", that's fine.
However if you're just following some cult hoping it works out for your particular use-case, it's time to reevaluate whether you've chosen the right profession.
Putting this anywhere near "engineering" is an insult to even the shoddiest, OceanGate-level engineering.
This is the problem with the undefined nature of the term `microservices`. In my experience, if you can't develop in a way that allows you to deploy all services independently and without coordination between services, it may not be a good fit for your org's needs.
In the parent's SOA (v2), what they described is a well-known anti-pattern [0]:
Application Silos to SOA Silos
* Doing SOA right is not just about technology. It also requires optimal cross-team communications.
Web Service Sprawl
* Create services only where and when they are needed. Target areas of greatest ROI, and avoid the service sprawl headache.
If you cannot, for technical or political reasons, retain the ability to independently deploy a service (no matter whether you choose to actually deploy independently), you will not gain most of the advantages that were the original selling point of microservices, which had more to do with organizational scaling than technical concerns. There are other reasons to consider the pattern, especially given the tooling available, but it is simply not a silver bullet.
And yes, I get that not everyone is going to accept Chris Richardson's definitions[1], but even in more modern versions of this, people always seem to run into the most problems because they try to shove it in a place where the pattern isn't appropriate, or isn't possible.
But kudos to Twilio for doing what every team should be doing: reassessing whether previous decisions are still valid and moving forward with new choices when they aren't.
[0] https://www.oracle.com/technetwork/topics/entarch/oea-soa-an... [1] https://microservices.io/post/architecture/2022/05/04/micros...
Doing it for organizational scaling can lead to insular vision and a turf-defensive attitude, as teams are rewarded on the individual service's performance rather than the complete product's. Also, refactoring services now means organizational refactoring, so the friction to refactor is massively increased.
I agree that patterns should be used where most appropriate, instead of blindly.
What pains me is that a term like "Cloud-Native" has been usurped to mean microservices. Did Twilio just stop having a "Cloud-Native" product by shipping a monolith? According to the CNCF, yes. According to reason, no.
Small teams run on shared context. That is their superpower. Everyone can reason end-to-end. Everyone can change anything. Microservices vaporize that advantage on contact. They replace shared understanding with distributed ignorance. No one owns the whole anymore. Everyone owns a shard. The system becomes something that merely happens to the team, rather than something the team actively understands. This isn’t sophistication. It’s abdication.
Then comes the operational farce. Each service demands its own pipeline, secrets, alerts, metrics, dashboards, permissions, backups, and rituals of appeasement. You don’t “deploy” anymore—you synchronize a fleet. One bug now requires a multi-service autopsy. A feature release becomes a coordination exercise across artificial borders you invented for no reason. You didn’t simplify your system. You shattered it and called the debris “architecture.”
Microservices also lock incompetence in amber. You are forced to define APIs before you understand your own business. Guesses become contracts. Bad ideas become permanent dependencies. Every early mistake metastasizes through the network. In a monolith, wrong thinking is corrected with a refactor. In microservices, wrong thinking becomes infrastructure. You don’t just regret it—you host it, version it, and monitor it.
The claim that monoliths don’t scale is one of the dumbest lies in modern engineering folklore. What doesn’t scale is chaos. What doesn’t scale is process cosplay. What doesn’t scale is pretending you’re Netflix while shipping a glorified CRUD app. Monoliths scale just fine when teams have discipline, tests, and restraint. But restraint isn’t fashionable, and boring doesn’t make conference talks.
Microservices for small teams is not a technical mistake—it is a philosophical failure. It announces, loudly, that the team does not trust itself to understand its own system. It replaces accountability with protocol and momentum with middleware. You don’t get “future proofing.” You get permanent drag. And by the time you finally earn the scale that might justify this circus, your speed, your clarity, and your product instincts will already be gone.
-DHH
I managed a product where a team of 6–8 people handled 200+ microservices. I've also managed other teams at the same time on another product where 80+ people maintained a monolith.
What did I learn? Both approaches have pros and cons.
With microservices, it's much easier to push isolated changes with just one or two people. At the same time, global changes become significantly harder.
That's the trade-off, and your mental model needs to align with your business logic. If your software solves a tightly connected business problem, microservices probably aren't the right fit.
On the other hand, if you have a multitude of integrations with different lifecycles but a stable internal protocol, microservices can be a lifesaver.
If someone tries to tell you one approach is universally better, they're being dogmatic/religious rather than rational.
Ultimately, it's not about architecture, it's about how you build abstractions and approach testing and decoupling.
These problems are effectively solved on BEAM, the JVM, Rust, Go, etc.
TL;DR they have a highly partitioned job database, where a job is a delivery of a specific event to a specific destination, and each partition is acted upon by at-most-one worker at a time, so lock contention is only at the infrastructure level.
In that context, each worker can handle a similar balanced workload across destinations, with a fraction of production traffic, so a monolith makes all the sense in the world.
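A rough sketch of the pattern they describe, in Go (the job shape, partition count, and hashing are guesses, not Segment's actual design): one goroutine owns each partition, so work within a partition is serialized without locks.

```go
package main

import (
	"fmt"
	"sync"
)

// A job is one delivery of a specific event to a specific destination.
type job struct {
	destination string
	event       string
}

// partitionFor routes a destination to a fixed partition so all of its
// jobs land in the same serial queue.
func partitionFor(destination string, numPartitions int) int {
	h := 0
	for _, c := range destination {
		h = h*31 + int(c)
	}
	if h < 0 {
		h = -h
	}
	return h % numPartitions
}

func main() {
	const numPartitions = 4
	partitions := make([]chan job, numPartitions)
	var wg sync.WaitGroup

	// Exactly one goroutine consumes each partition: jobs within a
	// partition are processed serially, so there is no lock contention
	// at the application level.
	for i := range partitions {
		partitions[i] = make(chan job, 16)
		wg.Add(1)
		go func(id int, jobs <-chan job) {
			defer wg.Done()
			for j := range jobs {
				fmt.Printf("partition %d: deliver %q to %s\n", id, j.event, j.destination)
			}
		}(i, partitions[i])
	}

	// A slow or failing destination only ever backs up its own partition.
	for _, j := range []job{
		{destination: "hooks.example.com", event: "user.created"},
		{destination: "api.partner.io", event: "invoice.paid"},
	} {
		partitions[partitionFor(j.destination, numPartitions)] <- j
	}

	for _, p := range partitions {
		close(p)
	}
	wg.Wait()
}
```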
IMO it speaks to the way in which microservices can be a way to enforce good boundaries between teams... but the drawbacks are significant, and a cross-team review process for API changes and extensions can be equally effective and enable simplified architectures that sidestep many distributed-system problems at scale.
Plus, it's ALWAYS easier/better to run v2 of something when you completely rewrite v1 from scratch. The article could have just as easily been "Why Segment moved from 100 microservices to 5" or "Why Segment rewrote every microservice". The benefits of hindsight and real-world data shouldn't be undersold.
At the end of the day, write something, get it out there. Make decisions, accept some of them will be wrong. Be willing to correct for those mistakes or at least accept they will be a pain for a while.
In short: No matter what you do the first time around... it's wrong.