The common bits can eventually be moved into a package dependency and referenced. Separate features if they become large enough can also be moved into separate packages, but part of the same monolithic codebase.
In some cases, it can be easier still to just ship the entire runtime as-is (without any additional work to enable modularity) and simply route different endpoints (e.g. https://feature1.domain.com -> node set 1, https://feature2.domain.com -> node set 2), so you still have the option to monitor and scale the features differently based on their load profile and needs. This works great as long as cold starts are not a big concern (which would add a requirement to minimize package size).
I find this particularly easy on AWS, especially when deploying with Copilot CLI[1], because it makes it relatively easy to route different sub-domains to different target groups. Now you have a single container image that just gets scaled differently by route (e.g. a high-volume feature gets bigger nodes and a dedicated route in Route 53).
I find some teams have trouble thinking this way because devs are often not involved enough in deploy-time considerations. For more involved app-level partitioning of modules, I have a practical example in C#[2] that would work equally well with something like Nest.js (or Elysia or Hono) by simply using environment variables to declare a "feature role" for the instance and dynamically enabling/disabling feature modules.
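Roughly, the idea looks like this in TypeScript (a framework-agnostic sketch; the FEATURE_ROLE variable and the module names are invented for illustration):

    // Hypothetical feature modules; each one knows how to register its own routes.
    interface FeatureModule {
      name: string;
      register(route: (path: string, handler: () => string) => void): void;
    }

    const allModules: FeatureModule[] = [
      { name: "billing", register: (route) => route("/billing", () => "billing ok") },
      { name: "reports", register: (route) => route("/reports", () => "reports ok") },
    ];

    // FEATURE_ROLE picks which modules this instance enables, e.g. "billing" or "billing,reports".
    const roles = (process.env.FEATURE_ROLE ?? "all").split(",");
    const enabled = roles.includes("all")
      ? allModules
      : allModules.filter((m) => roles.includes(m.name));

    // Stand-in for the real framework's route registration (Nest.js, Elysia, Hono, ...).
    const route = (path: string, _handler: () => string) => console.log(`registered ${path}`);
    enabled.forEach((m) => m.register(route));

Two instances of the same image can then be pointed at different target groups, one started with FEATURE_ROLE=billing and another with FEATURE_ROLE=reports.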
[0] https://www.jimmybogard.com/vertical-slice-architecture/
The devil is in the details of how you pull something like this off. At the end of the day it boils down to how you enforce that your team does the right thing. You can have a single person who enforces standards with an iron fist, but this doesn't scale. You can teach everyone how this should work, but you're going to experience drift over time as people come and go. Or you can enforce it using technology and automation.
In the case of the first choice, it's going to restrict how big your team can get and will end up eating all of that one person's time.
In the case of the second choice, a combination of the tragedy of the commons and regression to the mean will degrade the system into spaghetti code.
For the third scenario, language choice matters a lot. In Java, with a multi-module Maven build, you can set up Maven to forbid imports of specific module types, effectively letting you mark modules as private or public. In Python you can't do any of this.
In the end, AWS only happened because of Jeff Bezos’ infamous “all inter-team communication now goes over HTTP, no exceptions, or you’re fired” email.
The decision to prefer modules whenever they do the job, and defer to microservices only when they don’t, seems like the kind of mantra that needs to come from the CTO and be made part of the company culture’s DNA.
This is of course still possible with a microservices architecture, but the barrier to changing a REST contract/API is usually much higher, and people think a lot more about what is being passed across the interface since that data is going to be sent over a wire.
Theoretically there is no difference, but it's just far easier to slip when it's one codebase and all it takes is someone a little too "LGTM"-happy to let it through.
My guess is it’s a top level folder which shows the cross module deps.
We spun out the specialized tasks (data analysis and PDF generation key among them) to native-compiled binaries or containerized packages like Gotenberg, started moving data around between modules via JSON, isolated the legacy monoliths to containers, unified on our now-modularized PHP backend, and have been working on updating or replacing any other pieces with new modules that can serve the task better. Our clients and non-engineering employees get antsy, but as a smaller company and a smaller programming team, we simply cannot maintain multiple 20-year-old codebases with near-total overlap. It makes no sense now, it didn't make sense when they were each created.
Coming up with a greenfield microservice design with arbitrary responsibilities and intercommunication feels so stupid to me. Why not build the thing as a monolith and split parts out when you actually have scaling problems, instead of solving theoretical problems? Development velocity is going to be way higher and it gives developers a chance to discover problems without having to deal with the mental overhead of shipping N services all at once.
The value of fast iteration cannot be overstated. Build the minimum viable version of the project first, then stress test it and break parts out when needed. This is much easier if you write modular code.
I sometimes think programmers are the last people who should be writing software.
The personality type that likes writing code is the exact type that likes tinkering around the edge and working on hypotheticals instead of addressing the problem at hand.
Good programmers pride themselves on striking compromises and shipping a smaller thing sooner and they love iterating. The ones that are working on hypotheticals unchecked are not bad--just nobody has educated them.
The tooling is good enough to scale out, but microservices are mostly beneficial for organizational scaling.
The value for other concerns is mostly situational.
Unless you have some very specific need, this is nothing new: I was horizontally scaling the app layer in 1996, even in true monoliths.
K8s can be too much, but even back when tooling and costs forced us to segment by technology layers, you never wanted to try two-node failovers.
If you are trying to share state at the app level, you would most likely reduce availability, because of split brain etc…
The persistence layer was bad enough with shared quorum drives, heartbeat networks etc…
It sounds like you are just spinning up two app servers, or are you talking about active/passive or active/active two-node clusters?
That is vertically scaling in the way I understand the term, not just scaling out two instances.
Shipping a monolith worked on by 250+ devs in 30 teams is slow due to the coordination needed as compared to 30 teams shipping 50-75 services.
Vertical scaling can take you really, really far operationally.
But again, most systems will never be there ;)
Then, uncoincidentally, they started crying about how they couldn't find work anymore.
- "I'm struggling to find work and need to make money. What can I do?"
- "Learn to code, good buddy! Software engineering solves all problems."
2025:
- "I learned to code and am still struggling to find work and need to make money."
- "You shouldn't be doing software engineering."
This happens at all sorts of companies that are not FAANGs, and don't have "insane scale". There's an extremely large spectrum between the web site for Joe's Coffee Bar and Amazon. On commodity hardware, you hit issues with scaling up a monolith long before you reach FAANG level or "insane scale".
I've worked with multiple startups that have hit scaling limits with their monoliths. Inevitably, dealing with that is a huge problem because the monolith was developed with few resources under heavy time pressure. Modularity is lacking, breaking it up is difficult. Individual devs are often inclined to say that's just a skill issue, and that may have some truth to it, but managing those skill issues is a big part of what corporate software development is about.
This can have a huge impact on a company's funding, ability to deliver new features, ability to scale development, and of course ability to scale the user base. Typically, by the time they hit that wall, scaling the monolith horizontally is not a great option, because it wasn't designed to support that.
It's often been observed that microservices are primarily an organizational tool, and that's true. But organization is critical if you have multiple development teams.
That doesn't necessarily mean every app should consist of hundreds of tiny microservices. But there can be enormous benefits from implementing an app from the start as independent services based on its natural divisions between modules.
I have done a bit of work implementing a prototype framework for coding concepts using TypeScript [2], and it has worked beautifully for the Software Design class at MIT that Daniel teaches. I think the newest iteration of the class this semester uses a different approach to coding concepts, but it's still a research space.
[0] https://essenceofsoftware.com/tutorials/
[1] https://people.csail.mit.edu/dnj/
[2] https://61040-fa24.github.io/pages/concept-implementations.h...
This is because web and SQL are both dominant platforms for enterprise and are both allergic to modularization. Everything is global, everything is shared. Isolation and private interfaces are either impossible or late-added afterthoughts on a global-first platform.
So faced with this, it made sense that the only way to provide modularization was to use isolated computers that only talk over sluggish HTTP.
That's not actually true of SQL-based RDBMSs (except embedded ones like SQLite), and hasn't been for longer than the median developer has been alive, but it is probably fairly accurate of the average app developer's understanding of SQL.
What is the problem with monoliths? Nothing. Until there is. The problem with monoliths is when you have a million-LoC Java application that is on Java 6, will take months of work to get up to date, takes 20 minutes to load on a dev machine, starts to fail because it's getting too big for a dev machine to handle, can't bring in any new dependencies because of how old the Java version is, and has an old bespoke Ant + Maven + Jenkins + Bash + Perl build-and-deploy system that has been built up over the last 30 years.
So what do you do? Breaking off pieces of the code into microservices that can run on a new Spring Boot and a nicer, newer IaC setup is an easy win. Sure, you basically have a microlith, but it increases your dev velocity.
I think monolith issues are typically a symptom of a few other things: 1. accumulated deferred maintenance and tech debt, 2. inadequate developer tooling, 3. inadequate CI/CD tooling, and 4. rarely, actual scale, and only once you really start to hit the size of Google, Uber, Facebook, etc.
But you don't need to do this on your dev machine. Nearly a decade ago at GrubHub we already had a setup that allowed us to run a few microservices under development locally, while relegating the rest to the staging environment that just runs every microservice, like prod, but in small quantities.
A JVM-based microservice used to take, say, 16-20 MiB of RAM; a 50-MiB service was considered a behemoth that may need slimming down. You could run quite a number of 20-MiB containers on a laptop with 16 GiB, along with all your dev setup, some local databases, etc.
But, if you have a modular monolith, it will be easy to split it up into separate services, whether microservices or just services. It will be a good test to see how modular your system/monolith really is.
Modularization is what's primary here and gives you flexibility, not having one vs. multiple units of deployment.
Microservices is where you treat each team of people as their own independent business unit. It models services found in the macro economy and applies the same patterns in the micro economy of a single organization. Hence the name.
The clearest and probably simplest technical road to achieving that is to have each team limit exposure to their work to what can be provided over a network, which is I guess how that connotation was established. But theoretically you could offer microservices with, for example, a shared library or even a source repository instead.
If one endpoint needs to scale to handle 10x more traffic, it's woefully inefficient to 10x your whole cluster.
Ideally you write the code as services/modules inside a monolith, imo. Then you can easily run those services as separate deployments later down the line if need be.
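As a sketch of what that can look like (TypeScript; the InvoiceService name, URL and endpoint are invented for illustration, not tied to any particular framework):

    // A feature lives behind a plain interface; callers don't know (or care)
    // whether it runs in-process or behind a network boundary.
    interface InvoiceService {
      total(orderId: string): Promise<number>;
    }

    // Today: in-process implementation inside the monolith.
    class LocalInvoiceService implements InvoiceService {
      async total(orderId: string): Promise<number> {
        return orderId.length * 10; // stand-in for real logic
      }
    }

    // Later, if it gets split out: same interface, HTTP under the hood.
    class RemoteInvoiceService implements InvoiceService {
      constructor(private baseUrl: string) {}
      async total(orderId: string): Promise<number> {
        const res = await fetch(`${this.baseUrl}/invoices/${orderId}/total`);
        const body = await res.json();
        return body.total;
      }
    }

    // The rest of the codebase only ever sees InvoiceService.
    async function checkout(svc: InvoiceService, orderId: string) {
      console.log(`order ${orderId} total:`, await svc.total(orderId));
    }

    checkout(new LocalInvoiceService(), "abc123");

The point is that swapping LocalInvoiceService for RemoteInvoiceService is a one-line change at the composition root, not a refactor of every caller.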
There's not one silver bullet. It's not 100% monoliths, or 100% microservices for all.
Learning from the things we don't do, or haven't done yet, in ways we haven't yet thought of, also helps expand one's skills.
This is because clever architecture will always beat clever coding.
Like over the network versus code running on the same machine? Because that should already be distributed, unless you can really fit your whole needs on a single machine.
We have an app with two different deployments. One is serving HTTP traffic, and the other is handling kafka messages. The code is exactly the same, but they scale based on different metrics. It works fine.
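A rough sketch of that kind of setup (assuming an APP_ROLE environment variable to pick the role and kafkajs for the consumer side; broker address, topic and names are placeholders):

    import { createServer } from "node:http";
    import { Kafka } from "kafkajs";

    // Shared business logic used by both deployment roles.
    function handleEvent(payload: string): string {
      return `processed ${payload}`;
    }

    const role = process.env.APP_ROLE ?? "http";

    if (role === "http") {
      // Deployment 1: serves HTTP traffic; scaled on request rate / CPU.
      createServer((req, res) => {
        res.end(handleEvent(req.url ?? ""));
      }).listen(8080);
    } else {
      // Deployment 2: Kafka consumer; scaled on consumer lag.
      const kafka = new Kafka({ clientId: "app", brokers: ["localhost:9092"] });
      const consumer = kafka.consumer({ groupId: "workers" });
      (async () => {
        await consumer.connect();
        await consumer.subscribe({ topics: ["events"] });
        await consumer.run({
          eachMessage: async ({ message }) => {
            console.log(handleEvent(message.value?.toString() ?? ""));
          },
        });
      })().catch(console.error);
    }

The shared handleEvent is where the actual business logic lives, so both roles stay in lockstep as the code evolves while each scales on its own metric.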
Your design goal should not be "create the smallest service you can to satisfy the 'micro' label". Your design goal should be to create right-sized services aligned to your domain and organization.
The deployment side is of course a red herring. People can and do deploy monoliths with multiple deployments and different endpoints. And I've seen numerous places do "microservices" which have extensive shared libraries where the bulk of the code actually lives. Technically not a monolith - except it really is, just packaged differently.
If you've got "microservices" but every dev still has to run a dozen kubernetes pods to be able to develop on any part of it, then I'm pretty sure you ended up with the worst of both worlds.
A place I worked at years ago did what I effectively called "nano-services".
It was as if each API endpoint needed its own service. User registration, logging in, password reset, and user preference management were each their own microservice.
When I first saw the repo layout, I thought maybe they were just using a bunch of Lambdas that would sit behind an AWS API Gateway, but I quickly learned the horror as I investigated. To make it worse, they weren't using Kubernetes or any sort of containers for that matter. Each nanoservice was running on its own EC2 instance.
I swear the entire thing was designed by someone with AWS stock or something.
- a million LoC Java application that is on Java 6 -> Congrats, now you have two half-a-million-LoC Java applications on two different Java versions. And if the setup is like most apps, you will likely need both running to debug most issues, because most issues happen at the system-to-system interface.
- take 20 minutes to load on a dev machine -> That is fair enough; I have only ever seen an app that takes that long on a modern machine once. Most shops doing microservices don't have apps that big.
- has an old bespoke Ant + Maven + Jenkins + Bash + Perl build and deploy system that has been built up over the last 30 years -> you can have the same problem on a micro service architecture, you can actually have that problem multiplied by 10 and now you can spend a whole sprint updating dependencies. Fun!
- Breaking off pieces of the code into microservices that can run on a new Spring Boot and a nicer, newer IaC setup is an easy win -> You conveniently forget to mention the additional team needed to fix issues related to system-to-system communication.
> you can have the same problem on a micro service architecture, you can actually have that problem multiplied by 10 and now you can spend a whole sprint updating dependencies. Fun!
Definitely true... Not to mention when your entire orchestration becomes too big to run anything locally, that's where the real fun and complexity starts. There's definitely such a thing as too many micro-services, or too micro for that matter...
Wrong. Well-architected services would have a good interface, and problems rarely span multiple services.
>- has an old bespoke Ant + Maven + Jenkins + Bash + Perl build and deploy system that has been built up over the last 30 years -> you can have the same problem on a micro service architecture, you can actually have that problem multiplied by 10 and now you can spend a whole sprint updating dependencies. Fun!
This is so un-nuanced. The point is that if you have decomposed the codebase into smaller ones, migrations are easier.
He's saying instead of using "microservices" to modularize your shit, you can use folders to modularize your shit. Folders? Files? When someone told me that I could use folders to modularize stuff instead of entire microservices, the concept was so foreign to me that it opened up a whole new world.
> “we have users, so let’s do a user service. Then we have files, so let’s do a file service”.
Agreed that this is not a useful heuristic for deciding how many services you need.
However, having a monolith does not automatically mean you abandon all addressing of tech debt. I worked on a large monolith that went from Java 7 to Java 21; it was never stuck, had excellent CI tooling, including heavy integration/functional testing, and a good one-laptop DX, where complex requests could be stepped through in your IDE end to end in one process.
Your argument does not invalidate a service-oriented approach with large (non-micro) services. You can have a large shared code base (e.g. domain objects, authentication and authorization logic, scheduling and job execution logic) that consists of modular service objects that you can compose into one or three or four larger services. If I had to sell that to the microservice crowd, I would call them "virtualized microservices", combined into one or several deployment units.
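For example (a TypeScript sketch; the module names and the DEPLOYMENT_UNIT variable are invented), the same shared modules can be composed into one or several deployment units at startup:

    // Shared, modular service objects from one code base (names are illustrative).
    const authModule = { start: () => console.log("auth endpoints up") };
    const schedulingModule = { start: () => console.log("job scheduler up") };
    const billingModule = { start: () => console.log("billing endpoints up") };

    // "Virtualized microservices": the same modules composed into one or several
    // deployment units; which composition runs is decided at startup.
    const units: Record<string, Array<{ start: () => void }>> = {
      everything: [authModule, schedulingModule, billingModule],
      api: [authModule, billingModule],
      worker: [schedulingModule],
    };

    const chosen = process.env.DEPLOYMENT_UNIT ?? "everything";
    (units[chosen] ?? units.everything).forEach((m) => m.start());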
In fact, if I were to start a new project today, I would keep it a monolith with internal modularity until there was a clear value to break out a 2nd service.
Also, it is completely valid to break out into microservices things that make absolute sense and are far detached from your normal services. You can run a monolith + a microservice when it makes sense.
What doesn't make sense is microservices-by-default.
The danger of microservices-by-default is that you are forced to do big design up-front, as refactoring microservices at their network boundaries is much more difficult than refactoring your internal modules.
Also, microservices-by-default means you now have many more network boundaries and security boundaries to worry about. You now have to threat-model many microservices because of the significantly increased number of boundaries and network surface. You are forcing your team to deal with significantly more distributed-computing concerns right away: inter-service boundaries become network calls instead of in-process calls, requiring more careful design that has to account for latency, bandwidth and failure. You now have to worry about the availability and latency of many services, and risk a weakest-link-in-the-chain service bringing your end-user availability down. You waste considerably more computing resources by not being able to share memory or CPU across all these services. You will end up writing microservice caches that serve over the network what could have been an in-process read. Or, if you're hardcore about having stateless microservices (another dogmatic delusion), you will now be standing up Redis or Memcached instances for your caches, to be accessed over the network.
I’m not sure why that’s your first instinct as opposed to splitting up your monolith into multiple Java packages that only have a downstream dependency relationship. (This is the second option in the article.) Spinning up microservices is hardly an easy win compared to this approach.
I was with you until this part.
The correct answer is:
> Breaking off pieces of the code into microservices that no longer have Spring Boot as a dependency so you are not pulling in unknown numbers of unneeded dependencies that could have an unexpected impact on your application at surprising times, and forced version upgrades for security patches that also make major semantic breaking changes.
The purpose of OOP was to replicate the benefits of micro-services in single-user environments. A class corresponds to a service type, and an object to a physical instance of the service.
So why did monoliths/modules fail? Some pretty simple issues: incomplete isolation between the modules, and memory corruption and performance issues that easily propagate between the "isolated" modules.
But the main killer is compile times. Monoliths/module based programs require massive compile times that grow quickly with the size of the program.
Each individual module is fast to compile.
Micro-services then came along (Distributed computing).
OOP was then invented to replicate the benefits of micro-services in single-user environments.
DCOM (MS) replicated some of that but never really caught on. You'll find quite a bit of that kind of stuff in OSGI (Java) that was somewhat popular in the early 2000s. And there was of course the whole SOAP / Web Services / SOA mess that people got into around the same time. Docker emerged early 2010s and Kubernetes soon followed and fast forward ten years and we're neck deep into the same shit all over again via micro services.
Part of this is just stuff you kind of need if you want to do a distributed system (like service discovery, auditing, security, etc.). And part of that is a lot of complexity resulting from that. Especially when you are distributing for organizational rather than technical reasons (Conway's law). Which is a major driving force in larger organizations.
The point here is that none of this is new. Also, this stuff did not really fail but just seamlessly morphed into the next thing. People keep on making the mistake of believing that buzzword compliance makes things better/easier when in reality you are paying a largish price in terms of complexity and overhead for not that much gain. There's a lot of wheel reinvention happening here as well. And a lot of the exact same naivety being projected onto the latest and greatest framework or thing. The same type of people that are cheerleading microservices now would have been cheerleading web services twenty years ago. In some cases these literally are the same people. Or companies. There's a reason IBM is all over this stuff, for example.
This is not even remotely true, in any conceivable sense. But I'd love to hear what you were thinking of.
I've discovered that their taxonomy at the opening of the article ("single unit of deployment - monolith, multiple units of deployment - microservices/services") is a bit optimistic. Because, when it comes to making things unnecessarily complicated, human ingenuity knows no limits.
I've now worked on a project that somehow managed to be monolithic despite having multiple units of deployment. We weren't good about contract/API change management, so in practice it was rare that you could separately deploy "independent" services.
And I've also worked on a project that had a single unit of deployment but was somehow still more microservice-like. Everything was packaged into a single giant Docker image that contained the binaries for all the services. (You'd pick which one a container was running with run-time configuration.) But they were well-modularized and services from different versions of the image could talk to each other just fine, so in practice working on it often felt more like successful microservice implementations in development and production. It's just that getting things from development to production was an unholy nightmare because the CI pipeline for that "mono-image" was such a monstrosity.
More academic and language neutral introduction here if that’s your thing: https://arxiv.org/pdf/2404.09357
Capability-based programming could go a long way toward closing that gap, though.
1. Everything is a monolith. Frontend, backend, dataplane, processing, whatever: it's all one giant, tightly coupled vertically-scaled ball of mud. (This is insane.)
2. Everything is a monolith, but parts are horizontally scaled. Imagine a big Flask app where there are M frontend servers, and N backend async task queue processors, all running the same codebase but with different configurations for each kind of deployment. (This is perfectly reasonable.)
3. There are a small number of separate services. That frontend Flask server talks to a Go or Rust or Node or whatever backend, each appropriate to the task at hand. (This is perfectly reasonable.)
4. Everything is a separate service. There are N engineers and N+50% servers written in N languages, and a web page load hits 8 different internal servers that do 12 different things. The site currently handles 23 requests per day, but it's meant to vertically scale to Google size once it becomes popular. Also, everything is behind a single load balancer, but the principal engineer (who interned at Netflix) handwaves it away as "basically infinitely scalable". (This is insane.)
These conversations seem to devolve into fans of 1 and 4 arguing that the other is wrong. People in 2 and 3 make eye contact with each other, shrug, and get back to making money.
You can get really, really far with a decent machine if you mind the bottlenecks. Getting into swap? Add RAM. Blocked by IO? Throw in some more NVMe. Any reasonable CPU can process a lot more data than it's popular to think.
In general, the vast number of small shops chugging away with a tractably sized monolith aren't really participating in the conversation, just idly wondering what approach they'd take if they suddenly needed to scale up.
The corollary to that is, it's amazing how far you can push vertical scaling if you're mindful of how you use memory. I've seen people scale single-process, single-threaded systems multiple orders of magnitude past the point where many people would say scale-out is an absolute necessity, just by being mindful of things like locality of reference and avoiding unnecessary copying.
Resist the urge to roll out that Prolog-driven inference service that your VP Eng vibe coded after reading an article about cool and strange programming languages.
The key problem of developing a large system is allowing many people to work on it, without producing a gridlock. A monolith, by its nature, produces a gridlock easily once a sufficient number of people need to work on it in parallel. Hence modules, "narrow waist" interfaces, etc.
But the key thing is that "you ship your org chart" [1]. Modules allow different teams to work in parallel and independently, as long as the interfaces are clearly defined. Modules deployed separately, aka services, allow different teams to ship their results independently, as long as they remain compatible with the rest of the system. Solving these organizational problems is much more important for any large company than overcoming any technical hurdles.
1. modularity - yes - but even better than what the article describes, there is little or no ability to cheat the modularity - the microservice has an api as a contract and is isolated in execution so there’s no trivial way to go around the api.
2. Independently committable/deployable. One reason to consider microservices is organizational: maybe you don't want to or can't share a repo with another team.
Now of course microservices have lots of downsides and are not a panacea and may be a bad fit for your project.
As long as your bottom dependency is fixed, you cannot progress!
Sure, there are a lot of startups that start off and don't have much traffic. But I don't like that we dismiss real-world _productive_ applications just because not many companies hit that scale; most productive applications do hit that scale!