For example, we have an authentication microservice at work. It makes sense for it to live outside the main application: it's used in multiple different contexts, the service boundary lets it be more responsive to changes, upgrades, and security fixes than if it were part of the main app, and it deploys differently than the application. It also adds enough intentional friction that we don't accidentally put logic where it doesn't belong as part of the user authentication process. It has helped keep the code focused on only primary concerns.
That said, as is so often the case, you can't apply any of these patterns blindly. A good technical leader should push back when the benefits don't actually exist. The real issue is a lack of experience making technical decisions on their merits.
This includes high-level executive leaders in the organization. At a startup especially, they are often still involved in many technical decisions. You'd be surprised (well, maybe not!) how often the highest leadership at a startup will mandate things like using microservices and refuse to listen to anything that runs counter to it.
> It also adds enough intentional friction that we don't accidentally put logic where it doesn't belong as part of the user authentication process.
Preventing misplaced logic is a matter of good code structure, well defined software development processes and team discipline - not something that requires splitting out a separate microservice, and definitely not something that you want to solve at the system architecture level.
Perhaps, yes. Every situation should be evaluated on its merits. Your reply came across as assuming we didn't try other solutions first - we absolutely did. A microservice is the best solution to the problems we needed solved in this case, even better than a modular monolith with clear interfaces.
> without the cognitive and operational tax microservices impose
When done correctly, I don't think there is a tax. Most operational questions should be automated away once discovered. The only 'tax' is that it lives separately from the larger application and is deployed independently, and in practice I haven't seen that add any notable overhead.
>Preventing misplaced logic is a matter of good code structure, well defined software development processes and team discipline
All true, and a microservice can aid all of these things too, though in my opinion it isn't the solution you should reach for when you are solving for these things and these things alone. That said, I and others have observed time saved on enforcing discipline around this issue once we separated the code from the main application. I can't deny that it has been a good thing, because it has. Saying otherwise would leave out information about the benefits we've experienced, and I see no reason to do that.
All told, completely dismissing the value of microservices as a potential solution is no different from completely dismissing other solutions in favor of microservices. Things have their place, there are pros and cons to them, and they should be evaluated on their merits for the situation.
You may find you never implement microservices, or implement very few, or perhaps the needs of the organization are such that it's a pattern used most of the time. But the technical merits of doing so - as with any decision of this nature, not limited to microservices - should have a backing justification that includes why other solutions don't fit.
I completely agree. But this contradicts, a little, your original comment that caught my eye:
> In my experience, a good rule of thumb[0] is if there are actual benefits from being a standalone service.
A rule of thumb is, by nature, a generalization: it simplifies decision making through heuristics. Benefits, on the other hand, are always subjective; they can only be interpreted in a given context.
That's a "large" organization problem. But large is actually, not that big (about 5-10 scrum teams before this is a very large problem).
It also means that on critical systems, separating high-risk and low-risk changes is not possible.
Like all engineering decisions, this is a set of tradeoffs.
What lifecycle are we really talking about? There are massive monoliths - like the Linux kernel or PostgreSQL - with long lifespans, clear modularity, and thousands of contributors, all without microservices. Lifecycle management is achievable with good architecture, not necessarily with service boundaries.
> If you update the logging library in a monolith, everyone takes that update even if it breaks half the teams.
This is a vague argument. In a microservice architecture, if multiple systems rely on the structure or semantics of logs — or any shared behavior or state - updating one service without coordination can just as easily break integrations. It’s not the architecture that protects you from this, but communication, discipline, and tests.
> It also means that on critical systems, separating high-risk and low-risk changes is not possible.
Risk can be isolated within a monolith through careful modular design, feature flags, interface boundaries, and staged rollouts. Microservices don’t eliminate risk - they often just move it across a network boundary, where failures can be harder to trace and debug.
I’m not against microservices. But the examples given in the comment I responded to reflect the wrong reasons (at least based on what I’ve seen in 15+ years across various workplaces) for choosing or avoiding a microservice architecture.
Microservices don’t solve coupling or modularity issues — they just move communication from in-process calls to network calls. If a system is poorly structured as a monolith, it will likely be a mess as microservices too — just a slower, harder-to-debug one.
> grug wonder why big brain take hardest problem, factoring system correctly, and introduce network call too
> seem very confusing to grug
Make small changes in the monolith, one at a time, though.
Do cloud/paas providers deeply support this flow anymore? Every dashboard would need to compare across multiple live versions and I haven't tried that in a while.
But when I see a plan to use it that doesn't include a plan for how to stop using it again ASAP, I get very worried.
Static typing makes even less sense at finer code scopes; for example, I don't need to keep asserting that a for-loop counter is an int.
Statically-typed languages are a form of automatically-verified documentation, and an opportunity to name semantic properties different modules have in common. Both of those are great, but it is awkward that this is usually treated as an all-or-nothing matter.
Almost no language offers what I actually want: duck typing plus the ability to specify named interfaces for function inputs. Probably the closest I've found is Ruby with a linter to enforce RDoc comments on any public methods.
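For what it's worth, Python's typing.Protocol comes close to this combination: structural ("duck") typing checked against a named interface. A minimal sketch, with illustrative names:

```python
from typing import Protocol

class Closeable(Protocol):
    """A named interface; anything with a matching close() satisfies it."""
    def close(self) -> None: ...

class TempFile:
    # No inheritance from Closeable needed - the shape alone is enough.
    def close(self) -> None:
        print("temp file closed")

def shutdown(resource: Closeable) -> None:
    # mypy/pyright check callers structurally against the named interface.
    resource.close()

shutdown(TempFile())  # accepted: TempFile quacks like a Closeable
```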
If you don't understand the benefit of xyz then don't do it.
Our microservice implementation is great. It scales with no maintenance, and when you have three people that makes a difference.
They ignored me and went the microservices way.
Guess what?
2 years later the rebuild of the old codebase was done.
3 years later and they are still fighting delivery and other issues they would never have had if they hadn't ignored me and had just gone for the "lame" monolith.
Moral of this short story: I can personally say everything this article says is pretty much true.
Having added a fancy new technology and a "successful" project to their resume, they're supposed to move on to the next job before the consequences of their actions are fully obvious.
Rather than one microservice per team with many devs, it was a single team owning 20 services - generally way more services than developers.
It's probably just a matter of how non-lean the Mag7 were at their peak versus how lean most other orgs that try to ape them are.
That's why I don't like the term "microservice", as it suggests each service should be very small. I don't think it's the case.
You can have a distributed system of multiple services of a decent size.
I know "services of a decent size" isn't as catchy as "go for one huge monolith!" or "microservices!" but that's the sensible way to approach things.
Even with a monorepo, you will hit a point where you have 1, 10, 100 million lines of e.g. Python, realize you should upgrade from 3.8 to 3.14 because 3.8 is EOL, and feel a lot of pain as you have to do a big-bang, all-at-once change, fixing every single breaking change, including from libraries which you also have to update. There's no way around this in current mainstream languages.
I don't think it's much better if you have to spend a year and a half updating 400+ different repos, though. It's much easier to use an operationalized language that knows backwards compatibility matters.
> I don't think it's much better if you have to spend a year and a half updating 400+ different repos, though.
There are two things going for separate services (which may or may not be separate repos; remember a single repo can have multiple services):
1. You can do it piecemeal. 90% of your services will be 15-minute changes: update versions in a few files, let automated tests run, it's good to go. The 10% that have deeper compatibility issues can be addressed separately without holding back the rest. You can't separate this if you have a single deployable artifact.
2. Complexity is superlinear with respect to lines of code. Upgrading a single 1mLOC service isn't 10x harder than updating ten 100kLOC services, it's more like 20, 30x harder. Obviously this is hard to measure, but there's a reason these massive legacy codebases get stuck on ancient versions of dependencies. (And a reason companies pay out the ass for Oracle's extended Java 8 support, which they still offer.)
I've worked on monoliths with 400+ developers that were great, but it takes skills that people who have only ever worked in orgs that mandate microservices just don't have.
The paranoid socialist in me thinks big companies like team-sized microservices because it lets them prevent workers from talking to each other without completely ruling out producing running software.
When companies instead encourage forums for communication across team boundaries, it unlocks completely different architectural patterns.
Why can the game industry and similar fields somehow manage this fine, yet in the one place where this kind of artificial separation over the network is actually possible, it's supposedly impossible to do without it beyond an even lower number of devs than a large game has? Suggests confirmation bias to me.
The main problem with microservices is that the split is preemptive: split out whatever you want when it makes sense after the fact, but intentionally splitting everything up before the fact is madness.
How many times have AAA releases been total crap?
How many times have games been delayed by months or years?
How many times have games left off features like local LAN play, and instead implemented a 'microservice' as a service for online play?
How many times have the console manufacturers said "Yea, actually you have the option of running a client server architecture with as many services as you want?"
The "one team per microservice" makes code-enclosure style code ownership possible, but it is the least efficient way I have ever seen software written.
I've long wanted to hack an IDE so people are only allowed to change the Java objects they created, and then put six Java programmers in a room and make them write an application, yelling back and forth across the room. "CAN YOU ADD A VERSION OF THAT METHOD THAT ACCEPTS THIS NEW CLASS?" "SURE THING! TRY THAT?"
People discount the costs of microservices because they make management's job easier, especially when companies have adopted global promotion processes. But unless they are solving a real technical constraint, they are a shitty way to work as an engineer.
The trouble comes when some political wind blows and reshuffles the org chart, and now you're responsible for some services that only made sense in the context of a political reality that no longer exists.
If people on the team continue to think about the "system" as a monolith (what they already know and are comfortable with), you'll hit friction every step of the way, from design all the way out to deployment. Microservices throw out a lot of traditional assumptions and designs, which can be hard for people to subscribe to.
I think there has to be adequate "buy-in" throughout the org for it to be successful. Turning an existing mono into microservices is very likely to meet lots of internal resistance as people have varying levels of being "with it", so-to-speak.
As one would expect, they made bank from their consulting endeavor and rode off into the sunset while the rest of us wasted several years of our careers rewriting ugly but functional monolithic code into distributed Java based microservices. We could have been working on features and product but essentially were justifying a grift, adding new and novel bugs as we rebuilt stable APIs from scratch.
The company went under not long after the project was abandoned. Nobody, of course, would be held to account for it. I will no longer touch a tech consultancy like TW with a 10 foot barge pole.
Sounds to me like every startup.
The other one was a microservice architecture in front of the real problem, a Java backend service that hid the real real problem, one or more mainframes. But the consultants got to play in their microservices garden, which was mostly just a REST API in front of a Postgres database that would store blobs of JSON. And of course these microservices would end up needing to talk to each other through REST/JSON.
I've filed this article in my "microservices beef" bookmarks folder if I ever end up in another company that tries to do microservices. Of course, that industry has since moved on to lambdas, which is microservices on steroids.
Programmers coming up through frameworks or functional programming often don't have those, and so the techniques OO unit testers use don't translate well at all. If the first "unit" you build is a microservice, the first possible "unit" test is the isolation test for that service.
I have watched junior engineers crawl over glass to write tests for something because they didn't know how to write testable code yet, and then the tests they write often make refactoring a-la-Martin-Fowler's-book impossible.
(And that is leaving aside the consultancies that want to be able to advertise "100% test coverage!" but don't actually care if the tests make software harder to maintain in the long run because they aren't going to be there.)
Eventually we'll be able to acknowledge that there are a lot of different skills in our profession, and that writing good code isn't about being "smart": it's about knowing how to write code well. But until then people will keep blaming the tools they don't know how to use.
Less mocking, more bang for the buck.
(Mockist tests are fine for people who really want them, as long as you delete them before checking in the code.)
Just use regular sized services
The few cases where microservices make sense are probably when you have a small and well-bounded use case, like webhook management, notifications, or maybe read scaling on some master dataset.
Here's more at Github's docs: https://docs.github.com/en/repositories/managing-your-reposi...
I can't prove this scales up forever but I've been very happy with making sure that things are carefully abstracted out with dependency injection for anything that makes sense for it to be dependency-injected, and using module boundaries internally to a system as something very analogous to microservices, except that it doesn't go over a network. This goes especially well with using actors, even in a non-actor-focused language, because actors almost automatically have that clean boundary between them and the rest of the world, traversed by a clean concept of messages. This is sometimes called the Modular Monolith.
Done properly, should you later realize something needs to be a microservice, you get clean borders to cut along and clean places to deal with the consequences of turning it into a network service. It isn't perfect but it's a rather nice cost/benefit tradeoff. I've cut, oh, 3 or 4 microservices out of monoliths in the past 5 years or so. It's not something I do every day, and I'm not optimizing my modular monoliths for that purpose... I do modular monoliths because it is also just a good design methodology... but it is a nice bonus to harvest sometimes. It's one of the rare times when someone comes and quite reasonably expects that extracting something into a shared service will take months and you can be like "would you like a functioning prototype of it next week?"
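As a rough illustration of the kind of boundary being described (all names here are hypothetical): the rest of the monolith depends on a narrow interface, and the in-process implementation can later be swapped for a network client without touching callers.

```python
from abc import ABC, abstractmethod

class BillingService(ABC):
    """The only billing surface the rest of the monolith is allowed to see."""
    @abstractmethod
    def charge(self, customer_id: str, cents: int) -> bool: ...

class InProcessBilling(BillingService):
    """Today: a plain module living in the same process."""
    def charge(self, customer_id: str, cents: int) -> bool:
        print(f"charging {customer_id}: {cents} cents")
        return True

class RemoteBilling(BillingService):
    """If it ever gets extracted: same interface, now backed by a network call."""
    def __init__(self, base_url: str) -> None:
        self.base_url = base_url

    def charge(self, customer_id: str, cents: int) -> bool:
        raise NotImplementedError("HTTP call to the extracted service goes here")

def checkout(billing: BillingService, customer_id: str) -> None:
    # Callers receive the interface via dependency injection and never
    # import a concrete implementation directly.
    billing.charge(customer_id, 4999)

checkout(InProcessBilling(), "cust-42")
```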
The only way for significant architectural boundaries at team boundaries to not result in incredibly painful software, especially for a growing team, is to let the software organize the teams. Which means reorging the company whenever you need to refactor, and somehow guessing right about how many changes each component will need in the coming year.
It also means you can't have product and engineers explore a problem together, or manage by objective with OKRs since engineers aren't connected to business outcomes.
I know that all the ex-Amazonians are convinced this is the only way to build software, but it really, really isn't.
I have played around with architectures like this, but I allowed the caller to patch in a dependent function on the call, with those function override overlays passed along from function to function.
Apologies, used sst
Every service boundary you have to cross is a point of friction and a potential source of bugs and issues, so by having more microservices you just have more that can go wrong, by definition.
A service needs to maintain an interface for compatibility reasons. Each microservice needs to do that and do integration testing with every service they interact with. If you can't deploy a microservice without also updating all its dependencies then you don't have an independent service at all. You just have a more complicated deployment with more bugs.
The real problem you're trying to solve is deployment. If a given service takes 10 minutes to restart, then you have a problem. Ideally that should be seconds. But more ideally, you should be able to drain traffic from it then replace it however long it takes and then slowly roll it out checking for canary changes. Even more ideally, this should be largely automated.
Another factor: build times. If a service takes an hour to compile, that's going to be a huge impediment to development speed. What you need is a build system that caches hermetic artifacts so this rarely happens.
With all that above, you end up with what Google has: distributed builds, automated deployment and large, monolithic services.
You can get the architectural benefits of microservices by using message-passing-style Object-Oriented programming. It requires the discipline not to reach directly into the database, but assuming you just Don't Do That, a well-encapsulated "object" is a microservice that runs in the same virtual machine as the other microservices.
Java is the most mainstream language that supports that: whenever you find yourself reaching for a microservice, instead create a module, namespace the database tables, and then expose only the smallest possible public interface to other modules. You can test them in isolation, monitor the connections between them, and bonus: it is trivial to deploy changes across multiple "services" at the same time.
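A minimal sketch of that idea, written in Python rather than Java for brevity (module, function, and table names are made up): the module owns its namespaced tables and exposes only a couple of functions.

```python
# users.py - this module's public surface is deliberately tiny.
import sqlite3
from typing import Optional

__all__ = ["register", "get_email"]

_conn = sqlite3.connect(":memory:")
# Tables are namespaced with this module's prefix; other modules are not
# allowed to query them directly, only to call the functions in __all__.
_conn.execute("CREATE TABLE users_accounts (id TEXT PRIMARY KEY, email TEXT)")

def register(user_id: str, email: str) -> None:
    _conn.execute("INSERT INTO users_accounts VALUES (?, ?)", (user_id, email))

def get_email(user_id: str) -> Optional[str]:
    row = _conn.execute(
        "SELECT email FROM users_accounts WHERE id = ?", (user_id,)
    ).fetchone()
    return row[0] if row else None
```

Other modules only ever import `register` and `get_email`; nothing else touches `users_accounts`, which is what makes it trivial to test in isolation or to deploy changes across several such "services" at once.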
In the Q&A afterward, another local startup CTO asked about problems their company was having with their microservices.
The successful CTO asked two questions: "How big is your microservices tooling team?" and "How big is your Dev Ops Team?"
His point was: if your development team is not big enough to afford dedicated teams for tooling and dev ops, it's not big enough to afford microservices.
It makes Database Per Customer type apps really easy, and that is something a lot of SaaS products could benefit from.
Agree with organizational win, also smaller merge requests in the team were superb.
At around 5-10 devs on a monolith, we ran into conflicts more often: deployment, bigger merge requests, and releasing by feature were all problematic. Microservices made the team more productive, but rules about tests/docs/endpoints/code were important.
They're a tool to solve people issues. They can remove bureaucratic hurdles and allow devs to be somewhat autonomous again.
In a small startup, you really don't gain much from them, unless the domain really necessitates them, e.g. the company uses Elixir but all of the AI tooling is written in Python/Go.
You can put most of your crud and domain logic in a monolith, but if you have a GPU workload or something that has very different requirements - that should be its own thing. That pattern shouldn't result in 100 services to maintain, but probably only a few boundaries.
Bias for monolith for everything, but know when you need to carve something out as its own.
At scale, you're 100% correct.
Remember the whole topic here is avoiding this tax
I've certainly seen microservices be a total disaster in large (and small) organizations. I think it's especially important that larger organizations have standards around cross-cutting concerns (e.g. authorization, logging, service-to-service communication, etc.) before they just say "OK, microservices, and go!"
it takes skill and taste to use only enough of each. unfortunately a lot of VC $$$ has been spent by cloud companies and a whole generation or two of devs are permasoiled by the micro$ervice bug.
don't do it gents. monolith, until you literally cannot go further, then potentially, maybe, reluctantly, spin out a separate service to relieve some pressure.
While I agree with you regarding microservices (eg language abstractions provide 80% of the encapsulation SOA provides for 20% of the overhead) and I readily acknowledge that 100% test coverage is a quixotic fantasy, I really can't imagine writing reliable software without debuggers, print-statements, or a REPL—all of which TDD replaces in my workflow.
How, I wonder, do you observe the behavior of the program if not through tests? By playing with it? Manually reproducing state? Or, do you simply wait until after the program is written to test its functionality?
I wonder what mental faculties I lack that facilitate your TDD-less approach. Can it be learned?
Like TDD, they're great if done in the right way for the right reasons.
For large orgs where each service has a dedicated team it starts to make sense... but then it becomes clear that microservices are an organizational solution.
I love it when all my CRUD has to be abstracted over HTTP. /s
- You need to use a different language than your core application. E.g. we build Rails apps but need to use R for a data pipeline and 100% could not build this in ruby.
- You have 1 service that has vastly different scaling requirements than the rest of your stack. Then splitting that part off into its own service can help.
- You have a portion of your data set that has vastly different security and lifecycle requirements. E.g. you're getting healthcare data from medicare.
Outside of those, and maybe a few other edge cases, I see basically no reason why a small startup should ever choose microservices... you're just setting yourself up for more work for little to no gain.
One project that I helped design had to split out a segment of the system b/c the data was eligibility records coming from health plans. This data had very different security and lifecycle requirements (e.g. we have to keep it for 7 or 10 years). Splitting out this service simplified some parts, but any time we need to cross the boundary between the 2 services, the work takes probably twice as long as it would if it were in a single service. I don't think it was the wrong decision, but the service definitely did not come for free.
If you need to keep the lights on or maintain an SLA and can do so by separating a concern, it can really reduce risk and increase speed when deploying new features on "less important" components.
In monoliths, they generally don't.
There's no logical reason why you couldn't pay as much attention to decomposition and API design between the modules of a monolith. You could have the benefit of good design without all the architectural and operational challenges of microservices. Maybe some people succeed at this. But in practice I've never seen it. I've seen people handle the challenges of microservices successfully, and I've never seen a monolith that wasn't an incoherent mess internally.
This is just my experience, one person's observations offered for what they're worth.
In practice, in the context of microservices, I've seen an entire team work together for two weeks to break down a problem coherently, holding off on starting implementation because they knew the design wasn't good enough and it was worth the time to get it right. I've seen people escalate issues with others' designs because they saw a risk and wanted to address it.
In the context of monoliths, I've never seen someone delay implementation so much as a day because they knew the design was half-baked. I rarely see anyone ask for design feedback or design anything as a team until they've screwed something up so badly that it can't be avoided. People sometimes make major design decisions in a split second while coding. What kind of self-respecting senior developer would spend a week getting input on an internal code API before starting to implement? People sometimes aren't even aware that the code they wrote that morning has implications for code that will be written later.
Theoretically this is okay because refactoring is easy in a monolith. Right? ... It is, right?
I'm basically sold on microservices because I know how to get developers to take design seriously when it's a bunch of services talking to each other via REST or grpc, and I don't know how to get them to take the internal design of a monolith seriously.
Not that I would ever want to give up our monolith, but we do experience the problems you point out.
Every good monolith I've worked in (and I have worked in several, including one that was more than twenty years old) was highly modular and well designed, with an easy-to-explain architecture.
The other thing they had in common was that code reviews talked about the aesthetics of the code and design, instead of just hunting for errors or skimming for security problems. It was relatively common to throw out the first proposed PR and start over, and that was fine because people were slicing the work small enough they were posting four to six PRs a week anyway.
It took the engineers at the company being willing to collaborate on the craft of software development and prioritize the long-term health of the code over short-term feature delivery. And the result of being willing to go a little bit slower day-to-day was that the actual feature delivery was faster than anywhere else I've ever worked.
Without a functioning professional culture, nothing is going to be great. But at least with microservices people do have to design an API at some point.
My current job insists that they have a “simple monolith” because all the code is in a single repo. But that repo has code to build dozens of python packages and docker containers. Tons of deploy scripts. Different teams/employees are isolated to particular parts of the codebase.
It feels a lot like microservices, but I don’t know what the defining feature of microservices is supposed to be
Which honestly may be the future if LLMs stay in a dev's toolkit. Plugging in an AI model to a monorepo provides so much context that can't be easily communicated across microservices in separate repos.
One should consider if they can dive even deeper into the monolithic rabbit hole. For example, do you really need an external hosted SQL provider, or could you embed SQLite?
From a latency & physics perspective, monolith wins every time. Making a call across the network might as well take an eternity by comparison to a local method. Arguments can be made that the latency can be "hidden", but this is generally only true for the more trivial kinds of problems. For many practical businesses, you are typically in a strictly serialized domain which means that you are going to be forced to endure every microsecond of delay. Assuming that a transaction was not in conflict doesn't work at the bank. You need to be sure every time before the caller is allowed to proceed.
The tighter the latency domain, the less you need to think about performance. Things can be so fast by default that you can actually focus on building what the customer is paying for. You stop thinking about the sizes of VMs, who's got the cheapest compute per dollar and other distracting crap.
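A back-of-the-envelope comparison makes the point; the figures below are commonly quoted ballpark numbers, not measurements from this thread:

```python
# Rough, commonly cited orders of magnitude - not measurements.
LOCAL_CALL_NS = 100           # in-process method call, being generous
DATACENTER_RTT_NS = 500_000   # ~0.5 ms round trip within one datacenter

calls = 1_000                 # a strictly serialized chain of calls

print(f"in-process:   {calls * LOCAL_CALL_NS / 1e6:.2f} ms")      # ~0.10 ms
print(f"over network: {calls * DATACENTER_RTT_NS / 1e6:.0f} ms")  # ~500 ms
```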
You could say this about almost any pattern, if you genuinely tried to make microservices work it could work in ~100% of cases, I'm sure of that.
It's this pattern of dismissing or accepting a solution with strong prejudice, without evaluating the merits, that is the real problem. That's the true behavior we need to get away from.
We as an industry may find that modular monoliths trend toward the top as a result (I hate to speculate too much; every company is different and there are in fact other patterns of development beyond the two mentioned), but that would be a side effect if true. The real win is moving away from such prejudiced behavior.
I spent a solid 3 years of my career attempting to make micro service architecture work in a B2B SaaS ecosystem. I have experience. This is not prejudice.
> modular monoliths
I don't see the meaningful difference between this and microservices.
Stuff like k8s works fine as a docker delivery vehicle
Using them makes it easy to build endpoints for things like WhatsApp and other integrations
CI/CD - infra can be as code, shared across services, K8s port-forward for local development, better resource utilization, multiple envs and so on, available tooling; if set up correctly, it usually keeps working.
Another plus not mentioned: usually smaller merge requests, features can be split and better estimated, fewer conflicts during work or testing... plus the possibility to share code in packages.
Also, if there are no tests, it doesn't matter whether it's a monorepo or microservices: you can break things easily or spend more time.
You should make room for tests and documentation, and keep working on tech debt.
The next common issue I see: too big a tech stack because something is popular.
Though, if you're on a small team and really want to use microservices, here are two places I have found them to be somewhat advantageous:
* wrapping particularly bad third party APIs or integrations — you're already forced into having a network boundary, so adding a service at the boundary doesn't increase complexity all that much. Basically this lets you isolate the big chunk of crappy code involved in integrating with the 3rd party, and give it a nice API your monolith can interact with (see the sketch after this list).
* wrapping particularly hairy dependencies — if you’ve got a dependency with a complex build process that slows down deployments or dev setup — or the dependency relies on something that conflicts with another dependency — wrapping it in its own service and giving it a nice API can be a good way to simplify things for the monolith.
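A minimal sketch of the first pattern above, assuming Flask and a made-up flaky vendor integration: the wrapper hides the mess and gives the monolith one small, stable endpoint.

```python
# wrapper_service.py - a thin service that isolates a messy third-party
# integration; everything here is hypothetical, including the vendor.
from flask import Flask, jsonify

app = Flask(__name__)

def fetch_shipping_quote(zip_code: str) -> dict:
    # Imagine the retries, XML parsing and vendor-specific workarounds
    # living here, out of sight of the monolith.
    return {"zip": zip_code, "cents": 1299}

@app.route("/quotes/<zip_code>")
def quote(zip_code: str):
    # The monolith only ever sees this small, stable JSON contract.
    return jsonify(fetch_shipping_quote(zip_code))

if __name__ == "__main__":
    app.run(port=8080)
```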
> ... conflate logical boundaries (how code is written) with physical boundaries (how code is deployed)
It's very easy to read and digest, and I think it's a great paper that makes the case for building "modular monoliths". I think many teams do not have a practical guide on how to achieve this. Certainly, Google's solution in this case is far too complex for most teams. But many teams can achieve the 5 core benefits that they mentioned with a simpler setup. I wrote about this in a blog post, A Practical Guide to Modular Monoliths with .NET[1], with a GitHub repo showing how to achieve this[2] as well as a video walkthrough[3].
This approach has proven (for me) to be easy to implement, package, deploy, and manage and is particularly good for startups with all of the qualities mentioned in the Google paper without much complexity added.
[0] https://dl.acm.org/doi/pdf/10.1145/3593856.3595909
[1] https://chrlschn.dev/blog/2024/01/a-practical-guide-to-modul...
1. You get to minimize devops/security/admin work. Really a consequence of using serverless tooling, but you land on something like a microservices architecture if you do.
2. You can break out work temporally. This is the big one - when you're a small team supporting multiple products, you often don't have continuity of work. You have one project for a few months, then a completely unrelated product for another few months. Microservice architectures are easier to build and maintain in that environment.
What planet are you living on?
Heroku is still way easier, though.
Each repo you create is one more set of Dependabot alerts you need to keep on top of.
Context and nuances
The catch is to keep them all in mind and use them in moderation.
Like everything else in life.
- Use one-way async messaging. Making a UserService that everything else uses synchronously via RPC/REST/whatever is a very bad idea and an even worse time. You'll struggle for even 2-nines of overall system uptime (because they don't average, they multiply down - see the quick calculation below).
- 'Bounded context' is the most important aspect of microservices to get right. Don't make <noun>-services. You can make a UserManagementService that has canonical information about users. That information is propagated to other services which can work independently each using the eventually consistent information they need about users.
There's other dumb things that people do like sharing a database instance for multiple 'micro'-services and not even having separately accessible schemas. In the end if done well, each microservice is small and pleasant to work on, with coordination between them being the challenging part both technically and humanly.
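On the earlier point about nines multiplying down rather than averaging, a quick calculation shows why a synchronous call chain is so punishing:

```python
# Availability of a request that must traverse N services in series,
# each individually at 99.9% ("three nines").
per_service = 0.999

for n in (1, 5, 10, 20):
    print(f"{n:2d} services in the chain -> {per_service ** n:.4%} overall")
# 1 -> 99.9000%,  5 -> 99.5010%,  10 -> 99.0045%,  20 -> 98.0190%
```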
asim•1h ago
Basically this. Microservices are a design pattern for organisations as opposed to technology. Sounds wrong, but the technology change should follow the organisational breakout into multiple teams delivering separate products or features. And this isn't a first step. You'll have a monolith; it might break out into frontend, backend and a separate service for async background jobs, e.g. pdf creation is often a background task because of how long it takes to produce. Anyway, after that you might end up with more services, and then you have this sprawl of things where you start to think about standardisation, architecture patterns, etc. Before that it's a death sentence, and if your business survives I'd argue it didn't survive because of microservices but in spite of them. The dev time lost in the beginning, say sub-200 engineers, is significant.
Espressosaurus•1h ago
Things are different in the embedded space so I don't have personal experience with any of it.
westurner•35m ago
Should the URLs contain a version; like /api/v1/ ?
FWIU, OpenAPI API schemas enable e.g. MCP service discovery, but not multi-API workflows or orchestrations.
(Edit: "The Arazzo Specification - A Tapestry for Deterministic API Workflows" by OpenAPI; src: https://github.com/OAI/Arazzo-Specification .. spec: https://spec.openapis.org/arazzo/latest.html (TIL by using this comment as a prompt))
hnthrow90348765•34m ago
It would also be nice to have less fear-driven career advice like "your skills go out of date", which drives people to try adopting the latest things.
singron•1h ago
And when you break these out, you don't actually have to split your code at all. You can deploy your normal monolith with a flag telling it what role to play. The background worker can still run a webserver since it's useful for healthchecks and metrics and the loadbalancer will decide what "roles" get real traffic.
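A sketch of that pattern with hypothetical role names: the same artifact starts as a web server or a background worker depending on a flag, and both roles expose a health endpoint.

```python
# app.py - one deployable artifact, multiple roles picked at startup.
import argparse
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Health(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

def serve_health(port: int) -> None:
    # Every role runs this so the load balancer / orchestrator can probe it.
    server = HTTPServer(("", port), Health)
    threading.Thread(target=server.serve_forever, daemon=True).start()

def run_web() -> None:
    print("serving user traffic...")

def run_worker() -> None:
    print("draining the background job queue...")

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--role", choices=["web", "worker"], default="web")
    args = parser.parse_args()

    serve_health(port=9000)
    run_web() if args.role == "web" else run_worker()
```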
roguecoder•53m ago
If you are concerned about someone else breaking your thing, good! You were going to eventually break it yourself. Write whatever testing gives you confidence that someone else's changes won't break your code, and, bonus, now you can make changes without breaking your code.
jimbokun•39m ago
A very large code base full of loosely related functionality makes it more and more likely a change in one part will break another part in unexpected ways.
motorest•7m ago
This assertion is unrealistic and fails to address the problem. The fact that builds can and do break is a very mundane fact of life. There are whole job classes dedicated to mitigate the problems caused by broken builds, and here you are accusing others of doing things wrong. You cannot hide this away by trying to shame software developers for doing things that software developers do.
> Write whatever testing gives you confidence that someone else's changes won't break your code, and, bonus, now you can make changes without breaking your code.
That observation is so naive that it casts doubt on whether you have any professional experience developing software. There are a myriad of ways any commit can break something that go well beyond whether it compiles or not. Why do you think that companies, including FANGs, still hire small armies of QAs to manually verify that things still work once deployed? Is everyone around you doing things wrong, and you're the only beacon of hope? Unreal.
jayd16•1h ago
You'll need services. They're hard. If something is hard but it needs to be done, you should get good at it.
Like every fad, there's a backlash from people seeing the fad fall apart when used poorly.
Services are a good pattern with trade-offs. Weigh the trade-offs; just don't do things for the sake of doing them.
bluefirebrand•59m ago
This is a great optimization once you have high traffic services
Building this way before you have any traffic at all is a great way to build the wrong abstractions because your assumptions about where your load will be might be wrong
dimal•34m ago
Here’s the kicker: They only had a few hundred MAUs. Not hundreds of thousands. Hundreds of users. So all this complexity was for nothing. They burned through $50M in VC money then went under. It’s a shame because their core product was very innovative and well architected, but it didn’t matter.
jghn•6m ago
Way too many companies believe they're really just temporarily embarrassed BigTech.
jimbokun•33m ago
https://blog.khanacademy.org/go-services-one-goliath-project...
They had already scaled the mono service about as far as it could go and had a good sense of what the service boundaries should be based on experience.
motorest•19m ago
I don't see any major epiphany in this. In fact, it reads like a tautology. The very definition of microservice is that it's an independently evolving domain. That's a basic requirement.