I like that the author provides both solutions: join them (my preferred) or split the shared service.
And then N4 is a shared utility service that's responsible for e.g. performance tracing or logging or something similar. To make the dependency "harder", we could consider that it's a shared service responsible for authentication and authorization. So it's clear why many root services are dependent on it—they need to make individual authorization decisions.
How would you refactor this to remove an undirected dependency loop?
The only way I can see to avoid this is to have all those cross-cutting concerns handled in the N1 root service before requests go into N2/N3. But it requires having N1 handle some things by itself (e.g. you can do authorization early), or it requires a lot of additional context to be passed down (e.g. passing flags/configuration downstream), or it massively overcomplicates other concerns (e.g. having logging be part of N1 forces N2/N3 to respond synchronously).
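To make the first option concrete, here's a minimal sketch of what I mean by authorizing early and passing context down (Python standing in for whatever RPC layer you use; all names hypothetical):

```python
# Hypothetical sketch (names invented): N1 resolves authorization once
# against the shared auth service N4, then passes the result down, so
# N2/N3 no longer need their own edges to N4.
from dataclasses import dataclass

@dataclass(frozen=True)
class AuthContext:
    user_id: str
    permissions: frozenset

class N1Root:
    def __init__(self, auth_service, n2, n3):
        self.auth = auth_service  # the only remaining edge to N4
        self.n2, self.n3 = n2, n3

    def handle(self, token, user_id, payload):
        # Authorization happens early, at the root.
        ctx = AuthContext(user_id, frozenset(self.auth.permissions_for(token)))
        self.n2.handle(payload, ctx)
        self.n3.handle(payload, ctx)

class N2Service:
    def handle(self, payload, ctx: AuthContext):
        # Decision made from the passed-down context, not from N4.
        if "write" not in ctx.permissions:
            raise PermissionError(f"{ctx.user_id} may not write")
        print("N2 processed", payload)
```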
So yeah, I'm not a fan of the constraint from TFA. It being a DAG is enough.
But what if we add 2 extra nodes: n5 dependent on n2 alone, and n6 dependent on n3 alone? Should we keep n2 and n3 separate and split n4, or should we merge n2 and n3 and keep n4, or should we keep the topology as it is?
The same sort of problem arises in a class inheritance graph: it would make sense to merge classes n2 and n3 if n4 is the only class inheriting from them, but if you add more nodes, the simplification might not be possible anymore.
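For what it's worth, the property being debated is mechanically checkable. A rough sketch (plain Python, edge lists assumed) of testing whether a DAG's underlying undirected graph is a tree, i.e. whether it's a polytree:

```python
# Sketch: a directed graph is a polytree iff its underlying undirected
# graph is a tree: exactly n - 1 edges and no undirected cycle.
def is_polytree(edges, nodes):
    if len(edges) != len(nodes) - 1:
        return False  # a tree on n nodes has exactly n - 1 edges
    # Union-find to confirm the undirected edges create no cycle.
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra == rb:
            return False  # undirected cycle, like the N2/N3/N4 diamond
        parent[ra] = rb
    return True

# The diamond under discussion: N1 -> N2, N1 -> N3, N2 -> N4, N3 -> N4.
print(is_polytree([("N1","N2"), ("N1","N3"), ("N2","N4"), ("N3","N4")],
                  ["N1", "N2", "N3", "N4"]))  # False: 4 edges on 4 nodes
```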
You'll probably also have lines pointing to your storage service or database even if the data is isolated between them. You could have them all be separate, but that's a waste when you can leverage, say, a big Ceph cluster.
Said less snarkily: it should be trivial to define and restrict the dependencies of services (although there are many ways to do that). If it's not trivial, that's a different problem.
If you look at this proposal and reject it, I question your experience. My experience is that not doing this leads to codebases so intertwined that organizations grind to a halt.
My experience is in the SaaS world, working with orgs from a few dozen to several thousand contributors. When there are a couple dozen teams, a system not designed to separate out concerns will require too much coordinated effort to develop against.
A polytree is a planar graph, and the number of edges grows linearly with the number of nodes: a tree on n nodes has exactly n - 1 edges.
Think more actors/processes in a distributed actor/CSP concurrent setup.
Their interface should therefore be hardened and not break constantly, and they shouldn't each need deep knowledge of the intricate details of each other.
Also for many system designs, you would explicitly want a different topology, so you really shouldn't restrict yourself mentally with this advice.
It's a nearly universal rule you'll want on every kind of infrastructure and data organization.
You can get away for some time with making things linked by offline or pre-stored resources, but it's a recipe for an eventual disaster.
A global namespace root with sub-namespaces holding just desired config and current config, with the complexity hidden in the controller.
The second is closer to your issue above, but it is just dependency inversion: the kubelet has zero knowledge of how to launch a container, create a network, or provision storage, and hands that off to CRI, CNI, or CSI.
Those are hard dependencies that can follow a simple wants/provides model, which, depending on context, is often simpler when failures happen and allows for replacement.
E.g., you probably wouldn't notice whether crun or runc is being used, nor would you notice that it is often systemd that is actually launching the container.
But finding those separations of concerns can be challenging. And K8s only moved to that model after suffering the pain of having them in-tree.
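A toy sketch of that inversion (a Python Protocol standing in for the gRPC interfaces CRI/CNI/CSI actually define; class names invented):

```python
# Sketch of the kubelet-style inversion: the orchestrator declares what it
# wants via an interface; crun, runc, or anything else provides it.
from typing import Protocol

class ContainerRuntime(Protocol):
    def run_container(self, image: str) -> str: ...

class RuncRuntime:
    def run_container(self, image: str) -> str:
        return f"runc launched {image}"

class CrunRuntime:
    def run_container(self, image: str) -> str:
        return f"crun launched {image}"

def kubelet_like(runtime: ContainerRuntime, image: str) -> str:
    # Zero knowledge of how the container is launched; any provider works,
    # and swapping crun for runc is invisible at this layer.
    return runtime.run_container(image)

print(kubelet_like(RuncRuntime(), "nginx"))
print(kubelet_like(CrunRuntime(), "nginx"))
```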
I think a DAG is a better aspirational default though.
“Microservices” was, IIRC, more about rejecting that and returning to the foundations of SOA than anything else. The original description was each would support a single business domain (sometimes described “business function”, and this may be part of the problem, because in some later descriptions, perhaps through a version of the telephone game, this got shortened to “function” and without understanding the original context...)
The name was probably chosen poorly and led to much confusion.
It's a (human) scaling technique for large organizations. When you have thousands of developers they can't possibly keep in communication with each other. You have to draw a line between them. So, we draw the line the same way we do at the global scale.
Conway's Law, as usual.
The rule is obviously wrong.
I think just having no cycles is good enough as a rule.
While I understand the first counterexample, this one seems a bit blurry. Can anybody clarify why a directed acyclic graph whose underlying undirected graph is cyclic is bad in the context of microservice design?
If service A feeds both B and C, and they both feed service D, then D can receive an incoherent view of what A did, because nothing forces B and C to keep their stories straight. But B and C can still both be following their own spec perfectly, so there's no bug in any single service. Now it's not clear whose job it is to fix things.
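A toy, single-process illustration (hypothetical names; picture each function as its own service): B and C each derive a view of A, and unless A's version travels along both paths, D can't even detect the torn read:

```python
# Toy model of the diamond A -> {B, C} -> D (hypothetical, in-process).
# B and C each transform A's state; if they observed A at different
# moments, D combines two snapshots that were never true at the same time.
state_of_a = {"version": 1, "value": 10}

def b_view(a):
    return {"a_version": a["version"], "doubled": a["value"] * 2}

def c_view(a):
    return {"a_version": a["version"], "negated": -a["value"]}

snapshot_b = b_view(state_of_a)
state_of_a = {"version": 2, "value": 99}  # A changes between the two reads
snapshot_c = c_view(state_of_a)

def d_combine(b, c):
    # Carrying A's version along both paths at least makes the tear
    # detectable; without it, D silently mixes v1 and v2 data.
    if b["a_version"] != c["a_version"]:
        raise ValueError(
            f"incoherent view of A: B saw v{b['a_version']}, C saw v{c['a_version']}")
    return b["doubled"] + c["negated"]

try:
    d_combine(snapshot_b, snapshot_c)
except ValueError as err:
    print(err)  # incoherent view of A: B saw v1, C saw v2
```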
It would make more sense to say that the event tree should not have any cycles, but anyway this seems like a silly point to make.
However, the reasoning as to why it can't be a general DAG and has to be restricted to a polytree is really tenuous. They basically just say counterexample #2 has the same issues with no real explanation. I don't think it does, it seems fine to me.
People treat the edges on the graph like they're free, like managing all those external interfaces between services is trivial. It absolutely is not. Each one of those connections represents a contract between services that has to be maintained, and that's orders of magnitude more effort than passing data internally.
You have to pull in some kind of new dependency to pass messages between them. Each service's interface has to be documented somewhere. If the interface starts to get complicated you'll probably want a way to generate code to handle serialization/deserialization (which also adds overhead).
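Even a trivial edge shows the cost. A sketch of the kind of versioned message contract every connection ends up needing (fields and names invented):

```python
# Sketch: the moment two services share an edge, something like this
# contract has to exist, be documented, and be kept backward compatible.
import json
from dataclasses import dataclass, asdict

@dataclass
class OrderCreatedV1:          # hypothetical message; the "V1" is the point:
    schema_version: int        # every future change is a coordination cost
    order_id: str
    amount_cents: int

def serialize(msg: OrderCreatedV1) -> bytes:
    return json.dumps(asdict(msg)).encode()

def deserialize(raw: bytes) -> OrderCreatedV1:
    data = json.loads(raw)
    if data.get("schema_version") != 1:
        raise ValueError("unknown schema version")  # the contract in action
    return OrderCreatedV1(**data)

wire = serialize(OrderCreatedV1(1, "o-42", 1999))
print(deserialize(wire))
```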
In addition, to share code, instead of just having a local module (or whatever your language uses) you now have to manage a new package. It either has to be built and published to some repo somewhere, has to be a git submodule, or you just end up copying and pasting the code everywhere.
Even if it's well architected, each new service adds a significant amount of development overhead.
Polytrees look good, but they don't work for orthogonal services.
A polytree has the property that there is exactly one path by which each node can be reached. If you think of this as a dependency graph, then for each node you know that none of its dependencies have shared transitive dependencies.
I'll give it one though: if there are no shared transitive dependencies then there cannot be version conflicts between services, where two otherwise functioning services need disparate versions of the same transitive dependency.
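As a rough sketch of what that buys you (plain dicts, hypothetical service names): find the shared transitive dependencies that a polytree rules out, which is exactly where version conflicts hide:

```python
# Sketch: compute each service's transitive dependency set; overlap between
# a node's direct dependencies is what a polytree forbids, and where two
# services can end up needing disparate versions of the same thing.
from itertools import combinations

deps = {  # hypothetical dependency graph (not a polytree: D is shared)
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}

def transitive(node):
    seen = set()
    stack = list(deps[node])
    while stack:
        d = stack.pop()
        if d not in seen:
            seen.add(d)
            stack.extend(deps[d])
    return seen

for node, direct in deps.items():
    for x, y in combinations(direct, 2):
        shared = (transitive(x) | {x}) & (transitive(y) | {y})
        if shared:
            print(f"{node}: {x} and {y} share {shared}")  # A: B and C share {'D'}
```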
You absolutely want the same identity service behind all of your services that rely on an identity concept (and no, you can't just say a gateway should be the only thing talking to an identity service - there are real downstream use cases, such as when identity gets managed).
Similarly there's no reason to have multiple image hosting services. It's fine for two different frontends to use the same one. (And don't just say image hosting should be done in the cloud --- that's just a microservice running elsewhere)
Same for audit logging, outbound email or webhooks, and ACL systems (can you imagine if Google Docs, Sheets, etc. all had distinct permissions systems?).
Service A: publish a notification indicating that some new data is available.
Service B: consume these notifications and call back to service A with queries for the changed data and perhaps surrounding context.
What would you recommend when something like this is desired?
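One way to model it, as a rough in-process sketch (a real broker and RPC layer would replace the Bus and the direct call; names invented): A publishes a thin notification, and B pulls the details back on its own schedule:

```python
# Sketch of the notify-then-query pattern: A publishes only "something
# changed", B calls back for the data it actually needs.
class ServiceA:
    def __init__(self, bus):
        self.bus = bus
        self.records = {}

    def update(self, key, value):
        self.records[key] = value
        self.bus.publish({"changed": key})  # thin notification, no payload

    def query(self, key):
        # Callback endpoint: B asks for the changed data (and context).
        return self.records.get(key)

class ServiceB:
    def __init__(self, a):
        self.a = a

    def on_notification(self, event):
        data = self.a.query(event["changed"])  # pull, rather than push
        print(f"B fetched {event['changed']!r} -> {data!r}")

class Bus:
    def __init__(self):
        self.subscribers = []

    def publish(self, event):
        for handler in self.subscribers:
            handler(event)

bus = Bus()
a = ServiceA(bus)
b = ServiceB(a)
bus.subscribers.append(b.on_notification)
a.update("doc-1", "hello")   # B fetched 'doc-1' -> 'hello'
```

Note the edges run both ways (A notifies B, B queries A), which is exactly the shape the article's rule forbids; the usual defense is that the callback is a hardened, read-only query API.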