Large, shared database tables have been a huge issue in the last few jobs that I have had, and they are incredibly labor intensive to fix.
It's partly why I've realised more over time that learning computer science fundamentals actually ends up being super valuable.
I'm not talking about anything particularly deep either, just the very fundamentals you might come across in year one or two of a degree.
It hooks back in over time as you discover that the people working decades ago really got it, and that much of what you do as a software engineer is rediscovering their lessons yourself: thinking there's a better way, trying it, seeing it isn't better, then noticing which fundamentals were being encouraged or violated and pulling just those back out into a simpler model.
I feel like that's mostly what's happened with the swing over into microservices and the swing back into monoliths, pulling some of the fundamentals encouraged by microservices back into monolith land but discarding all the other complexities that don't add anything.
Why small orgs use microservices: makes it nearly physically impossible to do certain classes of dumb shit
What I want is a lightweight infrastructure for macro-services. I want something to handle the user and machine-to-machine authentication (and maybe authorization).
I don't WANT the usual K8s virtual network for that, just an easy-to-use module inside the service itself.
You should be able to spin up everything locally with docker-compose.
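To make the "easy-to-use module inside the service" idea concrete, here is a minimal sketch of in-process machine-to-machine auth using HMAC-signed tokens. Everything here (the function names, the token format, the shared-secret scheme) is my own illustration, not something from the thread; a real deployment would more likely use JWTs with an OIDC provider, but the point is that it can live as a library call rather than as network infrastructure.

```python
import hashlib
import hmac
import time

# Hypothetical shared secret; in practice this would come from config or env.
SHARED_SECRET = b"example-secret"

def sign_token(service_name: str, issued_at: int) -> str:
    """Issue a machine-to-machine token: 'service.timestamp.signature'."""
    payload = f"{service_name}.{issued_at}".encode()
    sig = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return f"{service_name}.{issued_at}.{sig}"

def verify_token(token: str, max_age_s: int = 300) -> bool:
    """Return True if the signature checks out and the token is fresh."""
    try:
        service_name, issued_at, sig = token.rsplit(".", 2)
    except ValueError:
        return False  # malformed token
    payload = f"{service_name}.{issued_at}".encode()
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # signature mismatch (tampered or wrong secret)
    return (time.time() - int(issued_at)) <= max_age_s
```

A calling service signs a token and sends it as a bearer header; the receiving service verifies it with one function call and no extra moving parts to run locally.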
> I don't WANT the usual K8s virtual network for that, just an easy-to-use module inside the service itself.
K8s makes sense if you have a dedicated team (or at least a dedicated engineer) and if you really need the advanced stuff (blue/green deployments, scaling, etc.). Once it's properly set up it's actually a very pleasant platform.
If you don't need that, Docker (or preferably Podman) is indeed the way to go. You can actually go quite far with a VPS or a dedicated server these days. By the time you outgrow the most expensive server you can reasonably buy, you can probably afford the staff to roll out a "big boy" infrastructure.
We're using Docker/Podman with docker-compose for local development, and I can spin up our entire stack in seconds locally. I can attach a debugger to any component, or pull it out of Docker and just run it inside my IDE. I even have an optional local Uptrace installation for OTEL observability testing.
My problem is that our deployment infrastructure is different. So I need to maintain two sets of descriptions of our services. I'd love a solution that would unify them, but so far nothing...
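For readers who haven't used this setup: the kind of local stack described above might look like the compose file below. The service names, images, and ports are illustrative only (though `uptrace/uptrace` is the real image name); this is a sketch, not the commenter's actual configuration.

```yaml
# Hypothetical minimal docker-compose.yml for a local dev stack like the one above.
services:
  api:
    build: .
    ports: ["8000:8000"]
    environment:
      OTEL_EXPORTER_OTLP_ENDPOINT: http://uptrace:14317
    depends_on: [db]
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: dev-only
  uptrace:  # optional local OTEL backend, as mentioned above
    image: uptrace/uptrace:latest
    ports: ["14317:14317", "14318:14318"]
```

To debug one component in the IDE instead, you stop that one service and run it natively, pointing it at the still-running containers.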
Three-tier architecture proves time and time again to be robust for most workloads.
Put it into a monorepo so the other teams have visibility into what is going on and can create PRs if needed.
But it is a bit sad that the poster apparently never bought a pizza just for themselves.
I don't want microservices; I want an executable. Memory is shared directly, and the IDE and compiler know about the whole system by virtue of it being integrated.
I have never done this yet.
But I love the idea of it.
The current hell is x years of undisciplined (in terms of perf and cost) new ORM code being deployed (SQLAlchemy).
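The classic shape of undisciplined ORM code is the N+1 query problem: lazy-loaded relationships quietly issue one query per row. The thread names SQLAlchemy, but to keep this sketch self-contained it's shown with stdlib sqlite3 and made-up table names; in SQLAlchemy the fix would be an eager-loading strategy or an explicit aggregate query.

```python
import sqlite3

# Toy schema standing in for a shared users/orders pair of tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users VALUES (1, 'ada'), (2, 'lin');
    INSERT INTO orders VALUES (1, 1, 9.5), (2, 1, 3.0), (3, 2, 7.25);
""")

# N+1 pattern: one query for users, then one more query per user for orders.
# Lazy-loading ORM relationships produce exactly this shape by default.
totals_slow = {}
for uid, name in conn.execute("SELECT id, name FROM users"):
    rows = conn.execute("SELECT total FROM orders WHERE user_id = ?", (uid,))
    totals_slow[name] = sum(t for (t,) in rows)

# Disciplined version: a single join/aggregate query does the same work.
totals_fast = dict(conn.execute("""
    SELECT u.name, SUM(o.total) FROM users u JOIN orders o ON o.user_id = u.id
    GROUP BY u.id
"""))

assert totals_slow == totals_fast  # same answer, 1 query instead of N+1
```

On a two-row table the difference is invisible, which is exactly why this code ships; against a large shared table it becomes the perf and cost hell described above.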
What we do (physics simulation software) doesn't need all the complexity (in my opinion as a long-time software developer & tester) and software engineering knowledge that splitting stuff into microservices requires.
Only have as much complexity as you absolutely need; the old saying "Keep it simple, stupid" still has a lot of truth.
But the path is set, so I’ll just do my best as an individual contributor for the company and the clients who I work with.
1. Full-on microservices, i.e. one independent lambda per request type, is a good idea pretty much never. It's a meme that caught on because a few engineers at Netflix did it as a joke that nobody else was in on.
2. Full-on monolith, i.e. every developer contributes to the same application code that gets deployed, does work, but you do eventually reach a breaking point as either the code ages and/or the team scales. The difficulty of upgrading core libraries like your ORM, monitoring/alerting, pandas/numpy, etc, or infrastructure like your Java or Python runtime, grows superlinearly with the amount of code, and everything being in one deployed artifact makes partial upgrades either extremely tricky or impossible depending on the language. On the operational and managerial side, deployments and ownership (i.e. "bug happened, who's responsible for fixing?") eventually get way too complex as your organization scales. These are solvable problems though, so it's the best approach if you have a less experienced team.
3. If you're implementing any sort of SoA without having done it before -- you will fuck it up. Maybe I'm just speaking as a cynical veteran now, but IMO lots of orgs have keen but relatively junior staff leading the charge for services and kubernetes and whatnot (for mostly selfish resume-driven development purposes, but that's a separate topic) and end up making critical mistakes. Usually some combination of: multiple services using a shared database; not thinking about API versioning; not properly separating the domains; using shared libraries that end up requiring synchronized upgrades.
There are a lot of service-oriented footguns that are much harder to unwind than mistakes made in a monolithic app, but it's really hard to beat SoA done well with respect to maintainability and operations, in my opinion.
This makes it clear when you might want microservices: you're going through a period of hypergrowth and deployment is a bigger bottleneck than code. This made sense for DoorDash during covid, but that's a very unusual circumstance.