Still averse to the monorepo though, but I understand why it's attractive.
However, there's a big difference between development and releases. You still want to be able to cut stable releases that allow for cherry-picks, for example, especially in a monorepo.
Atomic changes are mostly a lie when talking about cross-API calls, e.g. a frontend talking to a backend. You should always define some kind of stable API.
Even if we squash it into main later, it’s helpful for reviewing.
Other than that, you're pretty free in how you write commit messages.
I can spend hours OCDing over my git branch commit history.
-or-
I can spend those hours getting actual work done and squash at the end to clean up the disaster of commits I made along the way so I could easily roll back when needed.
But also, rewriting history only works if you haven't pushed code and are working as a solo developer.
It doesn't work when the team is working on a feature in a branch and we need to be pushing to run and test deployment via pipelines.
Weird, works fine in our team. Force with lease allows me to push again and the most common type of branch is per-dev and short lived.
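For reference, the flow being described is roughly this (a sketch; the branch name is a placeholder):

```sh
# Rebase the short-lived per-dev branch onto the latest main
git fetch origin
git rebase origin/main

# Push the rewritten history, but refuse to clobber the remote
# if someone else pushed since our last fetch
git push --force-with-lease origin my-feature
```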
Also, rebasing is just so fraught with potential errors. Every month or two, the devs who were rebasing would screw up some feature branch with work they needed on it, and would look to me to fix it for some reason. Such a time sink for so little benefit.
I eventually banned rebasing, force pushes, and mandated squash merges to main - and we magically stopped having any of these problems.
The Linux kernel manages to do it for 1000+ devs.
My history ends up being:
- add feature x
- linting
- add e2e tests
- formatting
- additional comments for feature
- fix broken test (ci caught this)
- update README for new feature
- linting
With a squash it can boil down to just “added feature x” with smaller changes inside the description.
Where logical commits (also called atomic commits) really shine is when you're making multiple logically distinct changes that depend on each other. E.g. "convert subsystem A to use api Y instead of deprecated api X", "remove now-unused api X", "implement feature B in api Y", "expose feature B in subsystem A". Now they can be reviewed independently, and if feature B turns out to need more work, the first commits can be merged independently (or if that's discovered after it's already merged, the last commits can be reverted independently).
If after creating (or pushing) this sequence of commits, I need to fix linting/formatting/CI, I'll put the fixes in a fixup commit for the appropriate commit and meld them using a rebase. Takes about 30s to do manually, and can be automated using tools like git-absorb. However, in reality I don't need to do this often: the breakdown of bigger tasks into logical chunks is something I already do, as it helps me stay focused, and I add tests and run linting/formatting/etc before I commit.
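For anyone unfamiliar with fixup commits, a minimal sketch of the manual flow just described (abc123 is a placeholder hash; git-absorb is the third-party tool mentioned):

```sh
# Stage the lint/CI fix and attach it to the commit it belongs to
git add -u
git commit --fixup=abc123

# Replay the branch, melding each fixup into its target commit
git rebase -i --autosquash origin/main

# Or let git-absorb find the target commits automatically
git absorb --and-rebase
```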
And yes, more or less the same result can be achieved by creating multiple MRs and using squashing; but usually that's a much worse experience.
That seems better as long as you can keep it standard across the team. I don’t usually check each commit when reviewing since frequent iterative commits mean folks change their mind and I’d review already removed logic when looking at early commits.
I’ve been scraping by on basic git usage so didn’t know about fix-up commits, that’s excellent.
every commit is reviewed individually. every commit must have a meaningful message, no "wip fix whatever" nonsense. every commit must pass CI. every commit is pushed to master in order.
It's just too bad not enough graphical UIs default to `--first-parent` and a drill-down like approach over cluttered "subway graphs".
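For those who haven't used it, first-parent traversal collapses each merge to a single entry, so mainline history reads as a list of features you can drill into:

```sh
# One line per merge to main, hiding the commits inside each branch
git log --first-parent --oneline main

# Drill down: list only the commits that came in with a given merge
git log --oneline MERGE_SHA^1..MERGE_SHA^2   # MERGE_SHA is a placeholder
```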
So one branch had 40x "Deploy to Dev" commits. And those got merged straight into the repo.
They added no information.
No information loss, and every commit is valid on their own, so cherry picks maintain the same level of quality.
When I am ready to make my PR I delete my remote feature branch and then squash the commits. I can use all my granular commit comments to write a nice verbose comment for that squashed commit. Rarely I will have more than one commit if a user story was bigger than it should be. Usually this happens when more necessary work is discovered. At this stage each larger squashed commit is a fully complete change.
The audience for these commits is everyone who comes after me to look at this code. They aren’t interested in seeing it took me 10 commits to fix a test that only fails in a GitHub action runner. They want the final change with a descriptive commit description. Also if they need to port this change to an earlier release as a hotfix they know there is a single commit to cherry pick to bring in that change. They don’t need to go through that dev commit history to track it all down.
- You need to remove trash commits that appear when you need to rerun CI.
- You need to remove commits with that extra change you forgot.
- You want to perform any other kind of rebase to clean up messages.
I assume in this thread some people mean squashing from the perspective of a system like Gitlab where it's done automatically, but for me squashing can mean simply running an interactive (or fixup) rebase and leaving only important commits that provide meaningful information to the target branch.
Serious question, what's going on here?
Are you using a "trash commit" to trigger your CI?
Is your CI creating "trash commits" (because build artefacts)?
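One common source of such commits, if that's what's meant here, is re-kicking a pipeline with an empty commit:

```sh
# A commit with no changes whose only purpose is to trigger a fresh CI run
git commit --allow-empty -m "retrigger CI"
git push
```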
It's harder to debug as well (this 3000-line commit has a change causing the bug... best of luck finding it AND why it was changed that way in the first place).
I, myself, prefer that people tidy up their branches such that their commits are clear on intent, and then rebase into main, with a merge commit at the tip (meaning that you can see the commits AND where the PR began/ended).
git bisect is a tonne easier when you have that
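A sketch of the drill-down bisect this enables (the --first-parent option needs a reasonably recent git, 2.29+; v1.4.0 stands in for your last known-good ref):

```sh
git bisect start --first-parent   # only test mainline merge commits
git bisect bad HEAD               # the current tip is broken
git bisect good v1.4.0            # last release known to work
# git now checks out midpoints; mark each "good"/"bad" until the
# offending PR's merge commit is identified, then inspect inside it
git bisect reset
```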
Is there overhead to creating a branch?
I'm using a monorepo for my company across 3+ products and so far we're deploying from stable release to stable release without any issues.
Canary/Incremental, not so much
But (in my mind) even a front end is going to get told it is out of date/unusable and needs to be upgraded when it next attempts to interact with the service. At least in my mind, that means it will have to upgrade, which isn't "atomic" in the strictest sense of the word, but it's as close as you're going to get.
There's a bigger problem though: in practice there's almost always a client that you don't control, and can't switch along with your services, e.g. an old frontend loaded by a user's browser.
The moment you have two production services that talk to each other, you end up with one of them being deployed before the other.
Hell, you lose "atomic" assets the moment you serve HTML that has URLs in it.
Consider switching from <img src=kitty.jpg> to <img src=puppy.jpg>. If, for example, you delete kitty.jpg from the server, upload puppy.jpg, then change the HTML, you can have a client holding a URL to kitty.jpg while kitty.jpg is already gone. Generally, anything you've published needs to stay alive long enough to "flush out the stragglers".
Same thing applies to RPC contracts.
Same thing applies to SQL schema changes.
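In all three cases the safe ordering is the same: ship the additive change first, the destructive change last. A sketch of the kitty/puppy example above, with a hypothetical S3 bucket:

```sh
# 1. Publish the new asset while the old one is still being served
aws s3 cp puppy.jpg s3://my-assets/puppy.jpg

# 2. Deploy the HTML that references puppy.jpg

# 3. Only after pages referencing kitty.jpg have flushed out of caches:
aws s3 rm s3://my-assets/kitty.jpg
```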
IMO, monorepos are much easier to handle. Monoliths are also easier to handle. A monorepo monolith is pretty much as good as it gets for a web application. Doing anything else will only make your life harder, for benefits that are so small and so rare that nobody cares.
If you have a bajillion services and they're all doing their own thing with their own DB, and you have to reconcile versions across all of them, and you don't have active/passive deployments, yes, that will be a huge pain in the ass.
So just don't do that. There, problem solved. People need to stop doing micro services or even medium sized services. Make it one big ole monolith, maybe 2 monoliths for long running tasks.
And yes, it's often okay to ignore the problem for small sites that can tolerate the downtime.
Cherry picks are useful for fixing releases or adding changes without having to make an entirely new release. This is especially true for large monorepos which may have all sorts of changes in between. Cherry picks are a much safer way to “patch” releases without having to create an entirely new release, especially if the release process itself is long and you want to use a limited scope “emergency” one.
Atomic changes - assuming this is related to releases as well, it’s because the release process for the various systems might not be in sync. If you make a change where the frontend release that uses a new backend feature is released alongside the backend feature itself, you can get version drift issues unless everything happens in lock-step and you have strong regional isolation. Cherry picks are a way to circumvent this, but it’s better to not make these changes “atomic” in the first place.
A monorepo only allows you to reason about the entire product as it should be. The details of how to migrate a live service atomically have little to do with how the codebase migrates atomically.
This seems like simply not following the rules with having a monorepo, because the DB Cluster is not running the version in the repo.
Being 17 versions behind is an extreme example, but always having everything run the latest version in the repo is impossible, if only because deployments across nodes aren't perfectly synchronised.
Adding new APIs is always easy. Removing them not so much since other teams may not want to do a new release just to update to your new API schema.
We use Unleash at work, which is open source, and it works pretty well.
First, my philosophy is that long-lived feature branches are bad, and lead to pain and risk once they're complete and need to be merged.
Instead, prefer to work in small, incremental PRs that are quickly merged to main but dormant in production. This ensures the team is aware of the developing feature and cannot break your in-progress code (e.g. with a large refactor).
This usage of "feature flags" is simple enough that it's fine and maybe even preferable to build yourself. It could be as simple as env vars or a config file.
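A minimal sketch of that env-var variant (the flag and function names are hypothetical; in application code this is just one conditional):

```sh
# Dormant-code flag: defaults off, flipped per environment at deploy time
if [ "${FEATURE_NEW_CHECKOUT:-false}" = "true" ]; then
  run_new_checkout_flow   # in-progress code, merged but not exposed
else
  run_old_checkout_flow
fi
```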
--
However, feature flagging may also refer to deploying two variants of completed code for A/B testing or just an incremental rollout. This requires the ability to expose different code paths to selected users and measure the impact.
This sort of tooling is more difficult to build. It's not impossible, but comparatively complex because it probably needs to be adjustable easily without releases (i.e. requires a persistence layer) and by non-engineers (i.e. requires an admin UI). This becomes a product, and unless it's core to your business, it's probably better to pick something off the shelf.
Something I learned later in my career is that measuring the impact is actually a separate responsibility. Product metrics should be reported on anyway, and this is merely adding the ability to tag requests or other units of work with the variants applied, and slice your reporting on it. It's probably better not to build this either, unless you have a niche requirement not served by the market.
--
These are clearly two use cases, but share the overloaded term "feature flag":
1. Maintaining unfinished code in `main` without exposing it to users, which is far superior to long-lived feature branches but requires the ability to toggle.
2. Choosing which completed features to show to users to guide your product development.
(2) is likely better served by something off the shelf. And although they're orthogonal use cases, sometimes the same tool can support both. But if you only need (1), I wouldn't invest in a complex tool that's designed to support (2)—which I think is where I agree with you :)
Feature flags are a good idea, but they require a lot of discipline and maintenance. In practice, they tend to be overused, and provide more negatives than positives. They're a complement, but certainly not a replacement for VCS branches, especially in monorepos.
Can you explain this comment? Are you saying to develop directly in the main branch?
How do you manage the various time scales and complexity scales of changes? Task/project length can vary from hours to years and dependencies can range from single systems to many different systems, internal and external.
The complexity comes from releases. Suppose you have a good commit 123 where all your tests pass for some project; you cut a release and deploy it.
Then development continues until commit 234, but your service is still at 123. Some critical bug is found, and fixed in commit 235. You can't just redeploy at 235 since the in-between may include development of new features that aren't ready, so you just cherry pick the fix to your release.
It's branches in a way, but _only_ release branches. The only valid operations are creating new releases from head, or applying cherrypicks to existing releases.
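Using the commit numbers from the example above, the whole lifecycle is something like:

```sh
# Cut a release branch at the known-good commit
git checkout -b release-1.0 123      # "123" stands in for a real sha
# ...main moves on to 234; the critical fix lands as 235...

# Patch the release without pulling in unfinished feature work
git checkout release-1.0
git cherry-pick 235
git tag v1.0.1                       # redeploy from this tag
```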
So you can say that you have short-lived development branches that are always rebased on main. Along with the release branch and cherry-pick process, the workflow you describe is quite common.
They don’t do code reviews or any sort of parallel development.
They’re under the impression that “releases are complex and this is how they avoid it” but they just moved the complexity and sacrificed things like parallel work, code reviews, reverts of whole features.
What there isn't, is long lived feature branches with non-integrated changes.
Ideally you'd do the work in your hotfix branch and merge it to main from there rather than cherry picking, but I feel that mostly because git isn't always great at cherry picking.
And you've personally done this for a larger project with significant amount of changes and a longer duration (like maybe 6 months to a year)?
I'm struggling to understand why you would eliminate branches? It would increase complexity, work and duration of projects to try to shoehorn 2 different system models into one system. Your 6 month project just shifted to a 12 to 24 month project.
In my experience development branches vastly increase complexity by hiding the integration issues until very late when you try to merge.
Either way, I still don't understand how you can reasonably manage the complexity, or what value it brings.
Example:
main - current production - always matches exactly what is being executed in production, no differences allowed
production_qa - for testing production changes independent of the big project
production_dev_branches - for developing production changes during big project
big_project_qa_branch - tons of changes, currently being used to qa all of the interactions with this system as well as integrations to multiple other systems internal and external
big_project_dev_branches - as these get finalized and ready for qa they move to qa
Questions:
When production changes and project changes are in direct conflict, how can you possibly handle that if everyone is just committing to one branch?
How do you create a clean QA image for all of the different types of testing and ultimately business training that will need to happen for the project?
In general, all new code gets added to the tip of main, your only development branch. Then, new features can also be behind feature flags optionally. This allows developers to test and develop on the latest commit. They can enable a flag if they are interested in a particular feature. Ideally new code also comes with relevant automated tests just to keep the quality of the branch high.
Once a feature is "sufficiently tested" whatever that may mean for your team it can be enabled by default, but it won't be usable until deployed.
Critically, there is CI that validates every commit, _but_ deployments are not strictly performed from every commit. Release processes can be very varied.
A simple example: we decide to create a release from commit 123, which has some features enabled. You grab the code, build it, run automated tests, and generate artifacts like server binaries or assets. This is a small team with no strict SLAs, so it's okay to trust automated tests and deploy right to production. That's the end; commit 123 is live.
As another example, a more complex service may require more testing. You do the same first steps, grab commit 123, test, build, but now deploy to staging. At this point staging will be fixed to commit 123, even as development continues. A QA team can perform heavy testing, fixes are made to main and cherry picked, or the release dropped if something is very wrong. At some point the release is verified and you just promote it to production.
So development is always driven from the tip of the main branch. Features can optionally be behind flags. And releases allow for as much control as you need.
There's no rule that says you can only have one release or anything like that. You could have 1 automatic release every night if you want to.
Some points that make it work in my experience are:
1. Decent test culture. You really want to have at least some metric for which commits are good release candidates.
2. You'll need some real release management system. The common tools available like to tie together CI and CD, which is not the right way to think about it IMO (e.g. your GitHub CI makes a deployment).
TL;DR:
Multiple releases, use flags or configuration for the different deployments. They could all even be from the same or different commits.
But how would you create that QA environment when it involves thousands of commits over a 6 month period?
It will be highly dependent on the kind of software you are building. My team in particular deals with a project that cuts "feature complete" releases every 6 months or so, at that point only fixes are allowed for another month or so before launch, during this time feature development continues on main. Another project we have is not production critical, we only do automated nightlies and that's it.
For a big project, typically it involves deploying to a fully functioning QA environment so all functionality can be tested end to end, including interactions with all other systems internal to the enterprise and external. Eventually user acceptance testing and finally user training before going live.
We build a user-friendly way for non-technical users to interact with a repo using Claude Code. It's especially focused on markdown, giving red/green diffs on RENDERED markdown files which nobody else has. It supports developers as well, but our goal is to be much more user friendly than VSCode forks.
Internally we have been doing a lot of what they talk about here, doing our design work, business planning, and marketing with Claude Code in our main repo.
For example, I can have one prompt writing Playwright tests for happy paths while another prompt fixes a bug of duplicated rows in a table caused by a missing SQL JOIN condition.
What does this mean in context of downloadable desktop apps?
At some point, you will have many teams. And one of them _will not_ be able to validate and accept some upgrade. Maybe a regression causes something only they use to break. Now the entire org is held hostage by the version needs of one team. Yes, this happens at slightly larger orgs. I've seen it many times.
And since you have to design your changes to be backwards compatible already, why not leverage a gradual roll out?
Do you update your app lock-step when AWS updates something? Or when your email service provider expands their API? No, of course not. And you don't have to lock yourself to other teams in your org for the same reason.
Monorepos are hotbeds of cross contamination and reaching beyond API boundaries. Having all the context for AI in one place is hard to beat though.
> you will have the old system using the old schema and the new system using the new schema unless you design for forwards-backwards compatible changes
Of course you design changes to be backwards compatible. Even if you have a single node and have no networked APIs. Because what if you need to rollback?
> Maybe a regression causes something only they use to break. Now the entire org is held hostage by the version needs of one team.
This is an organizational issue not a tech issue. Who gives that one team the power to hold back large changes that benefit the entire org? You need a competent director or lead to say no to this kind of hostage situation. You need defined policies that balance the needs of any individual team versus the entire org. You need to talk and find a mutually accepted middle ground between teams that want new features and teams that want stability and no regressions.
If my code has to be backwards compatible to survive the deployment, then having the code in two different repos isn’t such a big deal, because it’ll all keep working while I update the consumer code.
Multiple repos shouldn't depend on a single shared library that needs to be updated in lockstep. If they do, something has gone horribly wrong.
It’s both. Furthermore, you _can_ solve organizational problems with tech. (Personally, I prefer solutions to problems that do not rely strictly on human competence)
This isn't to say the monorepo is bad, though, but they're clearly naive about some things:
> No sync issues. No "wait, which repo has the current pricing?" No deploy coordination across three teams. Just one change, everywhere, instantly.
It's literally impossible to deploy "one change" simultaneously, even with the simplest n-tier architecture. As you mention, a DB schema is a great example. You physically cannot change a database schema and application code at the exact same time. You either have to ensure backwards compatibility or accept that there will be an outage while old application code runs against a new database, or vice-versa. And the latter works exactly up until an incident where your automated DB migration fails due to unexpected data in production, breaking the deployed code and causing a panic as on-call engineers try to determine whether to fix the migration or roll back the application code to fix the site.
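The usual way to keep both sides working through a deploy is the expand/contract pattern: every change is split so that both old and new application code can run against the schema at every step. A sketch with hypothetical table and column names:

```sh
# Expand: additive change the old code simply ignores
psql "$DATABASE_URL" -c 'ALTER TABLE orders ADD COLUMN total_cents bigint;'

# Deploy app code that writes both columns and prefers the new one,
# then backfill historical rows.

# Contract: only after a full rollback window has safely passed
psql "$DATABASE_URL" -c 'ALTER TABLE orders DROP COLUMN total_dollars;'
```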
To be a lot more cynical; this is clearly an AI-generated blog post by a fly-by-night OpenAI-wrapper company and I suspect they have few paying customers, if any, and they probably won't exist in 12 months. And when you have few paying customers, any engineering paradigm works, because it simply does not matter.
We have a monorepo, we use automated code generation (openapi-generator) for API clients for each service derived from an OpenAPI.json generated by the server framework. Service client changes cascade instantly. We have a custom CI job that trawls git and figures out which projects changed (including dependencies) as to compute which services need to be rebuilt/redeployed. We may just not be at scale—thank God. We're a small team.
> We may just not be at scale—thank God. We're a small team.
It's perfectly acceptable for newer companies and small teams to not solve these problems. If you don't have customers who care that your website might go down for a few minutes during a deploy, take advantage of that while you can. I'm not saying that out of arrogance or belittlement or anything; zero-downtime deployments and maintaining backwards compatibility have an engineering cost, and if you don't have to pay that cost, then don't! But you should at least be cognizant that it's an engineering decision you're explicitly making.
months-long delays on important updates due to some large project doing extremely bad things and pushing off a minor refactor endlessly has been the norm for me. but they're big so they wield a lot of political power so they get away with it every time.
or worse, as a library owner: spending INCREDIBLE amounts of time making sure a very minor change is safe, because you can't gradually roll it out to low-risk early adopter teams unless it's feature-flagged to hell and back. and if you missed something, roll back, write a report and say "oops" with far too many words in several meetings, spend a couple weeks triple checking feature flagging actually works like everyone thought (it does not, for at least 27 teams using your project), and then try again. while everyone else working on it is also stuck behind that queue.
monorepos suck imo. they're mostly company lock-in, because they teach most people absolutely no skills they'd need in another job (or for contributing to open source - it's a brain drain on the ecosystem), and all external skill is useless because every monorepo is a fractal snowflake of garbage.
Seems like a weird workaround, you could just clone multiple repos into a workspace. Agree with all your other points though.
And monorepo or not, bad software developers will always run into this issue. Most software will not have 'many teams'. Most software is written by a lot of small companies doing niche things. Big software companies with more than one team normally have release managers.
My tip: use architecture unit tests for external-facing APIs. If you are a smaller company, 24/7 uptime doesn't have to be the thing; just communicate this to your customers. But overall, if you run SaaS software and still don't know how to do zero-downtime deployment in 2025/2026, just keep doing whatever you are doing, because man, come on...
The alternative of every service being on their own version of libraries and never updating is worse.
Company website in the same repo means you can find branding material and company tone from blogs, meaning you can generate customer slides, video demos
Going further than docs + code, why not also store bugs, issues, etc.? I wonder.
The people who say polyrepos cause breakage aren't doing it right. When you depend across repos in a polyrepo setup, you should depend on specific versions of things across repos, not the git head. Also, ideally, depend on properly installed binaries, not sources.
To be fair, this problem is not solved at all by monorepos. Basically, only careful use of gRPC (and similar technology) can help solve this… and it doesn’t really solve for application layer semantics, merely wire protocol compatibility. I’m not aware of any general comprehensive and easy solution.
In a polyrepo environment, either:
- B updates their endpoint in a backward compatible fashion, making sure older stuff still works
OR
- B releases a new version of their API at /api/2.0 but keeps /api/1.0 active and working until nothing depends on it anymore, releasing deprecation messages to devs of anyone depending on 1.0
Never expose your storage/backend type. Whenever you do, any consumers (your UI, consumers of your API, whatever) will take dependencies on it in ways you will not expect or predict. It makes changes somewhere between miserable and impossible depending on the exact change you want to make.
A UI-specific type means you can refactor the backend, make whatever changes you want, and have it invisible to the UI. When the UI eventually needs to know, you can expose that in a safe way and then update the UI to process it.
It's tempting to return a db table type but you don't have to.
Of course, it’s still a pretty rough and dirty way to do it. But it works for small/demo projects.
It's definitely not amazing, code generation in general will always have its quirks, but protobuf has some decent guardrails to keep the protocol backwards-forwards compatible (which was painful with Avro without tooling for enforcement), it can be used with JSON as a transport for marshaling if needed/wanted, and is mature enough to have a decent ecosystem of libraries around.
Not that I absolutely love it but it gets the job done.
It just looks like a normal frontend+backend product monorepo, with the only somewhat unusual inclusion of the marketing folder.
maybe they could be encrypted, and you could say "well it's everything but the encryption key, which is owned in physical form by the CEO."
there's a lot of power I think to have everything in one place. maybe github could add the notion of private folders? but now that's ACLs... probably pushing the tool way too far.
> maybe they could be encrypted, and you could say "well it's everything but the encryption key, which is owned in physical form by the CEO."
I don't see how this is any different from most projects where keys and the like are kept in some form of secrets manager (AWS services, GHA Secrets, Hashi Vault, etc.). How close do you think this is? It deploys everything but the actual backend/frontend code.
We used p4 rather than git though.
Here were the downsides we ran into:
- Getting buy in to do everything through the repo. We had our feature flags controlled via a yaml file in the repo as well, and pretty quickly people got mad at the time it took for us to update a feature flag (open MR -> merge MR -> have CI update feature flag in our envs), and optimizing that took quite a while. It then made branch invariants harder to reason about (everything in the production branch is what is in our live environments, but except for feature flags). So, we moved that out of the monorepo into an actual service.
- CI time and complexity. When we started getting to around 20 services that deployed independently, GitLab started choking on the size of our CI configuration and we'd see a spinner for about 5 minutes before our pipeline even launched. Couple that with special snowflakes like the feature flag system I mentioned above, eventually it got to the point that only a few people knew exactly how rollouts edge cases worked. The juice was not worth the squeeze at that point (the juice being - "the repo is the source of truth for everything")
- Test times. We ran some e2e UI tests with Cypress that required a lot of beefy instances, and for safety we'd run them every single time. Couple that with flakiness, and you'd have a lot of red pipelines when the goal was 100% green all the time.
That being said, we got a ton of good stuff out of it too. I distinctly remember one day that I updated all but 2 of our services to run on ARM without involving service authors and our compute spend went down by 70% for that month because nobody was using the m8g spot instances, which had just been released.
They had to open a whole epic in order to reduce the memory usage, but I think all that work just let us continue to use GitLab as the number of services we grew increased. They recommended we use something called parent/child pipelines, but it would have been a fairly large rewrite of our logic.
wat. You are running the marketing page from the same repo, yet having an LLM make the updates? You have the data file available. Just read the pricing info from your config file and display it?
AI didn’t magically uninvent “let’s have someone else check this over before it’s shipped”.
This is something that is, of course, super relevant given context management for agentic AI. So there's great appeal in doing this.
And today, it might even be the best decision. But this really feels like an alpha version of something that will have much better tooling in the near future. JSON and Markdown are beautifully simple information containers, but they aren't friendly for humans compared with something like Notion or Excel. Again I'll say, I'm confident that in the near future we'll start to see solutions emerge that structure documentation in a way that is friendly to both AIs and humans.
"When a feature touches the backend API, the frontend component, the documentation, and the marketing site—why should that be four repositories, four PRs, four merge coordination meetings?
The monorepo isn't a constraint. It's a force multiplier."
Thank you Claude :)
I'm wondering once the exceedingly obvious LLM style creeps more and more into the public mind if we're going to look back at these blog posts and just cringe at how blatant they were in retrospect. The models are going to improve (and people will catch on that you can't just use vanilla output from the models as blog posts without some actual editing) and these posts will just stand out like some very sore thumbs.
(ps all of the above 100% human written ;)
Fuck yes I love this attitude to transparency and code-based organization. This is the kind of stuff that gets me going in the morning for work, the kind of organization and utility I honestly aspire to implement someday.
As many commenters rightly point out, this doesn't run the human side of the company. It could, though, if the company took this approach seriously enough. My personal two cents, it could be done as a separate monorepo, provided the company and its staff remain disciplined in its execution and maintenance. It'd be far easier to have a CSV dictate employees and RBAC rather than bootstrapping Active Directory and fussing with its integrations/tentacles. Putting department processes into open documentation removes obfuscation and a significant degree of process politics, enabling more staff to engage in self-service rather than figuring out who wields the power to do a thing.
I really love everything about this, and I'd like to see more of it, AI or not. Less obfuscation and more transparency is how you increase velocity in any organization.
Crazy that nobody can be bothered to get rid of the obvious AI-isms "This isn't just for...", "The Challenges (And How We Handle Them)", "One PR. One review. One merge. Everything ships together." It's an immediate signal that whoever wrote this DGAF.
Not to say this post isn't AI generated but you might want a better tool (if one exists)
I've had a blog post kicking around about this for a while, it's CRAZY how much more expensive AI detection is than AI generation.
In my mind content generated today with AI "tells" like the above and a general zero-calorie-feel that also trip an AI detector are very likely AI generated.
A text either has value to you or it doesn’t. I don’t really understand what the level of AI involvement has to do with it. A human can produce slop, an AI can produce an insightful piece. I rely mostly on HN to tell them apart value-wise.
Human articles on HN are largely shit. I would personally prefer to see either AI articles, or human articles by experts (which we get almost none of on HN)
> Last week, I updated our pricing limits. One JSON file. The backend started enforcing the new caps, the frontend displayed them correctly, the marketing site showed them on the pricing page, and our docs reflected the change—all from a single commit.
It's almost as if when you seek to find patterns, you'll find patterns, even if there are none. I think it'd benefit these kinds of people to remember the scientific "rule" of correlation does not equal causation and vice versa.
You could easily scoff the same way about some number of API endpoints, class methods, config options, etc, and it still wouldn't be meaningful without context.
It's ok to split or lump as the team sees fit.
There may not be a universally correct granularity, but that doesn't mean clearly incorrect ones don't exist. 50+ services is almost always too many, except for orgs with hundreds or thousands of engineers.
I look forward to when we see the article about breaking the monorepo nightmare.
Also, are we just upvoting obvious AI gen marketing slop now?
“It’s all there Claude just read it.”
Ok…
- frontend
- backend
- website
is already confusing to me.
I understand that one commit seems nice, but you could have achieved this with e.g. 3 repos and very easily maintained all of them. There's a bit of overhead of course, but having some experience working with a team that has a few "monorepos", I know that the cost to actually make it work is significant.
I guess I could work with either option now.
And if it's not, it breaks everything. This is an assumption you can't make.
I think it’s better to always ask your devs to be concerned about backwards compatibility, and sometimes forwards compatibility, and to add test suites if possible to monitor for unexpected incompatible changes.
Opting for a monorepo because you don't want to alias this flag is.. something you can do, I guess.