I don't think it's for every project, or even every feature in a given project. But there's a niche it works best for, and that niche happens to represent a massive percentage of all software engineering.
The niche is, you're building a web app. It's some shitty Web 8.0 version of a Craigslist or something. No one wants to pay for formalized testing. No one really wants to spend a lot of time on quality. Arguably it's not that important to users. The year is Enshittified 2030 and they're lucky if they can even log into their online banking, let alone slide into someone's DMs on your platform.
What's the solution? TBH it's not tripling down on good engineering process. (Don't get me wrong, good engineering process absolutely has a role, I'm merely suggesting that as you move from flight control software to online classifieds to your neighbor's recipe blog, that role diminishes in favor of less formal feedback.) The solution is radically forcing the code into the light of day in front of people who can say or do something about the biggest problems. The minor bugs will probably slip through this net, but the really nasty ones WILL get worked out if people on your team have to deal with them first, followed by an 'insider' circle of users, finally followed by the general public.
That translates to certain buzzwords actually representing great ideas in this type of development, such as: feature flags, CI, pilot rollouts, and... trunk-based development.
Just get the code/feature out in front of a couple people who matter, as early as the day it was written, and keep expanding until you're public.
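That "expanding circles" rollout can be sketched as a tiny containment check. This is a hypothetical illustration, not anything from the comment above; the ring names and the rule that outer rollouts include inner rings are my own assumptions.

```python
# Hypothetical rollout rings, innermost first: your team sees a feature
# the day it lands, then an insider circle, then the general public.
ROLLOUT_RINGS = ["team", "insiders", "public"]

def feature_enabled(feature_ring: str, user_ring: str) -> bool:
    """A feature rolled out to ring N is visible to everyone in ring N
    and in all inner rings (rolling out to "public" includes "team")."""
    return ROLLOUT_RINGS.index(user_ring) <= ROLLOUT_RINGS.index(feature_ring)

# Day one: only teammates see it.
assert feature_enabled("team", "team")
assert not feature_enabled("team", "public")
# Later you widen the ring without touching the feature code itself.
assert feature_enabled("public", "insiders")
```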
Isn't this just another representation of the fallacy that it's possible to deliver faster by cutting quality? That's the kind of thing I expect to hear from ignorant stakeholders who know nothing about developing software, not on HN.
But we obviously do this - practices are significantly different between beta-release web apps and software on rovers we send to Mars. And that's a good thing; the tradeoffs are wildly different.
> Isn't this just another representation of the fallacy that it's possible to deliver faster by cutting quality?
Do you think there’s anything that would increase quality on a project that you are on but also slow it down?
It’s not always true - more haste less speed - but it’s not always false either.
> Isn't this just another representation of the fallacy that it's possible to deliver faster by cutting quality?
If we use a real world analogy. You don't use the same engineering quality practices to build a sandwich as you do to build a space probe.
It would be a mistake to x-ray verify connections between parts in a sandwich, but it would most likely be a mistake not to for a space probe.
The engineering practice that needs to be done correctly is judging the error risk, predicting the consequences of errors, and mitigating errors where the cost of mitigation does not outweigh an estimate of the expected cost of damage from the error, including work to make things right.
If you ship a sandwich where the spread was unevenly applied, inspection likely could have determined this before shipping, but the cost of having another person carefully review each sandwich as it is being made outweighs the cost of the occasional dry spot on the bread... which may not be noticed or may not be bothersome... worst case, a sandwich can be remade.
Otoh, if an assembly comes apart in space on the way to another planet, it can't be fixed and the mission might be lost.
If it takes weeks, months, or years for changes to be deployed, it makes a lot of sense to invest in procedures that improve quality before shipping, as responding to issues is expensive.
If it takes minutes or seconds to deploy changes, measuring quality in production becomes more reasonable. Some changes are such that errors are likely to result in expensive cleanup, and those need extensive testing before release... but things where the effects of errors will be mild can be pushed, and if errors appear, a rapid response is often fine. It takes experience/wisdom to know which changes need more qualification and which are safe to try... and experience hopefully reduces errors too.
They were also slow because, if memory serves, creating a branch involved creating a logical copy of everything (files and history) instead of just tracking the delta using pointers/copy-on-write/whatever-clever-thing locally. It was on the order of tens of minutes on one project I worked on and, needless to say, we stopped doing it. Merging the branch was also (as noted) a headache when this all ran in reverse. Merge conflicts were also much more common, though I can't remember why just now -- presumably because synchronizing changes was equally slow and painful, rebasing was not an option, and people couldn't be bothered.
It's hard to overstate just what a leap forward Git and Mercurial were. In addition to quick and cheap branches, you could also commit changes while working offline! The distributed nature of these tools was also revelatory and is underutilized in the modern age.
Of course I don't actually think there is a religion on how to do these things, but I've never seen any particular quality added by people doing quarterly releases from a feature branch. For the most part I think this part at the very beginning of the article explains a lot of the reason as to why that is:
> When we do trunk-based development, the WIP we commit gets used, before any actual user sees it, by our whole team.
In my eyes all of it comes down to management much more than deployment strategies. The disadvantage of feature branching is that it's very easy for management to cut costs on testing. The disadvantage of trunk-based development is that it requires functional teams of people who can actually work together and accept the "norms".
If you want to release a piece of software to a customer who will be using it as-is for some time (which is one possible business model), then you need a version of your software where all features that the customer sees are working well. To do that, you need to fix even minor bugs in those features without adding any (visible) new features to the product.
However, this doesn't scale. You can't pause all development of new features while someone is re-aligning a few UI elements and cleaning up error messages that were too confusing. So, you have to create a split in the software: one part is getting ready to ship, another is work in progress for future releases.
Now, you can create this split using git branches, or using feature flags. So yes, trunk based development can work, you can absolutely build up the new experimental features into the software you're about to ship, but you'll have to hide them behind a feature flag that is disabled - a runtime branch. Or, you can have a release branch where only bug fixes are allowed, and submit major new work on the main branch. There are pros and cons to either solution, but whatever you do, you need stable vs unstable branches somewhere.
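The "runtime branch" idea above can be made concrete with a minimal sketch. The flag registry and the checkout example here are invented for illustration; the point is just that the experimental code ships in the same artifact as stable code but stays dark until the flag flips.

```python
# Illustrative flag registry: the new feature is merged to trunk but
# disabled by default in the release build (a "runtime branch").
FLAGS = {"new_checkout": False}

def legacy_checkout(cart):
    # The stable path every customer gets today.
    return {"total": sum(cart), "flow": "legacy"}

def new_checkout(cart):
    # Work in progress, merged but dark until the flag is enabled.
    return {"total": sum(cart), "flow": "new"}

def checkout(cart):
    if FLAGS.get("new_checkout"):
        return new_checkout(cart)
    return legacy_checkout(cart)

assert checkout([1, 2, 3])["flow"] == "legacy"
FLAGS["new_checkout"] = True  # the runtime "merge": no redeploy needed
assert checkout([1, 2, 3])["flow"] == "new"
```

The trade against a release branch is visible here: the split lives in one `if` at runtime rather than in the SCM, so both paths must keep compiling and passing tests together.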
And, of course, sometimes you'll have to release hotfixes, and there you won't get away without a true SCM branch, since you can't just give your customers your latest and greatest software just to patch a security issue in libcurl for a 2 year old release.
Have to agree with this. In a large company (in the lower two-thirds of the Fortune 500 list) with cost cutting as a constant mantra, our ever-shrinking team hasn't had a testing/QA team for several years. We never got around to automating a lot of tests either. The developers create something and test it manually as per their limited knowledge. Then it's the end users who do the bulk of the testing.
And of course internal teams love also doing QA on the side of their existing jobs ;) I think that’s fair enough to ask from developers and project/product managers but a lot of others on the team basically have to suffer lower productivity to test your code.
I think this is a very lopsided way of looking at pull requests. They are about a lot more than just trust. Reviewing and being reviewed is a great way of learning from colleagues, making common practices gel in a team, and keeping up to date with changes to the codebase. It’s not just a barrier.
> money stuck into the system. It is stuck because the organisation invested considerably in creating all this code on parallel branches. However, as long as this code is not merged into mainline, deployed in production and released to users, it does not generate any revenue. Therefore, it is money stuck in the system. But, because we have less WIP, we create less inventory. As such, we have less invested money stuck in the system.
I’ve been telling people that work in flight should be minimised for a long time, but this is the first time I’ve heard the inventory metaphor and I like it. It’s a similar metaphor to tech debt.
Yes GitLab has a lot to it, but much of it is optional; you won’t be hurting if you just stick to the code repo part.
Interesting, I was under the impression that this analogy was a core component of "lean software".
https://www.joelonsoftware.com/2012/07/09/software-inventory...
I am old enough that my first 2 jobs were subversion-based (subversion on the repo side, git-svn on the client side) and neither team cared much about branching. It might have to do with how awkward subversion branches felt.
Anyway, we would commit (in git lingo pull-rebase & commit) directly and we basically maintained the id of the last reviewed commit and jointly did PR reviews commit-by-commit with the code on the projector in the office.
We had a joint look at the code and everyone voiced their feedback: "I don't understand a variable name like `xzc`", "where is the unit test?", "I read in a blog post recently that you're not supposed to use classes...", etc. Sometimes fixes would be pushed right in the PR review session, so you'd see the variable renamed 4 commits further on.
Anyway, in retrospect it worked surprisingly well at helping the team to develop a joint understanding of values & virtues that the team would like to maintain in their code base. This might of course be nostalgia of a dev looking back into their junior years.
When we finally got pull-requests, we really felt thrown into the future. It was just great. But after a while I started to miss the direct conversations about code with fellow humans.
And honestly I couldn't tell whether PR really improved the quality of the code base in the long run. They lowered the probability of bad code being committed to the code base, but also lowered the probability for a dev to just fix awkward things while they stumbled over them.
At the same time, I expect your PM is delighted that you’re not wasting time getting distracted with all that yak shaving nonsense and are instead working on the next burndown ticket that has been assigned a t-shirt size and the appropriate number of story points.
Your development workflow depends on your product, on your team, their skills, on your organization and so on.
As with everything you need to carefully evaluate, if Trunk-based development is a good fit for your product, your team and your organizational structure. Sometimes it fits well, sometimes it’s not a good fit.
I think a lot of people dismiss gitflow and its feature-branched model as too complicated. Yes, it feels so slow, right?
But it gives inexperienced teams a very clear and documented workflow, especially so, if you need to support multiple versions of your product in the wild. It clearly dictates how to do bugfixes and hotfixes, that need to be done in multiple releases of your product.
Could you achieve the same with Trunk-based development? Maybe? Could an inexperienced team come up with it on their own? I don’t think so.
See, it always depends.
while the feature-branch-based flow solved some problems, it introduced many more, like stale branches and out-of-sync code. this became the default as most devs simply weren't aware, or lacked the discipline, to merge early and cut their "tickets" to an appropriate size.
mostly for compliance reasons in corporate and big teams i would still advise a feature-branch-based flow. but as another colleague noted here, it slows you down, and if it's a one- or two-person shop / product you will reap more benefits by doing a basic trunk-based flow.
Being able to lock files centrally is a massive advantage as long as you have to be at the office to do it. This forces conflict resolution in a much more direct and immediate way than the passive aggressive PR submissions that are eventually reviewed for conflicts after all parties have already burned untold amounts of energy on potentially unmergable code.
When I add non-trivial code I want others on my team to review it first. Once it is on trunk, it gets harder to refactor, as others might already be using it. In this sense an MR is a really good way to get some quality checks in before the code goes to main and needs to be supported (in most cases) indefinitely. Next, we have system tests that I really want to avoid breaking on main and getting EVERYONE stuck.
The problem arises if I keep that branch for more than a week. Merge conflicts get harder, the financial investment is not leveraged, etc etc.
The problem is with long lived branches. This is something you should avoid, you can take it to the extreme, no branches. But extreme is almost never good.
PS CC is really good at rebasing and resolving merge conflicts.
Trunk based development is about having a single master (sorry, "main"), and generating deployable artifacts from there, and then the remaining environments have only deployments of versioned artifacts.
Unless I'm missing some new debate about the value of PullRequests, but that sounds extreme.
The root cause in my company is we hire the lowest cost outsourcing outfit we can find, which hires inexperienced juniors and pretends they are senior and has a huge staff turnover because who the hell wants to work like that for more than 12 months. But of course changing the repo structure and process will compensate for that.
My observation is that a good engineering team will self-organise a methodology that works. The methodology is unimportant compared to getting a good engineering team. I am fortunate that I work in a small niche pocket in the organisation that has good engineers and has resisted normalising with the process churn of the rest of the organisation.
We run a "dark ages" team really. We were on SVN until 2020-ish, because, well it worked fine for us. Now we're on github on commandment but it is used like SVN. We have light feature branches. Master is always stable. Releases are tagged off master. At no point do we use any other github features not even PRs. CI is makefiles. Push to ECR/S3 by human. Deployment is makefiles. Comms is email because Slack is too distracting and demands immediacy. Issue tracking is a 17 year old JIRA project with a default workflow. We have the lowest defect rate, the lowest time to market and highest ROI per head in the entire org and we don't have or need a platform/sre/devops team supporting us.
We are disliked because we didn't do anything for over 10 years other than deliver software.
- Posted from my 45 minute long standup
We don't do standups. Drop 15 mins in whoever's calendar and use the phone works. Yes we're that backwards! Every alternate Friday lunch time we meet up in person though.
I think this applies to many "best practices" and "standards" as well. I could mention many examples of old systems/apps written by one or a few deeply committed and skilled people that were replaced by "better" and more "modern" alternatives that worked less well and took a team of people to maintain. The latter used all the right tools, processes and practices, but they were just not artists, for lack of a better word.
What actually works is trunk-based deployments — keep main always deployable, and ship from there. Simple.
PRs are underrated. They're great for sharing context: you get inline comments, CI runs, you can test stuff in isolation by spinning up infra, and teammates actually see what's changing.
Stacked diffs make juggling multiple PRs manageable. And yeah, PR reviews can slow you down, but honestly, I think that's a plus. Slowing down just enough to have another human look at the code pays off more often than not.
I think there are a lot of interesting questions about using feature flags (a baby branch) vs actual branches. Personally I’m pro flags and anti branches, after a lot of experience in developer tools and CI.
PRs basically have a hard-requirement on branches (or equivalent, like fork), because the code that is being requested to pull needs to be available somewhere. The article also advocates for not using pull requests.
However people who take this position also often advocate for post-merge code review. And more advocate for pair programming or mob programming, which they consider to represent code review as well. So branchless TBD isn’t incompatible with code review, just code review as it is commonly practiced.
This does require your software to have a decent architecture such that feature flags aren't littering every part of your codebase, though. Ideally you want something like a whole module/plugin being enabled/disabled right at the entrypoint of your program/module. But this also pays dividends in the long run.
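One way to sketch that architecture: a single flag check at the composition root wires a whole module in or out, so the rest of the codebase never mentions the flag. The route table and handler names below are hypothetical, just to show the shape.

```python
# Illustrative composition root: flags are consulted exactly once, at
# startup, to decide which modules get wired in at all.
def build_app(flags: dict) -> dict:
    """Assemble the route table once; no flag checks deeper down."""
    routes = {"/": "home_handler"}
    if flags.get("recommendations"):
        # The entire feature hangs off this single branch point, instead
        # of `if flag:` checks littered through handlers and templates.
        routes["/recommendations"] = "recommendations_handler"
    return routes

assert "/recommendations" not in build_app({"recommendations": False})
assert "/recommendations" in build_app({"recommendations": True})
```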
Is everyone talking about the same thing here?
This works with git (or DVCS in general) because your local "master" or whatever is a short lived branch. People seem to forget but with git you are branching every time you clone. Obviously committing broken stuff and WIPs directly to the trunk isn't going to work.
Yes, this is the best way. Having multiple eternal branches like git flow is complete nonsense. Just don't do it. You do not need anything so ridiculous.
I joined one team that used eternal branches to deploy to various environments (staging, prod etc). What a nightmare! You could just leave a commit out of prod and be none the wiser.
With a single integration branch you either do continuous deployment to prod or you use tags to mark releases. No need for another branch. Release branches are there if you need to patch things but don't want to release everything on master. You just tag a release on the release branch.
I disagree with the linked article that says DVCS has made more branching common. Branching was never difficult, merging was. Git in particular made short lived branches a thing because now they are cheap enough to merge every day.
I think this is where I'm stuck. I rarely work in projects where my team can 'use' the code in any meaningful way. When I wrote code for accountant my team wasn't going to be able to tell if the wash sale calculations went wonky.
Some numbers would have helped: the amount of work that was done, the number of bug reports. Anyone can write that their customers are happy.
If they previously worked with PRs, some figures on how that went compared to what they do now.
For me it is just fluff and I wasted time reading it.
"Pull requests" by that name maybe. But code reviews existed way before that. As soon as code was written, programmers started to ask for peer reviews of their code.
- Design and code inspections to reduce errors in program development. 31 December 1976. https://ieeexplore.ieee.org/document/5388086
But I guess that today's programmers think that software engineering was created in 2010.
But good luck selling that to some regulated companies where basic risk assessment is a must. Tell a financial institution you'll _never_ fuck up their money transfer pipeline, because you don't do code reviews, you "deliver value" even on Fridays, and your methodology avoids all sorts of errors or malicious modifications.
IMHO, trunk-based development removes some safety. Software development is a complex task and we all risk breaking things even on a good day. You're not smarter for removing safety from the process.
Unless it doesn't happen. Issues that are dealt with automatically are those that would break the build, or make tests or linting checks fail; everything else can simply accumulate without anyone "wasting time" on it.
Technical debt is often recognized retroactively (e.g. there is a new foreign customer and not enough time to write thousands of neglected translated labels and messages).
Ozzie_osman•6h ago
They also say that "no branches are created". In reality, I don't see how you do any modern development without branches. If anything, contributors need a local branch that is then merged to the trunk (unless, somehow, everyone is live-editing the main branch?). The idea of TBD is just to have smaller, short-lived branches that are frequently merged into main (very doable with pull requests).
ainiriand•6h ago
In our case there are no PRs; we push to master, the test suite is triggered, and the dev environment receives the new code. It moves to other environments if the tests pass. We release master using a canary instance that gets only a small share of the traffic; if our success metrics fail, we roll back and start a hotfix. Yes, you can have as many branches as you want locally, but in the remote there is only one.
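The rollback decision in a canary setup like this can be boiled down to a small gate. The metric, threshold, and verdict names below are my own invention for illustration, not the poster's actual pipeline.

```python
# Hypothetical canary gate: compare the canary's error rate against the
# stable fleet's and decide whether to promote or roll back.
def canary_verdict(canary_error_rate: float,
                   baseline_error_rate: float,
                   tolerance: float = 0.01) -> str:
    """Promote only if the canary is not meaningfully worse than baseline."""
    if canary_error_rate > baseline_error_rate + tolerance:
        return "rollback"  # pull the canary, start a hotfix
    return "promote"       # shift the remaining traffic over

assert canary_verdict(0.002, 0.001) == "promote"
assert canary_verdict(0.050, 0.001) == "rollback"
```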
coolgoose•6h ago
It feels like you are just canceling a lot of pipeline tests all the time, assuming there's a cadence of 10+ pushes per day.
MichaelNolan•6h ago
Just to clarify, are you saying there is no code review where another engineer looks at your change before you merge to main?
I’ve never seen a TBD project work that way. I can’t imagine how it would work for anything but the smallest teams.
Disposal8433•5h ago
If you have a bug due to the last commit, don't you have to write fixes all the time? How do you review the code?
Disposal8433•6h ago
One of his previous articles (https://thinkinglabs.io/articles/2022/05/30/on-the-evilness-...) takes it one step further. It seems that everyone works on everything at once and people step on each other's toes all the time. They would definitely improve by using the feature branches that they shun.
Last but not least, he's talking about quality, high-trust, or on-demand build which are topics unrelated to a branching strategy. Quality is done with local tools like linters and CI to prevent bad merges, and on-demand build can be done with tags in any branching strategy. The whole article is confusing actually.
> everyone is working on a single mainline
It's the same for every strategy. And TBD mentions release branches, so... he's wrong in the very first sentence.
I do like TBD though because it's a clean way to handle release and feature branches, but that's it.
JimDabell•5h ago
It’s not. I’ve used TBD for mobile apps and it’s been great. I do prefer short-lived branches with PRs (which is another form of TBD), but it’s wrong to say that their approach is limited to web apps in a specific context.
Disposal8433•5h ago
I'm not criticizing TBD, which I like; I'm criticizing his framing it as being about TBD, when in reality the whole article talks about his specific job and the organization of his team, which again is unrelated to any branching strategy.
JimDabell•5h ago
So are you saying I just imagined using this for non-web projects?
> He's saying that "we uncover more problems sooner" which can only be done on systems such as web sites where the tests and their results are done very fast.
I don’t understand why you think this. Uncovering problems quickly is one of the major points of continuous integration and it’s in no way limited to the web.
> Where I work we can have weeks of testing to make sure that every feature has not introduced a bug.
This does not mean that branchless TBD is limited to the web, it means that it’s not suitable for your project.
Disposal8433•5h ago
I like TBD, but the article is NOT about TBD.
> are you saying I just imagined
I use TBD for everything and I like it. I was talking about the writer of the article, not you. The article is, again, NOT about TBD and the writer, most likely an overpriced consultant, is very confused about branching strategies, and most likely believe that a daily standup or some pair programming is part of that strategy. His other articles confirm this.
JimDabell•5h ago
> The article is, again, NOT about TBD and the writer, most likely an overpriced consultant, is very confused about branching strategies, and most likely believe that a daily standup or some pair programming is part of that strategy.
This does not resemble the article we are commenting on.