It’s still very satisfying to watch run though, especially if you write a script that it can run automatically (based on the existing code) to determine if it’s a good or bad commit.
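For anyone who hasn't seen it in action, a minimal sketch of such a session (check.sh and the v1.4.0 tag are hypothetical; the script should exit 0 for a good commit and non-zero for a bad one):

git bisect start
git bisect bad HEAD          # the current commit exhibits the bug
git bisect good v1.4.0       # a release known to work
git bisect run ./check.sh    # git checks out midpoints and runs the script for you
git bisect reset             # return to where you started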
"What if the unit tests have bugs?"
1. Start with working code
2. Introduce bug
3. Identify bug
4. Write a regression test
5. Bisect with new test
In many cases you can skip the bisect because the description of the bug makes it clear where the issue is, but not always.
You can go back and efficiently run a new test across old commits.
Yet, once you've identified a hole, you can write a script to test for it, run `git bisect` to identify what commit introduced the hole, and then triage the possible fallout.
Tests can't prove the absence of bugs, only their presence. Bugs or regressions will slip through.
Bisect is for when that happens and the cause is not obvious.
To sum up: bisect and tests are not on opposite sides; they complement each other.
"typo fix"
For the 5% of engineers that diligently split each PR into nice semantic changes, I suppose that's nice. But the vast majority of engineers don't do this. Individual commits in a PR are testing and iteration. You don't want to read through that.
Unless, of course, you're asking the engineer to squash on their end before making the PR. But what's the value in that ceremony?
Each PR being squashed to 1 commit is nice and easy to reason about. If you truly care about making more semantic history, split the work into multiple PRs.
For that matter, why merge? Rebase it on top. It's so much cleaner. It's atomic and hermetic.
With an explicit merge, you keep two histories, yet mostly care about the "main" one. With rebase, you're effectively forgetting there ever was a separate history, choosing to rewrite it when "effectively merging" (rebasing).
There's value in both; it mostly seems to come down to human preference. As long as the people who will be working with it agree, I personally don't care which one, provided it's consistently applied.
But a MERGE… is a single commit on Master, while keeping the detailed history!
- We just don’t use merges because they are ugly,
- And they’re only ugly because the visualizers make them ugly.
It’s a tooling problem. The merge is the correct implementation. (and yet I use the rebase-fast-forward).
I like merge commits because they preserve the process of the review.
I appreciate that, but I still can't square it with my world view.
GitHub or whatever tool you use preserves all the comments and feedback and change history if you ever need to go back and reference it. Such instances are limited, and in my experience it's mainly politics and not technical when this happens. The team's PR discussion itself isn't captured in git, so it's lossy to expect this type of artifact to live in git anyway. It's also much less searchable and less first class than just going back to GitHub to access this.
Ultimately, these software development artifacts aren't relevant to the production state of the software you're deploying. It feels muddled to put an incomplete version of it into your tree when the better source of truth lives outside.
> Ultimately, these software development artifacts aren't relevant to the production state of the software you're deploying. It feels muddled to put an incomplete version of it into your tree when the better source of truth lives outside.
You could make the same claim about the entire history. Git is a development tool, production just needs a working cut of the source.
The advice around PRs rings hollow; after all, they were invented by the very people who don't care, which is why they show all changes by default and hide the commits away, with commit messages buried five clicks deep. And because this profession is now filled with people who don't care, add the whole JIRA ticket and fix-version rigmarole on top: all kinds of things that show up in some PM's report but not in my console when fixing an issue that requires history.
1. It makes review much easier, which is both important because core maintainer effort is the most precious resource in open source, and because it increases the likelihood that your PR will be accepted.
2. It makes it easier for people to use the history for analysis, which is especially important when you may not be able to speak directly to the original author.
These reasons also apply in commercial environments of course, but to a lesser extent.
For me, organizing my PRs this way is second nature and only nominal effort, because I'm extremely comfortable with Git, including the following idiom which serves as a more powerful form of `git commit --amend`:
git add -p                               # stage only the hunks that belong in the earlier commit
git commit --fixup COMMIT_ID             # record them as a fixup of that commit
git stash                                # set aside the rest of the work in progress
git rebase -i --autosquash COMMIT_ID~    # fold the fixup into COMMIT_ID
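(Presumably followed by a `git stash pop` once the rebase completes, to restore whatever was set aside.)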
An additional benefit is that this methodology doesn't work well for huge changesets, so it discourages the anti-pattern of long-lived topic branches. :)

> For that matter, why merge? Rebase it on top.
Yes, that works for me although it might not work for people who aren't going to the same lengths to craft a logical history. I have no interest in preserving my original WIP commits — my goal is to create something that is easy to review.
BUT... the PR should ultimately be merged with a merge commit. Then when you have a bug you can run `git bisect` on merges only, which is good enough.
If you did not approach it through literate programming, I just prefer all of the thousands of lines at once.
Each unit of code (PR, commit, CL, whatever you want to call it) you send for review should be able to stand on its own, or at the very least not break anything, because it's not hooked into anything important yet.
(And for me as well — both the individual commits and the PR-level summary are useful.)
So, those of us who prefer commit-sized chunking don't have to do anything special to accommodate your preference.
It doesn't go the other way, of course, if you present one big commit to me. But so long as your code is well-commented (heh) and the PR isn't too huge (heh heh) and you don't intersperse file renamings (heh heh heh) or code formatting changes (heh heh heh heh) which make it extremely difficult to see what you changed... no problem!
One of the "individual commits saved me" cases was when one of these introduced a bug. They tried to cut the number of lines in half by moving conditions around, not intending to make any functional changes, but didn't account for a rare edge case. It was in a company-shared library and we didn't find it until upgrading it on one of our products a year or two after the change.
The first one definitely won't pass CI. The second one might pass CI, depending on the changes and on whether the repository is configured to treat certain code quality issues as CI failures (e.g., in my "green" commits I have a bias for duplicating code instead of making a new abstraction if I need to do the same thing in a subtly different way). The third one definitely passes, because it addresses both the test cases in the red commit and any code quality issues in the green commit (such as combining the duplicated code into a new abstraction that suits both use cases).
I've been on a maintenance team for ~5 years and this has saved me so many times in svn, where you can't squash, for weird edge cases caused by a change a decade or more ago. It's the reason I'm against blind squashes in git.
My favorite was, around 2022, discovering something that everyone believed was released in 2015, but was temporarily reverted in 2016 while dealing with another bug, that the original team forgot to re-release. If the 2016 reversion had been squashed along with the other bug, I might never have learned it was intended to be temporary.
I'm fine with manually squashing "typo" commits, but these individual ones are the kind where you can't know ahead of time if they'll be useful. It's better to keep them, and use "git log --first-parent" if you only want the overview of merges.
However, this is also just a more specific version of the general problem that long-lived, elaborate topic branches are difficult to work with.
T_T
We keep our git use extremely simple, we don't spend much time even thinking about git. The most we do with git is commit, push, and merge (and stash is useful too). Never need to rebase or get any deeper into git. Doing anything complicated with git is wasting development time. Squashing commits isn't useful to us at all. We have too much forward velocity to worry that some commit isn't squashed. If a bug does come up, we move forward and fix it, the git history doesn't really figure into our process much, if at all.
Other teams at my company are obsessive about squashing, and I am glad I don't work on those teams. They are frustrating to work with, they are notoriously slow to ship anything, and their product still breaks even with all their processes and hurdles to getting anything shipped. Teams that demand squashing simply don't impress me at all.
That said, very few people seem to be like me. Most people have no concept of what a clear commit history is. I think it's kind of similar to how most people are terrible written communicators. Few people have any clue how to express themselves clearly. The easiest way to deal with people like this is to just have them squash their PRs. This way you can at least enforce some sanity at review and then the final commit should enforce some standards.
I agree on rebasing instead of straight merging, but even that's too complicated for most people.
This exactly - if your commit history for a PR is interesting enough to split apart, then the original PR was too large and should have been split up to begin with.
This is also a team culture thing - people won't make "clean" commits into a PR if they know people aren't going to be bisecting into them and trying to build. OTOH, having people spend time prepping good commits is potentially time wasted if nobody ever looks at the PR commit history aside from the PR reviewers.
And please do not come back with “you shouldn’t do a refactor as part of a feature change.” We don’t need to add bureaucracy to avoid a problem caused by failure to understand the power of good version control.
So you can differentiate the plumbing from the porcelain.
If all the helpers, function expansions, typo corrections, and general renamings are isolated, what remains is the pure additional functional changes on its own. It makes reviewing changes much easier.
Splitting work into multiple PRs is unnecessary ritual.
I have never reasoned about git history or paid attention to most commit messages or found any of it useful compared to the contents of the change.
When I used git bisect with success it was on unknown projects. Known projects are easy to debug.
I don't argue with your point (even if I am obsessive about commit separation), but one needs to keep in mind that the reverse also applies: at the other end of the spectrum, there are devs who create kitchen-sink PRs that include, for example, refactorings, which make squashed PRs harder to reason about.
I think cause and effect are the other way around here. You write and keep work-in-progress commits without caring about changes because the history will be discarded and the team will only look at pull requests as a single unit, and write tidy distinct commits because the history will be kept and individual commits will be reviewed.
I've done both, and getting everyone to do commits properly is much nicer, though GitHub and similar tools don't really support or encourage it. If you work with repository history a lot (for example, you have important repositories that aren't frequently committed to, or maintain many different versions of the project) it's invaluable. Most projects don't really care about the history—only the latest changes—and work with pull-requests, which is why they tend to use the squashed pull request approach.
If you mean stacked PRs, yeah GitHub absolutely sucks. Gerrit a decade ago was a better experience.
Here's a simple reason: at my company, if you don't do this, you get fired.
This is basic engineering hygiene.
Those 5% shouldn't be approving PRs without it.
Those 95% shouldn't be code owners.
Nice things can be had, you just have to work for them.
git log --first-parent                   # history reads as one entry per merged branch
git bisect start --first-parent          # bisect only over those merge commits
The above gives you clean PR history in the main branch while retaining detailed work history in (merged) feature branches.
I really don't understand why I would squash when I have git merge --no-ff at my disposal...
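A minimal sketch of that style, with a hypothetical branch name:

git checkout main
git merge --no-ff feature/login     # always record a merge commit, even when a fast-forward is possible
git log --first-parent --oneline    # main then reads as one line per merged branch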
Merges lie in worse ways than rebase does. Hands down. With rebase I break my own shit. With merge I can break yours. Since your code is already merged into trunk, it has fewer eyes on it now, and it’s on me to make sure my code works with yours and not vice versa.
With a merge commit anyone can see each original path, and the merge result (with its author), and even try the three-way merge again. With a rebase, the original context is lost (the original commits were replaced by the rebased versions).
A rebase is a lossy projection of the same operation. A rebase lies about its history, a merge does not.
https://lucasoshiro.github.io/posts-en/2024-04-08-please_don...
Tick tock. You need competence in depth when you have SLIs.
For more subtle bugs, where there's no hard error but something isn't doing the right thing, yes bisect might be more helpful especially if there is a known old version where the thing works, and somewhere between that and the current version it was broken.
In general, this takes human-interactive time. Maybe not much, but generally more interactive time than is required to write the bisect test script and invoke `git bisect run ...`
The fact that it's noninteractive means that you can do other work in the meantime. Once it's done you might well have more information than you'd have if you had used the same time manually reducing it interactively by trying to reduce the scope of the bug.
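As a sketch of what such a script might look like (the build command and test runner here are hypothetical; `git bisect run` treats exit code 0 as good, 125 as "skip this commit", and any other code from 1 to 127 as bad):

#!/bin/sh
# check.sh: decide whether the currently checked-out commit is good or bad
make -j4 >/dev/null 2>&1 || exit 125          # can't build: skip this commit
./run_tests --only flaky_feature || exit 1    # test fails: bad commit
exit 0                                        # test passes: good commit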
Before you find even that, your fire drill strategy is very very important. Is there enough detail in the incident channel and our CD system for coworkers to put their dev sandbox in the same state as production? Is there enough of a clue about what is happening for them to run speculative tests in parallel? Is the data architecture clean enough that your experiments don’t change the outcome of mine? Onboarding docs and deployment process docs, if they are tight, reduce the Amdahl’s Law effect as it applies to figuring out what the bug is and where it is, which in this context is also Brooks’s Law.
Looking at the history of specific files or functions usually gives a quick idea. In modern Git, you can search the history of a specific function.
git log -L :func_name:path/to/a/file.c
You need to have a proper .gitattributes file, though:

*.py diff=python
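(As far as I know, the `diff=python` attribute selects Git's built-in hunk-header pattern for Python, which is what `git log -L :name:file` uses to find function boundaries; similar built-in drivers exist for cpp, java, and others.)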
Nothing annoys me more than a codebase with broken commits that break git bisect.
It probably would be slightly faster to jump to bisect. But it’s not in my muscle memory.
If you mean see what commit added code, that's what git-blame is for.
Bisect is for when you don't know which code, just the behavior.
/* snip */
void
Object::do_the_thing (int)
{
}
void
Object::do_the_thing (float)
{
}
/* snip */

AFAICT, git log will never be able to be told to review the history of the second version.

In any high quality codebase I’ve worked in, git bisect has been totally unnecessary. It doesn’t matter which commit the bug was introduced in when it’s simple to test the components of your code in isolation and you have useful observability to instruct you on where to look and what inputs to test with.
This has been my experience working on backend web services - YMMV wildly in different domains.
Instead of understanding the code you only need to understand the bug. Much easier!
Edit: tracking down where something was introduced can also be extremely helpful for "is this a bug or a feature" type investigations, of which I have done many. Blame is generally the first tool for this, but over the course of years the blame can get obscured.
And also a fair number of bugs filed can be traced back to someone asking for it to work that way.
I got to hear about a particularly irate customer during a formative time of my life and decided that understanding why weird bugs got in the code was necessary to prevent regressions that harm customer trust in the company. We took too long to fix a bug and then reintroduced it within a month, because the fix broke another feature and someone tried to put it back.
You can avoid it via the "just look at every usage of this function and hold the entire codebase in your head" method, of course, but looking at the commit seems a bit simpler.
Well, yes. But `git bisect` is often the quickest way to find that, in a complex system.
Huh. Thanks for pointing that out. I definitely would never have thought about the use case of "Only the end user has specific hardware which can pinpoint the bug."
It sounds like the author doesn't understand the codebase. If you're brute-forcing bug detection by bisecting commit versions to figure out where the issue is, something's already failed. In most cases you should have logs/traces/whatever that give you the info you need to figure out exactly where the problem is.
"Is sftp-ing to prod a merge?"
I get that author learned a new-to-him technique and is excited to share with the world. But to this dev, with a rapidly greying beard, the article has the vibe of "Hey bro! You're not gonna believe this. But I just learned the Pope is catholic."
Binary search is one of the first things you learn in algorithms, and in a well-managed branch the commit tree is already a sorted straight line, so it's just obvious as hell, whether or not you use your VCS to run the bisect or you do it by hand yourself.
"Hey guys, check it out! Water is wet!"
I mean, do you really not know this XKCD?
This phrase immediately turned the rest of my hair gray. I'm old enough to still think of Git as the "new" version control system, having survived CVS and Subversion before it.
At my mid 90s Unix shop, everyone had to use someone’s script which in turn called sccs. I don’t recall what it did, but I remember being annoyed that someone’s attempt to save keystrokes meant I had to debug alpha-quality script code before the sccs man page was relevant.
Adding -x to the shebang line was the only way to figure out what was really going on.
I also think that, typically, if you have to resort to bisect you are probably in the wrong place. You should have found the bug earlier, so if you do not even know when the bug was introduced:
- your test coverage isn't sufficient
- your tests are probably not actually testing what you believe they do
- your architecture is complex, too complex for you
To be clear though I do include myself in this abstract "you".
But few of us have the privilege of working on such codebases, and with people who have that kind of discipline and quality standards.
In reality, most codebases have statement coverage that rarely exceeds 50%, if coverage is tracked at all; tests are brittle, flaky, difficult to maintain, and likely have bugs themselves; and architecture is an afterthought for a system that grew organically under deadline pressure, where refactors are seen as a waste of time.
So given that, bisect can be very useful. Yet in practice it likely won't, since usually the same teams that would benefit from it, don't have the discipline to maintain a clean history with atomic commits, which is crucial for bisect to work. If the result is a 2000-line commit, you still have to dig through the code to find the root cause.
My scenario with the project was:
- no unit/E2E tests
- no error occurring, either from Sentry tracking or in the developer tools console
- many git commits to check through, as GitHub's Dependabot alerts had been busy in the meantime
I would say git bisect was a lifesaver - I managed to trace the error to my attempt to replace a file I had with the library I extracted for what it did (http://github.com/anephenix/event-emitter).
It turns out that the file had implemented a feature that I hadn't ported to the library (to be able to attach multiple event names to call the same function).
I think the other thing that helps is to keep git commits small, so that when you do discover the commit that breaks the app, you can easily find the root cause among the small number of files/code that changed.
Where it becomes more complex is when the root cause of the error requires evaluating not just one component that can change (in my case a frontend SPA), but also other components like the backend API, as well as the data in the database.
Background: We had two functions in the codebase with identical names and nearly identical implementations, the latter having a subtle bug. Somehow both were imported into a particular python script, but the correct one had always overshadowed the incorrect one - that is, until an unrelated effort to apply code formatting standards to the codebase “fixed” the shadowing problem by removing the import of the correct function. Not exactly mind bending - but, we had looked at the change a few times over in GitHub while debugging and couldn’t find a problem with it - not until we knew for sure that was the commit causing the problem did we find the bug.
Well, the example of git bisect tells you that you should know of the concept of binary search, but it's not a good argument for having to learn how to implement binary search.
Also just about any language worth using has binary search in the standard library (or as a third party library) these days. That's saner than writing your own, because getting all the corner cases right is tricky (and writing tests so they stay right, even when people make small changes to the code over time).
The most problematic line that most people seem to miss is in the calculation of the midpoint index. Using `mid = (low + high) / 2` has an overflow bug if you're not using infinite precision, but there are several other potential problems even in the simplest algorithm.
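(With fixed-width integers, the standard fix is `mid = low + (high - low) / 2`, which keeps the intermediate result in range.)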
> Using `mid = (low + high) / 2` has an overflow bug if you're not using infinite precision, but there are several other potential problems even in the simplest algorithm.
Well, if you are doing binary search on eg items you actually hold in memory (or even disk) storage somewhere, like items in a sorted array (or git commits), then these days with 64 bit integers the overflow isn't a problem: there's just not enough storage to get anywhere close to overflow territory.
A back of the envelope calculation estimates that we as humanity have produced enough memory and disk storage in total that we'd need around 75 bits to address each byte independently. But for a single calculation on a single computer 63 bits are probably enough for the foreseeable future. (I didn't go all the way to 64 bits, because you need a bit of headroom, so you don't run into the overflow issues.)
To be fair, in most computing environments, either indices don't overflow (Smalltalk, most Lisps) or arrays can never be big enough for the addition of two valid array indices to overflow, unless they are arrays of characters, which it would be sort of stupid to binary search. It only became a significant problem with LP64 and 64-bit Java.
I wrote a short post on this:
I know a lot of professional, highly paid SWEs and DevOps people who never went to college or had any formal CS or math education beyond high school math. I have a friend who figured out complexity analysis by himself on the job trying to fix up some shitty legacy code. Guy never got past Algebra in school.
(If you're about to ask how they can get jobs while new grads can't - by being able to work on really fucking terrible legacy code and live in flyover states away from big cities.)