https://docs.github.com/en/get-started/using-git/about-git-r...
> Warning - Because changing your commit history can make things difficult for everyone else using the repository, it's considered bad practice to rebase commits when you've already pushed to a repository.
A similar warning is in Atlassian docs.
Branches that people are expected to track (i.e. pull from or merge into regularly) should never rebase/force-push.
Branches that are short-lived or only exist to represent some state can do so quite often.
- the web tooling must react properly to this (as GH does mostly)
- comments done at the commit level are complicated to track
- and given the reach of tools like GH, people shooting themselves in the foot with this (even experienced ones) most likely generates a decent support load for those tools' teams
In fact, I've been using Jujutsu for ~2 years as a drop-in and nobody complained (outside of the 8 small PRs chained together). Git is great as a backend, but Jujutsu shines as a frontend.
Besides, Magit rebasing is also pretty sweet.
`jj` is the only tool that makes me use `rebase` willingly. Before, I saw it as a punishment dictated by my team's wishes :)
This comment sums up the issues better than I could: https://news.ycombinator.com/item?id=31792396
My peers are my reviewers...
If you, however, open a chain of 8 PRs, and merge them in the right order, the individual commits will be persisted in the git history. Potentially worth it if you like to have a "story" of several commits...
The funny thing about this debate for me is that i find it comes down to the committer. If the committer makes small commits in a stacked PR, where each commit is a logical unit of work, part of the "story" being told about the overall change, then i don't personally find it's that useful to stack them. The committer did the hard part, they wrote the story of changes in a logical, easy to parse manner.
If the story is a mess, where the commits are huge or out of logical order, etc - then it doesn't matter much in my view.. the PR(s) sucks either way.
I find stacked PRs to be a workflow solution to what to me is a UI problem.
Though I forget if you can even comment in the individual commits in that view. Complex multi-commit PRs have generally been a nightmare on GitHub in my experience.
When chaining commits it's possible to (for example) have a function that does THING and then have another PR that has a function that uses the first one.
It's somewhat of a PITA when the team has a no-dead-code hard rule, but otherwise it's quite manageable and invites rich feedback. The reviewer and their feedback can focus on the atomic change (in the example: the function that does THING) and not on the grand picture.
But I also often use the method of logical PRs and have even written about it: https://xlii.space/eng/pr_trick/
Pre-jujutsu, I never rebased unless my team required it. Now I do it all the time.
Pre-jj, I never had linear history, unless the team required it. Now most of my projects have linear history.
A better UI makes a huge difference.
I will try to give Jujutsu a go based on your recommendation!
If there’s a jj post on HN, people come out of the woodwork to say that git is easy and it’s crazy to suggest that anyone finds it difficult or confusing. Also people saying they’ve figured out that git is super usable if you only ever use commit, merge, and pull.
Then you have git posts where everyone talks about how hard some basic things are, how easy it is to mess up your repo, how frustrating rebase is, etc.
It’s fun to watch.
Linear history is like reality: One past and many potential futures. With non-linear history, your past depends on "where you are".
----- M -----+--- P
            /
----- D ---+
Say I'm at commit P (for present). I got married at commit M and got a dog at commit D. So I got married first and got a Dog later, right? But if I go back in time to commit D where I got the dog, our marriage is not in my past anymore?! Now my wife is sneezing all the time. Maybe she has a dog allergy. I go back in time to commit D but can't reproduce the issue. Guess the dog can't be the problem.

No. In one reality, you got married with no dog, and in another reality you got a dog and didn't marry. Then you merged those two realities into P.
Going "back in time to commit D" is already incorrect phrasing, because you're implying linear history where one does not exist. It's more like you're switching to an alternate past.
Can't that also happen with a rebase? Isn't it an (all too easy to make) error any time you have conflicting changes from two different branches that you have to resolve? Or have I misunderstood your scenario?
In my experience, when there is a bug, it’s often quicker to fix it without having a look at the past commits, even when a regression occurs. If it’s not obvious just looking at the current state of the code, asking whoever touched that part last will generally give a better shortcut, because there is so much more in that person's mind than in the whole git history.
Yes, logs and commit history can bring the "aha" insight, and on some rare occasions it’s nice to have git bisect at hand.
Maybe that’s just me, and the pinnacle of best engineers will always trust the source tree as most important source of information and starting point to move forward. :)
Had you simply rebased you would have lost the ability to separate the initial working implementation of D from the modifications required to reconcile it with M (and possibly others that predate it). At least, unless you still happen to have a copy of your pre-rebase history lying around but I prefer not to depend on happenstance.
I'd say: cleaning that up is an advantage. Why keep that around? It wouldn't be necessary if there was no update on the main branch in the meantime. With rebase you just pretend you started working after that update on main.
Recall that the entire premise is that there's a bug (the allergy). So at some point a while back something went wrong and the developer didn't notice. Our goal is to pick up the pieces in this not-so-ideal situation.
What's the advantage of "cleaning up" here? Why pretend anything? In this context there shouldn't be a noticeable downside to having a few extra kilobytes of data hanging around. If you feel compelled to "clean up" in this scenario I'd argue that's a sign you should be refactoring your tools to be more ergonomic.
It might be worthwhile to consider the question, why have history in the first place? Why not periodically GC anything other than the N most recent commits behind the head of each branch and tag?
The best method for me to stop being terrified of destructive operations in git, when I first learned it, was literally "cp -r $original-repo $new-test-repo && go-to-town". Don't know what will happen when you run `git checkout -- $file` or whatever? Copy the entire directory, run the command, look at what happens, then decide if you want to run that in your "real" repository.
Sounds stupid, maybe, but if it works, it works. Been using git for something like a decade now, and I'm no longer afraid of destructive git operations :)
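Roughly, the sandbox approach looks like this (directory and file names here are just placeholders):

  cp -r my-repo my-repo-scratch   # full copy, .git and all
  cd my-repo-scratch
  git checkout -- some-file       # try the scary command in the throwaway copy
  git status                      # inspect what actually happened
  cd ../my-repo                   # only repeat it here if you liked the result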
And still one step further, just create a new branch to deal with the rebase/merge.
Yes there are many UX pain points in using git, but it also has the great benefit of extremely cheap and fast branching to experiment.
I guess it's actually more of a mental "divider" than anything, it tends to relax people more when they can literally see that their old stuff is still there, and I think git branches can "scare" people in that way.
Granted, this is about people very new to git, not people who understand what is/isn't destructive and know that just because a file isn't on disk doesn't mean git doesn't know exactly what it is.
I've been using git almost exclusively since 2012 and feel very comfortable with everything it does and where the sharp edges are. Despite that, I still regularly use the cp -r method when doing something even remotely risky. The reason being, that I don't want to have to spend time unwinding git if I mess something up. I have the understanding and capability of doing so, but it's way easier to just cp -r and then rm -rf && cp -r again if I encounter something unexpected.
Two examples situations where I do this:
1. If I'm rebasing or merging with commits that have a moderate to high risk of merge conflicts that could be complicated. I might get 75% through and then hit that one commit where there's a dozen spots of merge conflict and it isn't straightforwardly clear which one I want (usually because I didn't write them). It's usually a lot easier to just rm -rf the copy and start over in a clean cp -r after looking through the PR details or asking the person who wrote the code, etc.
2. If there are uncommitted files in the repo that I don't want to lose. I routinely slap personal helper scripts or Makefiles or other things on top of repos to ease my workflow, and those don't ever get committed. If they are non-trivial then I usually try to keep a copy of them somewhere else in case I need to restore, but I'm not always super disciplined about that. The cp -r method helps a lot.
There are more scenarios but those are the big two that come to mind.
I think I've seen someone code a user-friendlier `git undo` front-end for it.
TL;DR is: people feel safer when they can see that their original work is safe. While just making a new branch and playing around there is safe in 99% of cases, people are more willing to experiment when you isolate what they want to keep.
Rebase is a super power but there are a few ground rules to follow that can make it go a lot better. Doing things across many smaller commits can make rebase less painful downstream. One of the most important things is to learn that sometimes a rebase is not really feasible. This isn't a sign that your tools are lacking. This is a sign that you've perhaps deviated so far that you need to reevaluate your organization of labor.
Also, since you can choose to keep the fossil repo in a separate directory, that's an additional space saver.
This is almost exactly what git does, except it's a million times faster. Every commit is one of those copies, and you can instantly jump to any one of them using git checkout.
If you like this mental model, you'll feel right at home with git. You will love git reflog.
In the same class, the fact that commits don't record which branch they were created on as metadata is a real pain point. It's always a mess to find which commits were done for which global feature/bugfix in a global gitflow process...
I'll probably look into automatically adding a suffix with the current branch name to my commit messages, but it will only work for me, not any contributors...
I also prefer Fossil to Git whenever possible, especially for small or personal projects.
From your link. The actual issue that people ought to be discussing in this comment section imo.
Why do we advocate destroying information/data about the dev process when in reality we need to solve a UI/display issue?
The amount of times in the last 15ish years I've solved something by looking back at the history and piecing together what happened (eg. refactor from A to B as part of a PR, then tweak B to eventually become C before getting it merged, but where there are important details that only resulted because of B, and you don't realize they are important until 2 years later) is high enough that I consider it very poor practice to remove the intermediate commits that actually track the software development process.
One commit per logical change. One merge per larger conceptual change. I will rewrite my actual dev process so that individual commits can be reviewed as small, independent PRs when possible, and so that bigger PRs can be reviewed commit-by-commit to understand the whole. Because I care about my reviewers, and because I want to review code like this.
Care about your goddamn craft, even just a little bit.
But the git authors are adamant that there's no convention for linearity, and somehow extended that to why there shouldn't be a "theirs" merge strategy to mirror "ours" (writing it out it makes even less sense, since "theirs" is what you'd want in a first-parent-linear repo, not "ours").
Also (and especially) it makes it way easier to revert a single feature if all the relevant commits to that feature are already grouped.
For your issue about not knowing which branch the commits are from: that's why I love merge commits and tree representation (I personally use 'tig', but git log also has a tree representation and GUI tools always have it too).
Except perhaps crappy gui options in GitHub. I really wish they added that option as a button.
I was working on a local branch, periodically rebasing it to master. All was well, my git history was beautiful etc.
Then down the line I realised something was off. Code that should have been there wasn't. In the end I concluded some automatic commit application while rebasing gobbled up my branch changes. Or frankly, I don't even entirely know what happened (this is my best guess), all I know is, suddenly it wasn't there.
No big deal, right? It's VCS. Just go back in time and get a snapshot of what the repo looked like 2 weeks ago. Ah. Except rebase.
I like a clean linear history as much as the next guy, but in the end I concluded that the only real value of a git repo is telling the truth and keeping the full history of WTF really happened.
You could say I was holding it wrong, that if you just follow this one weird old trick doctors hate, rebase is fine. Maybe. But not rebasing and having a few more squiggles in my git history is a small price to pay for the peace of mind that my code change history is really, really all there.
Nowadays, if something leaves me with a chance that I cannot recreate the repo history at any point in time, I don't bother. Squash commits and keeping the branch around forever are OK in my book, for example. And I always merge with --no-ff. If a commit was never on master, it shouldn't show up in it.
This is false.
Any googling of "git undo rebase" will immediately point out that the git reflog stores all rebase history for convenient undoing.
Shockingly, git being a VCS has version control for the... versions of things you create in it, no matter if via merge or rebase or cherry-pick or whatever. You can of course undo all of that.
And anyway, I don't want to dig this deep in git internals. I just want my true history.
Another way of looking at it is that given real history, you can always represent it more cleanly. But without it you can never really piece together what happened.
The `git log` history that you push is just that curated specific view into what you did that you wish to share with others outside of your own local repository.
The reflog is to git what Ctrl+Z is to Microsoft Word. Saying you don't want to use the reflog to undo a rebase is a bit like saying you don't want to use Ctrl+Z to undo mistakes in Word.
(Of course the reflog is a bit more powerful of an undo tool than Ctrl+Z, as the reflog is append-only, so undoing something doesn't lose you the newer state, you can "undo the undo", while in Word, pressing Ctrl+Z and then typing something loses the tail of the history you undid.)
Indeed, like for Word, the undo history expires after a configurable time. The default is 90 days for reachable changes and 30 days for unreachable changes, which is usually enough to notice whether one messed up one's history and lost work. You can also set it to never expire.
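For the record, the undo path and those expiry knobs look roughly like this (the `HEAD@{5}` entry is made up; pick the right one from the listing):

  git reflog                                   # every position HEAD has had, including the pre-rebase one
  git reset --hard HEAD@{5}                    # jump back to the entry just before the rebase
  git config gc.reflogExpire never             # optional: never expire reachable reflog entries
  git config gc.reflogExpireUnreachable never  # optional: never expire unreachable ones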
It is fine for people to prefer merge over rebase histories to share the history of parallel work (if in turn they can live with the many drawbacks of not having linear history).
But it is misleading to suggest that rebase is more likely to lose work from interacting with it. Git is /designed/ to not lose any of your work on the history -- no matter the operation -- via the reflog.
None of it is impossible, but IMHO it's a lot of excitement of the wrong kind for essentially no reward.
Yes, but only because of reflog.
Also incremental rebasing with mergify/git-imerge/git-mergify-rebase/etc is really helpful for long-lived branches that aren't merged upstream.
https://github.com/brooksdavis/mergify https://github.com/mhagger/git-imerge https://github.com/CTSRD-CHERI/git-mergify-rebase https://gist.github.com/nicowilliams/ea2fa2b445c2db50d2ee650...
I also love git-absorb for automatic fixups of a commit stack.
Don't erase history. Branch to a feature branch, develop in as many commits as you need, then merge to main, always creating a merge commit. Oftentimes, those commit messages that you're erasing with a squash are the most useful documentation in the entire project.
And if I'm using GitHub/Gitlab, I have pull requests that I can look back on which basically retain everything I want from a feature branch and more (like peer review discussion, links to passing CI tests, etc). Using the Github squash merge approach, every commit in the main branch refers back to a pull request, which makes this super nice.
Reading through git history should be my last resort to figure something out about the codebase. Important knowledge should be written somewhere current (comments, dev docs, etc). If there is a random value being appended to a url, at least a code comment explaining why so I don’t even have to git blame it. Yes, these sources of knowledge take some effort to maintain and sure, if I have a close-knit team on a smaller codebase, then git history could suffice. But larger, long-lived codebases with 100s of contributors over time? There’s just no possible way git history is good enough. I can’t ask new team members to read through thousands of commits to onboard and become proficient in the codebase (and certainly not 5x-10x that number of commits, if we are not squashing/rebasing feature branches into main. Although, maybe now an LLM can explain everything). So I really need good internal/dev documentation anyway, and I want useful git history but don’t care so much about preserving every tiny typo or formatting or other commit from every past feature branch.
Also iirc, with github, when I squash merge via the UI, I get a single squashed commit on main and I can rewrite the commit message with all the detail I like. The PR forever retains the commit history of the feature branch from before the squash, so I still have that feature branch history when I need it later (I rarely do) so I see no reason to clutter up history on main with the yucky feature branch history. And if I tend toward smaller PRs, which is so much nicer for dev velocity anyway, even squashed commits can be granular enough for things like bisect, blame, and so on.
At work though it is still encouraged to rebase, and I have sometimes forgotten to squash and then had to abort, or just suck it up and resolve conflicts from my many local commits.
Rebase only makes sense if you're making huge PRs where you need to break it down into smaller commits to have them make sense.
If you keep your PRs small, squashing it works well enough, and is far less work and more consistent in teams.
Expecting your team to carefully group their commits and have good commit messages for each is a lot of unnecessary extra work.
  git merge --squash

e.g. when clicking the big green "Merge pull request" button, it will automatically squash and merge the PR branch in.
So then I don't need to remind or wait for contributors to do a squash merge before merging in their changes. (Or worse, forget to squash merge and then I need to fix up main).
That way, I don't care if your branch contains 100 commits or 1 commit. I don't need to worry about commit messages like:
- fix 1
- fix 2
- dfljfdlkfdj
- does it work now?
Do whatever you want with your commits on your feature branch. Just make sure the title of your PR is clean and follows our formatting. Git history is always well formatted and linear.
It's the ideal solution.
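For what it's worth, the button is roughly equivalent to this manual sequence (branch name, message, and PR number are made up):

  git checkout main
  git merge --squash feature-branch      # stage the combined diff without committing
  git commit -m "Add feature X (#123)"   # one clean commit on main; the PR keeps the messy history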
Wouldn't it be enough to simply back up the branch (eg, git checkout -b current-branch-backup)? Or is there still a way to mess up the backup as well?
The "local backup branch" is not really needed either because you can still reference `origin/your-branch` even after you messed up a rebase of `your-branch` locally.
Even if you force-pushed and overwrote `origin/your-branch` it's most likely still possible to get back to the original state of things using `git reflog`.
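A rough sketch of both recovery paths, assuming the branch is literally called `your-branch` and is currently checked out:

  git reset --hard origin/your-branch   # before any force-push: restore the local branch from the remote
  git reflog show your-branch           # after a force-push: the branch's own reflog still lists the old tip
  git reset --hard your-branch@{1}      # point the branch back at where it was one move ago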
I still get confused by vscode’s changing the terms used by Git. «Current» vs «incoming» are not clear, and can be understood to mean two different things.
- Is “current” what is on the branch I am rebasing on? Or is it my code? (It’s my code)
- Is “incoming” the code I’m adding to the repo? Or is it what i am rebasing on to? (Again, the latter is correct)
I find that many tools are trying to make Git easier to understand, but changing the terms is not so helpful. Since different tools seldom change to the same words, it just clutters any attempts to search for coherent information.
This constant reinvention makes the situation even worse, because now the terminology is not only confusing, but also inconsistent across different tools.
If I have a merge conflict I typically have to be very conscious about what was done in both versions, to make sure the combination works.
I wish for "working copy" and "from commit 1234 (branch xyz)" or something informative, rather than confusing catch-all terms.
We'll be migrating to Git this year though so.
For reference, the codebase is over 20 years old, and includes binary dependencies like libraries. Makes it easy to compile old versions when needed, not so easy on the repository size...
It's inherently confusing to juggle different trees, and clearly you need some terminology for it. At least this one has become a bit of a standard.
Not once have I ever debugged a problem that benefited from rebase vs merge. Fundamentally, I do not debug off git history. Not once has git history helped debug outside of looking at the blame + offending PR and diff.
Can someone tell me when they were fixing a problem and they were glad that they rebased? Bc I can't.
I think the question was about situations where you were glad to rebase, when you could have merged instead
All the commits for your feature get popped on top of the commits you brought in from main. When you are putting together your PR you can more easily squash your commits together and fix up your commit history before putting it out for review.
It is a preference thing for sure but I fall into the atomic, self contained, commits camp and rebase workflows make that much cleaner in my opinion. I have worked with both on large teams and I like rebase more but each have their own tradeoffs
especially since every developer has a different idea of what a commit should be, with there being no clear right answer
EDIT: I may have read more into GPs post but on teams that I have been on that used merge commits we did this flow as well where we merged from main before a PR. Resolving conflicts in the feature branch. So that workflow isn’t unique to using rebase.
But using rebase to do this lets you later more easily rewrite history to cleanup the commits for the feature development.
  git log --oneline --graph --first-parent

Git won, which is why I've been using it for more than 10 years, but that doesn't mean it was ever best; it was just the most popular, and so the rest of the ecosystem makes it worth accepting the flaws (code review tools and CI systems both have much better git support - these are two critical things that will work against you if you use anything else).
What code review tools do you prefer?
Besides testing for a perf slow down, any other use cases for git bisect + rebase?
Rebasing on main loses provenance.
If you want a clean history, do it in the PR, before merging it. That way the PR is the single unit of work.
Well if I have a diff of the PR with just the changes, then the PR is already a "unit of work," regardless of merge or rebase, right?
Are you saying that you've never used git bisect? If that's the case, I think you're missing out.
It is a tragedy that more people don't know about it.
If the contributor count is high enough (or you're otherwise in a role for which "contribution" is primarily adjusting others' code), or the behaviors that get reported in bugs are specific and testable, then bisect is invaluable.
If you're in a project where buggy behavior wasn't introduced so much as grew (e.g. the behavior evolved A -> B -> C -> D -> E over time and a bug is reported due to undesirable interactions between released/valuable features in A, C, and E), then bisecting to find "when did this start" won't tell you that much useful. If you often have to write bespoke test scripts to run in bisect (e.g. because "test for presence of bug" is a process that involves restarting/orchestrating lots of services and/or debugging by interacting with a GUI), then you have to balance the time spent writing those with the time it'd take for you to figure out the causal commit by hand. If you're in a project where you're personally familiar with roughly what was released when, or where the release process/community is well-connected, it's often better to promote practices like "ask in Slack/the mailing list whether anyone has made changes to ___ recently, whoever pipes up will help you debug" rather than "everyone should be really good at bisect". Those aren't mutually exclusive, but they both do take work to install in a community and thus have an opportunity cost.
This and many other perennial discussions about Git (including TFA) have a common cause: people assume that criticisms/recommendations for how to use Git as a release coordinator/member of a disconnected team of volunteers apply to people who use Git who are members of small, tightly-coupled teams of collaborators (e.g. working on closed-source software).
I actually think that is the most useful time to use bisect. Since this is a situation where the cause isn't immediately obvious, looking through code can make those issues harder to find.
For example, take a made up messaging app. Let's call it ButtsApp. Three big ButtsApp releases happened, in order, adding the features: 1) "send messages"; 2) "oops/undo send"; and 3) "accounts can have multiple users operating on them simultaneously". All of these were deemed to be necessary features and released over successive months.
Most of the bugs that I've spent lots of time diagnosing in my career are of the interacting-known-features variety. In that example, it would be "user A logs in and sends a message, but user B logs in and can undo the sends of user A" or similar. I don't need bisect to tell me that the issue only became problematic when multi-user support was released, but that release isn't getting rolled back. The code triggering the bug is in the undo-send feature that was released months ago, and the offending/buggy action is from the original send-message feature.
Which commit is at fault? Some combination of "none of them" and "all of them". More importantly: is it useful to know commit specifics if we already know that the bug is caused by the interaction of a bunch of separately-released features? In many cases, the "ballistics" of where a bug was added to the codebase are less important.
Again, there are some projects where bisect is solid gold--projects where the bug triage/queue person is more of a traffic cop than a feature/area owner--but in a lot of other projects, bugs are usually some combination of trivially easy to root-cause and/or difficult to fix regardless of whether the causal commit is identified.
My preference for rebasing comes from delivering stacked PRs: when you're working on a chain of individually reviewable changes, every commit is a clean, atomic, deliverable patch. git-format-patch works well with this model. GitHub is a pain to use this way but you can do it with some extra scripts and setting a custom "base" branch.
The reason in that scenario to prefer rebasing over "merging in master" is that every merge from master into the head of your stack is a stake in the ground: you can't push changes to parent commits anymore. But the whole point of stacked diffs is that I want to be able to identify different issues while I work, which belong to different changes. I want to clean things up as I go, without bothering reviewers with irrelevant changes. "Oh this README could use a rewrite; let me fix that and push it all the way up the chain into its own little commit," or "Actually now that I'm here, let me update dependencies and ensure we're on latest before I apply my changes". IME, an ideal PR is 90% refactors and "prefactors" which don't change semantics, all the way up to "implemented functionality behind a feature flag", and 10% actual changes which change the semantics. Having an editable history that you can "keep bringing with you" is indispensable.
Debugging history isn't really related. Other than that this workflow allows you to create a history of very small, easily testable, easily reviewable, easily revertible commits, which makes debugging easier. But that's a downstream effect.
But the main benefit proponents of rebase cite is keeping the history clean, which also makes it easier to pinpoint an offending commit.
Personally, a clean commit history was never something that made my job easier.
> Other than that this workflow allows you to create a history of very small, easily testable, easily reviewable, easily revertible commits, which makes debugging easier. But that's a downstream effect.
I would agree that it is important for commits to go from working state to working state as you are working on a task, but this is an argument for atomic commits, not about commit history.
How do you define "clean"? I've certainly been aided by commit messages that help me identify likely places to investigate further, and hindered by commit messages that lack utility.
In the context of merge vs rebase, I think "clean" means linear, without visible parallel lines. Quality of commit messages is orthogonal. I agree with the poster that this particular flavor of "clean" (linear) has never ever helped me one bit.
I think the obsession with a linear master/main is a leftover from the time when everyone used a centralized system like svn. Git wasn't designed like that; the Linux kernel project tells contributors to "embrace merges." Your commit history is supposed to look like a branching river, because that's an accurate representation of the activity within your community.
I think having a major platform like github encourages people to treat git as a centralized version control system, and care about the aesthetics of their master/main branches more than they should. The fact the github only shows the commit history as a linear timeline doesn't help, either.
If you look at it as an investment in understanding the code base, rather than just closing the ticket as soon as possible, then the "let's see what's really going on here" approach makes more sense.
Me neither, for what it's worth. But even when, in order to figure out an issue, you have to go to the history, a linear history and a linear log never helped me either. For example, to find where a certain change happened to try to understand what the intent was, what I need is the commit and its neighbors, which works just as well with linear vs branching history because the neighbors are still going to be nearby up and down, not found via visual search.
You can write a test (outside of source control) and run `git bisect good` on a good commit and `git bisect bad` on bad one and it'll do a binary search (it's up to you to rerun your test each time and tell git whether that's a good or a bad commit). Rather quickly, it'll point you to the commit that caused the regression.
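A minimal manual session looks something like this (the "good" ref is whatever you know worked; the tag name is made up):

  git bisect start
  git bisect bad              # the current commit exhibits the bug
  git bisect good v1.2.0      # a tag/commit known to be good
  # git checks out a midpoint; run your test, then answer:
  git bisect good             # ...or: git bisect bad
  # repeat until git names the first bad commit, then:
  git bisect reset            # return to where you started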
If you rebase, that commit will be part of a block of commits all from the same author, all correlated with the same feature (and likely in the same PR). Now you know who you need to talk to about it. If you merge, you can still start with the author of the commit that git bisect found, but it's more likely to be interleaved with nearby commits in such a way that when it was under development, it had a different predecessor. That's a recipe for bugs that get found later than they otherwise would've.
If you're not using git history to debug, you're probably not really aware of which problems would've turned out differently if the history was handled differently. If you do, you'll catch cases where one author or another would've caught the bug before merging to main, had they bothered to rebase, but instead the bug was only visible after both authors thought they were done.
not true. You can use
git bisect run script-command arguments
where script-command is a ... script that will test the result of the build.

I like to keep a linear history mainly so I don't have to think very hard about tools like that.
This is especially true if you have multiple repos and builds from each one, such that you can't just checkout the commit for build X.Y.Z and easily check if the code contains that or not (you'd have to track through dependency builds, checkout those other dependencies, possibly repeat for multiple levels). If the date of a commit always reflects the date it made it into the common branch, a quick git log can tell you the basic info a lot of the time.
In this case, rebasing is nice because our changes stay in a contiguous block at the top (vs merging which would interleave them), so it's easy for me and others to see exactly where our fork diverges.
To answer your question directly, if somewhat glibly, I’m glad I rebased every time I go looking for something in the history because I don’t have to think about the history as a graph. It’s easier.
More to your point, there are times when blame on a line does not show the culprit. If you move code, or do anything else to that line, then you have to keep searching. Sometimes it’s easier to look at the entire patch history of a file. If there is a way to repeatedly/recursively blame on a line, that’s cool and I’d love to know about it.
I now manage two junior engineers and I insist that they squash and rebase their work. I’ve seen what happens if they don’t. The merges get tangled and crazy, they include stuff from other branches they didn’t mean to, etc. the squash/rebase flow has been a way to make them responsible for what they put into the history, in a way that is simple enough that they got up to speed and own it.
Except it can be the result of 10 squashed commits.
Without Squash, the main branch history becomes a timeline of your mental struggle.
With Squash, the main branch becomes a catalog of features delivered.
Nobody needs to take a trip on the struggle bus with me...
For the same reason you have your production history instead of a zip file with the code.
>while presenting a nice clean set of changes at the end of it
The set, yes, not a single squashed commit.
>The branch is a scratchpad, you should feel empowered within your own branch, rebase
Yes, amend, fixup, rebase. Make it a nice set of small commits.
Which is an argument against GitHub, not clean commit history
- Reshuffle commits into a more logical order.
- Edit commit subjects if I notice a mistake.
- Squash (merge) commits. Often, for whatever reason, pieces of a fix end up in separate commits and it's useful to collect and merge them.
I'd like to make every commit perfect the first time but I haven't managed to do that yet. git-rebase really helps me clean things up before pushing or merging a branch.
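Concretely that's an interactive rebase; a sketch with made-up hashes and subjects:

  git rebase -i origin/main
  # in the todo list that opens, reorder lines and change the verbs, e.g.:
  #   pick   1a2b3c4  Add parser for THING
  #   reword 5d6e7f8  Fix parser edge case   <- stop to fix the subject line
  #   fixup  9f8e7d6  oops, forgot a file    <- melt into the previous commit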
Interactive rebasing to rewrite local history on your working branch is incredibly useful, but also doesn't have anything to do with the "rebase vs merge" conundrum, and as long as you're not pushing to a shared branch, it doesn't have much to do with "erasing others' history".*
If you can look at a working branch (with more than a trivial addition or fix) and not feel the need to do a interactive rebase (once you know how) before making a PR, then you're either a magical 100x unicorn dev that makes every commit the perfect commit, or you cheated and made a new branch and cherry-pick-squashed your way to a clean history.
Any workflow that has a review process uses rebase. FULL STOP.
If you don't have your code reviewed and you push code to a shared repo, fine, don't use rebase if you don't want to.
Of course a readable code history aids in debugging. Just as comments and indentation do. None of these are technically necessary, but still a good idea.
Of course running the rebase command doesn't guarantee a readable commit history, but it's hard to craft commits without it. Each and every commit on linux-kernel has been rebased probably a dozen times.
Most committers don't really understand remotes, much less rebasing.
If I diff against master, I see changes in 300+ files, when I've only changed 5 (because other people have changed 300+ files.)
> Fundamentally, I do not debug off git history.
Me neither. The usual argument I hear against rebase is that it destroys history. Since I don't debug off git history, I'm quite happy to destroy it, and get back to diffing my 5-file changes against (current) master.
You’ve never run a bisect to identify which commit introduced a specific behavior?
This is when I’ve found it most useful. Having commits merged instead of squashed narrows down and highlights the root problem.
It’s a rare enough situation I don’t push for merge commits over squashed rebases because it’s not worth it, but when I have had to bisect and the commits are merged instead of squashed it is very very useful.
Those commit authors are who I noted as clear thinkers and have tracked over my career to great benefit.
I'm really sorry. Using bisect and log -S has saved me hours of code debugging.
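For anyone unfamiliar, -S is git's "pickaxe" search; the string and path below are placeholders:

  git log -S 'retry_count' --oneline    # commits whose diff adds or removes that string
  git log -S 'retry_count' -p -- src/   # same, with patches, limited to a path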
I had it enabled for years, and the one time I tried turning it off, the rebasing experience was a hundred times worse.
This is absolute nonsense. You commit your work, and make a "backup" branch pointing at the same commit as your branch. The worst case is you reset back to your backup.
The common view that a Git GUI is a crutch is very wrong, even pernicious. To me it is the CLI that is a disruptive mediation, whereas in a GUI you can see and manipulate the DAG directly.
Obligatory jj plug: before jj, I would have agreed with the top comment[1] that rebasing was mostly unnecessary, even though I was doing it in GitUp pretty frequently — I didn't think of it as rebasing because it was so natural. Now that I use jj I see that the cost-benefit analysis around git rebase is dominated by the fact that both rebasing and conflict resolution in git are a pain in the ass, which means the benefit has to be very high to compensate. In jj they cost much less, so the neatness benefit can be quite small and still be worth it. Add on the fact that Claude Code can handle it all for you and the cost is down to zero.
[0]: https://gitup.co/
  mv folder folder-old
  git clone git@github/folder

I remain terrified.
Or just do a merge and move on with your life.
lol if 1k words is "not easy" for you, i think you have bigger problems than merge vs rebase.
As an alternative, just create a new branch! `git branch savepoint-pre-rebase`. That's all. This is extremely cheap (just copy a reference to a commit) and you are free to play all you want.
You are a little more paranoid? `git switch -c test-rebase` and work over the new branch.
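And if the rebase does go wrong, recovery is just (using the savepoint name from above):

  git reset --hard savepoint-pre-rebase   # put the branch back exactly where it was
  git branch -D savepoint-pre-rebase      # delete the savepoint once you're happy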
I still prefer merge. It's simple and gets out of my way, as long as I don't care about purity of history.
Rebase and other fancy Git things have caused problems in the past so I avoid getting too complex with Git. I'm not a Git engineer, I'm a software engineer.
Merging has always just worked, and I know exactly what to expect. If there's a big hairy branch that I need to merge, and I know there will be conflicts, I create a branch from Main, merge the hairy branch into it, and see what happens. Fix the issues there, and then merge that branch to Main when everything is working. Merge is simple, and I don't have to be master of Git to get things done.
The article discusses why contributors should rebase their feature branches (pull request).
The reason they give is for clean git history on main.
The more important reason is to ensure the PR branch actually works if merged into current main. If I add my change onto main, does it then build, pass all tests, etc? What if my PR branch is old, and new commits have been added onto main that I don't have in my PR branch? Then I can merge and break main. That's why you need to update your PR branch to include the newer commits from main (and the "update" could be a rebase or a merge from main or possibly something else).
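In command form, assuming the remote is `origin` and the target branch is `main`, the two flavours of "update" are roughly:

  git fetch origin
  git rebase origin/main   # replay your PR commits on top of current main
  # ...or, if you'd rather not rewrite your commits:
  git merge origin/main    # merge current main into your PR branch instead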
The downside of requiring contributors to rebase their PR branch is (1) people are confused about rebase and (2) if your repository has many contributors and frequent merges into main, then contributors will need to frequently rebase their PR branch, and after each rebase their PR checks need to re-run, which can be time consuming.
My preference with Github is to squash merge into main[1] to keep clean git history on main. And to use merge queue[2], which effectively creates a temp branch of main+PR, runs your CI checks, and then the PR merge succeeds into main only if checks pass on the temp branch. This approach keeps super clean history on main, where every commit includes a specific PR number, and more importantly minimizes friction for contributors by reducing frequent PR rebases on large/busy repos. And it ensures main is never broken (as far as your CI checks can catch issues). There's also basically no downside for very small repos either.
1. https://docs.github.com/en/repositories/configuring-branches...
2. https://docs.github.com/en/repositories/configuring-branches...
I don't follow this. Just abort the rebase?
Everyone using git needs to accept the following. Say it out loud if you have to: no command in git can ever modify or delete a commit.
After a botched rebase your old work is one simple reset away using the reflog. Then you can have another go or reach out for help.
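In other words, something along these lines (assuming nothing has overwritten ORIG_HEAD since the rebase):

  git rebase --abort           # if the rebase is still in progress, just bail out
  git reset --hard ORIG_HEAD   # if it already finished: ORIG_HEAD still points at the pre-rebase tip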
So do take backups, of the whole local repository.
At the beginning of using git I used to keep dozens of copies around, if the repository is not huge it's trivial (and compressed they typically don't take up much more space than a single copy).
That saved my ass several times
I have never in 15+ years of using git daily encountered one of these "weird states". Because, like I said, no command in git can modify or delete a commit.
The only real mistake you can make is hard reset with uncommitted changes in your working dir. Other than that it's down to git bugs, which I have never encountered.
I think I did run into other bugs that did appear to have messed up the repository.
> like I said, no command in git can modify or delete a commit.
Well, you can end up losing commits by playing with the reflogs or deleting branches (if you haven't checked out the branch in the last 30 days (the default), its commits might get deleted fairly quickly).
Garbage collection is a thing, of course, but it could be completely disabled if necessary. Most people don't bother because the defaults are sensible in virtually all cases.
Almost all git problems people have are because they don't know about the reflog and don't understand that git is fundamentally append only.
Weird, to me it happened several times
> Garbage collection is a thing, of course, but it could be completely disabled if necessary
Yeah, I do set reflogs to not expire
When working in a short-lived branch, I like to rebase. I usually get no or simply easy-to-solve conflicts. I like my small and numerous commits stacked on top of the current develop. Regardless or whether we squash or not.
For long-lived branches (and technically for hard merges, though I've been using rerere more and more) merge is a better option.
What kills bisect, IMO, is large commits or commits with multiple subjects/goals. That's the reason I don't like squashed PRs.
The way to do this is pull requests on the remote.
Then if you screw up, even several steps later when it's hard to un-rebase a portion, you just go back to the original. No need to dig through the ever-confusing reflog format, just use branches. For really gnarly ones, it also means you can easily compare the two diffs to see if you missed anything.
Once you're fully happy with it, you can push that new one, or just go back to your original and `git reset --hard after-rebase-branch` to adopt the new history.
The struggle is real.
I generally have a "show branches by age" script to help that a bit
That is until I started using graphite, that solved the problem completely for me. The only trick is to never mix graphite and git history editing.
Range-diff takes two commit ranges and compares their commits pairwise, which is perfect for rebases, since after the rebase all commits still exist and should be mostly identical, just at some other place in the history.
Use it like `git range-diff main..origin/mybranch main..mybranch` to compare the local, rebased branch with the upstream branch.
This lets you easily verify that either nothing changed or that any conflicts were resolved well.
You lose your whole reflog and all the unreachable commits that way, and so some errors you might make will lead to unrecoverable losses..!
If you want to begin from a new clone from time to time, fine, but make copies of the old local repository first, and keep them around! You're bound to lose work occasionally otherwise.
And why even mess around with a remote repository and force pushes? To safeguard yourself from rebase problems you can simply take a local copy of the repository..!
That's of course when it's not a huge repository
An alternative to keep in mind is to use local clones, updating which will take less time than taking a full copy.
You forgot "your backup local branch still exists". Branching in git is effectively free, just duplicate the one you're worried about rebasing: if the rebase goes wonky enough that you can't cleanly abort it, you just start over on the backup branch, making another duplicate and trying again.
It also gives you a nice reference to compare the results of the rebase with your original intentions for the code.
But why push? Rebasing a branch doesn't affect any other branches, so a local backup branch is just as safe as a branch on the remote fork. You shouldn't ever need to nuke your clone unless you're trying something silly like rebasing main (without a local backup branch!), or doing more than just rebasing, like messing with the reflog.
Also, I don't think it's rebasing that scares people, it's the force push that scares people. There is so much out there saying "never force push or all the trees in a 500 mile radius will spontaneously combust" without explaining the nuances of concurrent work on shared branches vs a personal fork used for making contributions upstream, nor the "safety" of force-with-lease. For the latter (fork for contrib), just yolo it: you're the only one working on your fork!
- clean integration branch histories (series of merge commits)
- merge commits can contain metadata (topic-level descriptions, trailers for who reviewed, merge request links, test results, etc.)
- you can be (pretty) sure that `git bisect --first-parent` will not run into any compilation problems (logical conflicts occur, but are fairly rare; use merge queues to be sure)
- none of the "you merged main into your topic" "backwards merges" to deal with too
Merging and rebasing each have their pros and cons, so why not use the pros of each and mitigate a lot of the cons at the same time.
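In practice that per-topic view is just the --first-parent options (branch name assumed; the bisect flag needs a reasonably recent git):

  git log --oneline --first-parent main   # one line per merged topic, no noise from inside the topics
  git bisect start --first-parent         # bisect over the merge commits on main only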
coffeebeqn•3w ago
Understanding of local versus origin branch is also missing or mystical to a lot of people and it’s what gives you confidence to mess around and find things out
Akranazon•3w ago
Replaying commits one-by-one is like a history quiz. It forces me to remember what was going on a week ago when I did commit #23 out of 45. I'm grateful that git stores that history for me when I need it, but I don't want it to force me to interact with the history. I've long since expelled it from my brain, so that I can focus on the current state of the codebase. "5 commits ago, did you mean to do that, or can we take this other change?" I don't care, I don't want to think about it.
Of course, this issue can be reduced by the "squash first, then rebase" approach. Or judicious use of "git commit --amend --no-edit" to reduce the number of commits in my branch, therefore making the rebase less of a hassle. That's fine. But what if I didn't do that? I don't want my tools to judge me for my workflow. A user-friendly tool should non-judgmentally accommodate whatever convenient workflow I adopted in the past.
If Git says, "oops, you screwed up by creating 50 lazy commits, now you need to put in 20 minutes figuring out how to cleverly combine them into 3 commits, before you can pull from main!", then I'm going to respond, "screw you, I will do the next-best easier alternative". I don't have time for the judgement.
teaearlgraycold•3w ago
I don’t think the tool is judgmental. It’s finicky. It requires more from its user than most tools do. Including bending over to make your workflow compliant with its needs.
nicoburns•3w ago
You can also just squash them into 1, which will always work with no effort.
DHRicoF•3w ago
Sometimes it's ok to work like this, but asking git not to be judgmental is like saying your roomba should accommodate you by not asking you to empty its dust bag.
PaulDavisThe1st•3w ago
I had a branch that lived for more than a year, ended up with 800+ commits on it. I rebased along the way, and predictably the final merge was smooth and easy.
layer8•3w ago
I rebase often myself, but I don’t understand the logic here.
PaulDavisThe1st•3w ago
2) small conflicts when rebasing the long lived branch on the main branch
if instead I delayed any rebasing until the long lived branch was done, I'd have no idea of the scale of the conflicts, and the task could be very, very different.
Granted, in some cases there would be no or very few conflicts, and then both approaches (long-lived branch with or without rebases along the way) would be similar.
just6979•3w ago
"If you do a single rebase at the end, there is nothing to remember, you just get the same accumulated conflicts you also collectively get with frequent rebases."
There is _everything_ to remember. You no longer have the context of what commits (on both sides) actually caused the conflicts, you just have the tip of your branch diffed against the tip of main.
"Hence I don’t understand the benefit of the latter in terms of avoiding conflicts."
You don't avoid conflicts, but you move them from the future to the present. If main is changing frequently, the conflicts are going to be unavoidable. Why would you want to wait to resolve them all at once at the very end? When you could be resolving them as they happen, with all the context of the surrounding commits readily at hand. Letting the conflicts accumulate to be dealt with at the end with very little context just sounds terrifyingly inefficient.
just6979•3w ago
It's just like doing merges _from_ main during the lifetime of the branch. If you don't do any, you'll likely have lots of conflicts on the final merge. If you do it a lot, the final merge will go smooth, but your history will be pretzels all the way down.
In other words, frequent rebasing from main moves any conflicts from the future to "right now", but keeps the history nice and linear, on both sides!
joshmarlow•3w ago
And of course, making it easier to rebase makes it more likely I will do it frequently.
BeetleB•3w ago
I always do long lived feature branches, and rarely have issues. When I hear people complain about it, I question their workflow/competence.
Lots of commits is good. The thing I liked about mercurial is you could squash, while still keeping the individual commits. And this is also why I like jj - you get to keep the individual commits while eliminating the noise it produces.
Lots of commits isn't inherently bad. Git is.
Groxx•3w ago
While I agree this is a rather severe downside of rebase... if you structure your commits into isolated goals, this can actually be a very good thing. Which is (unsurprisingly) what many rebasers recommend doing - make your history describe your changes as the story you want to tell, not how you actually got there.
You don't have to remember commit #23 out of 45 if your commit is "renamed X to Y and updated callers" - it's in the commit message. And your conflict set now only contains things that you have to rename, not all the renames and reorders and value changes and everything else that might happen to be nearby. Rebase conflicts can sometimes be significantly smaller and clearer than merge conflicts, though you have to deal with multiple instead of just one.
echelon•3w ago
I know a lot of people want to maintain the history of each PR, but you won't need it in your VCS.
You should always be able to roll back main to a real state. Having incremental commits between two working stages creates more confusion during incidents.
If you need to consult the work history of transient commits, that can live in your code review software with all the other metadata (such as review comments and diagrams/figures) that never make it into source control.
fc417fc802•3w ago
Well there's your problem. Why are you assuming there are non-working commits in the history with a merge based workflow? If you really need to make an incremental commit at a point where the build is broken you can always squash prior to merge. There's no reason to conflate "non-working commits" and "merge based workflow".
Why go out of the way to obfuscate the pathway the development process took? Depending on the complexity of the task the merge operation itself can introduce its own bugs as incompatible changes to the source get reconciled. It's useful to be able to examine each finished feature in isolation and then again after the merge.
> with all the other metadata (such as review comments and diagrams/figures) that never make it into source control.
I hate that all of that is omitted. It can be invaluable when debugging. More generally I personally think the tools we have are still extremely subpar compared to what they could be.
Izkata•3w ago
Having worked on a maintenance team for years, this is just wrong. You don't know what someone will or won't need in the future. Those individual commits have had extra context that have been a massive help for me all sorts of times.
I'm fine with manually squashing individual "fix typo"-style commits, but just squashing the entire branch removes too much.
lanyard-textile•3w ago
If those commits were ready for production, they would have been merged. ;)
Don't put a commit on main unless I can roll back to it.
eeperson•3w ago
I strongly disagree. Losing this discourages swarming on issues and makes bisect worse.
> You should always be able to roll back main to a real state. Having incremental commits between two working stages creates more confusion during incidents.
If you only use merge commits this shouldn't be any more difficult. You just need to make sure you specify that you want to use the first parent when doing reverts.
noisem4ker•3w ago
https://0x5.uk/2021/03/15/github-rebase-and-squash-considere...
jillesvangurp•3w ago
I tend to rebase my unpushed local changes on top of upstream changes. That's why rebase exists. So you can rewrite your changes on top of upstream changes and keep life simple for consumers of your changes when they get merged. It's a courtesy to them. When merging upstream changes gets complicated (lots of conflicts), falling back to merging gives you more flexibility to fix things.
The resulting pull requests might get a bit ugly if you merge a lot. One solution is squash merging when you finally merge your pull request. This has the downside that you lose a lot of history and context. The other solution is to just accept that not all change is linear and that there's nothing wrong with merging. I tend to bias toward that.
If your changes are substantial, conflict resolution caused by your changes tends to be a lot easier for others if they get lots of small commits, a few of which may conflict, rather than one enormous one that has lots of conflicts. That's a good reason to avoid squash merges. Interactive rebasing is something I find too tedious to bother with usually. But some people really like those. But that can be a good middle ground.
It's not that one is better than the other. It's really about how you collaborate with others. These tools exist because in large OSS projects, like Linux, where they have to deal with a lot of contributions, they want to give contributors the tools they need to provide very clean, easy to merge contributions. That includes things like rewriting history for clarity and ensuring the history is nice and linear.
fc417fc802•3w ago
I think it should be possible to assign different instances of the repository different "roles" and have the tooling assist with that. For example. A "clean" instance that will only ever contain fully working commits and can be used in conjunction with production and debugging. And various "local" instances - per feature, per developer, or per something else - that might be duplicated across any number of devices.
You can DIY this using raw git with tags, a bit of overhead, and discipline. Or the github "pull" model facilitates it well. But either you're doing extra work or you're using an external service. It would be nice if instead it was natively supported.
This might seem silly and unnecessary but consider how you handle security sensitive branches or company internal (proprietary) versus FOSS releases. In the latter case consider the difficulty of collaborating with the community across the divide.
pamcake•3w ago
This is one way to see things and work and git supports that workflow. Higher-level tooling tailored for this view (like GitHub) is plentiful.
> There's no reason a local copy should have the exact same implementation as a repository
...Except to also support the many git users who are different from you and in different contexts. Bending git's API to your preferences would make it less useful, harder to use, or not even suitable at all for many others.
> git made a wrong turn in this, let's just admit it.
Nope. I prefer my VCS decentralized and flexible, thank you very much. SVN and Perforce are still there for you.
Besides, it's objectively wrong calling it "a wrong turn" if you consider the context in which git was born and got early traction: Sharing patches over e-mail. That is what git was built for. Had it been built your way (first-class concepts coupled to p2p email), your workflow would most likely not be supported and GitHub would not exist.
If you are really as old as you imply, you are showing your lack of history more than your age.
onraglanroad•3w ago
That's exactly what Git is. You have your own local copy that you can mess about with and it's only when you sync with the remote that anyone else sees it.
just6979•3w ago
Who is forcing you to keep a local copy in the exact same configuration as upstream? Nothing at all is stopping you from applying your style to your repos. You're saying that not being opinionated about project structure is a "wrong turn"? I don't think so.
I think most "ground truth" open-source repos do end up operating like this. They're not letting randos push branches willy-nilly and kick off CI. Contributors fork it, work on their own branches, open a PR upstream (hence that name: PULL Request), reviews happen, nice clean commits get merged to the upstream repository that is just being a repository on a server somewhere running CI.
recursive•3w ago
However, even better for me (and my team) is squash on PR resolve.
jghn•3w ago
that's not a value judgement in either direction, both initially simpler and longterm simpler have their merits.
CJefferson•3w ago
I've had failures while git bisecting, hitting commits that clearly never compiled, because I'm probably the first person to ever check them out.
Marsymars•3w ago
e.g. I'm currently working on a substantial framework upgrade to a project - I've pulled every dependency/blocker out that could be done on its own and made separate PRs for them, but I'm still left with a number of logically independent commits that by their nature will not compile on their own. I could squash e.g. "Update core framework", "Fix for new syntax rules" and "Update to async methods without locking", but I don't know that reviewers and future code readers are better served by that.
wonger_•3w ago
Where you have two repositories, one "polished" where every commit always passes, and another for messier dev history.
chuckadams•3w ago
If you have expensive e2e tests, then you might want to keep a 'latest' tag on main that's only updated when those pass.
astrobe_•3w ago
Sometimes people look sort of "superstitious" to me about Git. I believe this is caused by learning Git through web front-ends such as Github, GitLab, Gitea etc., that don't tell you the entire truth; desktop GUI clients also let the users only see Git through their own, more-or-less narrow "window".
TBH, sometimes Git can behave in ways you don't expect, like seeing conflicts when you thought there wouldn't be (but up to now never things like choosing the "wrong" version when doing merges, something I did fear when I started using it a ~decade ago).
However one usually finds an explanation after the fact. Something I've learned is that Git is usually right, and forcing it to do things is a good recipe to mess things up badly.