And also - melding the "changed twice" (or thrice...) mutations into a single commit is a brilliant isolation of a subtle common pattern.
Whenever I sit down to code with a purpose, I'll make a branch for that purpose: git checkout -b wip/[desc]
When I make changes that I think will be a "keyframe" commit, I use: git add . && git commit -m "wip: desc of chunk" (like maybe "wip: readme")
If I make refinements, I'll do: git add . && git commit --amend
and when I make a new "keyframe" commit: git commit -m "wip: [desc 2]"
and still amend fixes.
Occasionally I'll make a change that I know fixes something earlier (i.e. an earlier "keyframe" commit), but I won't remember it later. So I'll commit it separately: git add . && git commit -m "fixup: [desc]", with enough of the description to identify which keyframe commit should be amended.
at the end I'll do a git rebase -i main and see something like:
123 wip: add readme (it's already had a number of amends made to it)
456 wip: add Makefile (also has had amendments)
789 wip: add server (ditto)
876 fixup: readme stuff
098 fixup: more readme
543 fixup: makefile
and I'll use git rebase -i to change it: reword the good commits, and move each fixup directly under the commit it edits. Then I'll have a nice history to fast-forward into main.
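The "keyframe + amend" half of that loop can be tried in a throwaway repo. A minimal sketch; all file names and commit messages below are invented for demonstration:

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q -b main
git config user.email demo@example.com; git config user.name Demo
git commit -q --allow-empty -m "initial"

# Branch for the purpose at hand.
git checkout -q -b wip/readme

# First "keyframe" commit.
echo draft > README
git add .; git commit -qm "wip: readme"

# A refinement gets folded into the keyframe rather than adding noise.
echo better > README
git add .; git commit -q --amend --no-edit

# Next keyframe starts a fresh commit.
echo all: > Makefile
git add .; git commit -qm "wip: makefile"

git log --format=%s
```

The history stays at one commit per "keyframe" no matter how many refinements get amended in.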
`git commit --fixup` lets you attach a new commit to a previous commit you specify, and then automatically (or semi-manually, depending on settings) squash it during a rebase.
$ git commit frobnicator/ -m "refactor the frobnicator"
[ more work ]
$ git commit eschaton/ -m "immanentize the eschaton"
[ oops, missed a typo ]
$ git commit frobnicator/ --fixup :/frobnic
0: https://stackoverflow.com/a/52039150

The other post about being able to do it on a substring match sounds way more ergonomic though, I'll have to try that!
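The `--fixup` plus `--autosquash` flow can be exercised end to end in a throwaway repo. A sketch, assuming git 2.28+ for `init -b`; all file and commit names here are made up:

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q -b main
git config user.email demo@example.com; git config user.name Demo

echo one > frobnicator.txt
git add .; git commit -qm "refactor the frobnicator"

echo two > eschaton.txt
git add .; git commit -qm "immanentize the eschaton"

# Fix belongs to the earlier commit; :/refactor resolves to the most
# recent commit whose message matches, producing a "fixup!" commit.
echo fix >> frobnicator.txt
git add .
git commit -q --fixup ':/refactor'

# Accept the generated todo list as-is; --autosquash moves the
# fixup! commit under its target and squashes it.
GIT_SEQUENCE_EDITOR=: git rebase -q -i --autosquash --root

git log --format=%s
```

After the rebase only the two "real" commits remain, with the fix folded into the frobnicator one.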
The name of the algorithm is “gather”, by Sean Parent and Marshall Clow.
https://listarchives.boost.org/Archives/boost/2013/01/200366...
I gotta say, I don't see the greatness any more than most of the repliers in that Boost thread — it's just two stable_partitions in a row.
"[...] Or is there some optimization that gather provides over (stable_)partition? —— Nope. [...]"
It may be just two stable partitions, but “just” is doing a lot of work there. The algorithm becomes obvious once someone has identified it.
Sadly the 25-line original code isn't presented; the code that is presented is the 5-line replacement using the STL's `find_if` and `rotate`. Bjarne sketches the idea that those five lines can be further condensed into two lines with the non-STL `gather` algorithm:
auto dest = std::find_if(v.begin(), v.end(), contains(p));
stdx::gather(v.begin(), dest, v.end(), [&](const auto& elt) { return &elt == &*source; });
But this is overkill — replacing an O(distance(source,dest)) non-allocating rotate with an O(v.size()) potentially-allocating stable_partition — and more importantly it re-complicates the code.

Now, I think part of his point is that `stable_partition` is "simpler" than `gather` only because it's in the STL. If we add `gather` to the STL too and everyone learns what it means, then there's no objection to using `gather` for "simplification" like this: it would be a straightforward simplification in almost the same way that `std::equal_range(first, last, x)` is a straightforward simplification of `std::make_pair(std::lower_bound(first, last, x), std::upper_bound(first, last, x))`.
The "almost" is that actually there is an algorithmic advantage to `std::equal_range`: when you're looking for the upper bound, you don't have to consider any of the elements to the left of the lower bound you already found. You get a (very slight) performance boost by using the combined `equal_range` algorithm. `gather`, on the other hand, has no such advantage; and (as we've seen) has a (very slight) performance disadvantage when compared to the `rotate` that Bjarne's correspondent's code actually required.
We're not talking about replacing 25 lines of bespoke code with 1 line of Boost `gather`; we're talking about replacing 2 lines of STL `stable_partition` with 1 line of Boost `gather`. The former is probably worth it. The latter is not.
This is only true at the textual level.
Semantically, re-shuffling commits like this can still cause conflicts, i.e. it can break your tests: not at the end, but in the intermediate commits.
It's enough for the tests to pass at each merge point.
In general, your CI/CD should make sure that each commit that appears in the 'public' history of main builds and passes tests.
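One way to approximate that check locally is to run the test suite at every commit in the range, oldest first. A self-contained sketch: the repo, the commits, and the `test.sh` script here are all invented for demonstration; real CI would iterate over something like `main..HEAD` instead:

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q -b main
git config user.email ci@example.com; git config user.name CI

# Three commits, each carrying its own (trivially passing) test script.
for n in 1 2 3; do
  printf '# version %s\nexit 0\n' "$n" > test.sh
  git add .; git commit -qm "commit $n"
done

# Check out every commit in order and run the tests at that point.
for c in $(git rev-list --reverse HEAD); do
  git checkout -q "$c"
  sh test.sh || { echo "tests fail at $c"; exit 1; }
done
git checkout -q main
echo "all commits pass"
```

The same idea can be expressed as `git rebase -i --exec 'make test' main`, which inserts a test run after every commit it replays.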
Let it sit in draft until you're ready for CI, which you've verified locally or run manually while in draft; then convert it to reviewable, review, maybe tweak, and merge.
Atomic commits would put me at risk of losing unfinished work, or dead-ends with no record of them. That seems inefficient.
Most pull requests should probably be squashed to appear as a single commit in the final history. But you should have the option of leaving history intact, when you want that, and then your CI/CD should run the checks as above.
If you never look at individual commits in your history, you might as well squash them.
FFS for short, which has suitably disgruntled other exclamatory meanings.
i.e. git-delta -n 2 = 'what changed twice'
or, if it's just 'what changed twice' in every case, then simply 'git-delta-delta'
I had a good idea of what it did before reading the article, it is a long name but not Java-long, and none of the suggestions so far are clear to me, even after reading the article.
The only somewhat confusing part is the "twice", because it can be more than twice. But if you think about it, if it has been changed more than twice, it had to be changed twice at some point, so it is not totally wrong.
By the time I finished writing it I had come up with a less crappy name, but I thought I'd leave the question in the post anyway.
what-changed-once-more

‘squash-candidates’ would address all of that.
By my usual naming conventions, this one would be `git-repeatedly-changed`.
If I remember early git days correctly, that's how git was implemented: a bunch of separate utilities working together on the database which is the .git folder.
0. https://git-scm.com/docs/git.html#_low_level_commands_plumbi...
I can see the argument in favor of `git-` also.
But I think I'd prefer `git-changed-twice` to be a wrapper that takes a reflist argument, and runs `git-log --stat reflist | what-changed-twice`.
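For illustration, the core of such a pipeline can be sketched with standard text tools. This is not the real tool: it uses `git log --name-only` rather than `--stat`, and the repo and file names below are invented. It lists files touched by more than one commit, i.e. the squash candidates:

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q -b main
git config user.email demo@example.com; git config user.name Demo

echo 1 > readme;   git add .; git commit -qm "wip: readme"
echo 1 > makefile; git add .; git commit -qm "wip: makefile"
echo 2 > readme;   git add .; git commit -qm "fixup: readme"

# Emit one file name per commit it appears in, count occurrences,
# and keep only files that changed in more than one commit.
git log --name-only --format= \
  | sed '/^$/d' | sort | uniq -c \
  | awk '$1 > 1 { print $2 }'
```

Here `readme` is printed (it appears in two commits) while `makefile` is not. A `git-changed-twice` wrapper would take a revision range and feed the same log output to the filter.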
↑ newer
D* fixes bug in crypto.py
C
B* rewrites crypto.sh in Python
A
0 last month’s release
↓ older
In this example, if the release needs the fix in D you’ll also need to cherry-pick the rewrite in B.

You get false positives and false negatives: if B fixed a comment typo, for example, it’s not really a dependency; and if C updated a module imported by the new code in D, you’d miss it. (For the latter, in Python at least, you can build an import DAG with ast. It’s a really useful module and is incredibly fast!)
So I would say the author’s tool is really multiple tools:
1/ build a dependency graph between commits based on file changes in a range of commits;
2/ automate the reordering and squashing of dependent commits on a private dev branch;
3/ automate cherry-picking commits onto a proposed release branch (which is basically the same as git-rebase -i); and
4/ build a dependency graph based on external analysis (in my example, Python module imports) rather than / as well as file changes.
Their use case is (1) and (2), (3) is a similar but slightly different tool to (2), and (4) is a language specific nicety that goes beyond the scope of simple git changes for, arguably, diminished returns.
Instead of:
```
calendar/seasons.blog 196 40 d1

196 196e749
40 40c52f4
d1 d142598
```

the tool should simply display:

```
calendar/seasons.blog 196e749 40c52f4 d142598
```
That's it!
The second table only complicates the output.
PS:
`what-changed-twice` is a good name.
kruador•4mo ago
Second, Windows doesn't really have a 'fork' API. Creating a new process on Windows is a heavyweight operation compared to *nix. As such, scripts that repeatedly invoke other commands are sluggish. Converting them to C and calling plumbing commands in-process has a radical effect on performance.

Git for Windows is more of a maintained fork than a real first-class platform.
Also, I believe it's a goal to make it possible to use Git as a library rather than as an executable. That's hard to do if half the logic is in a random scripting language. Library implementations exist - notably libgit2 - but it can never be fully up to date with the original. Search for 'git libification'.
Many IDEs started their Git integration with libgit2, but subsequently fell foul of things that libgit2 can't do or does inconsistently. Therefore they fall back on executing `git` with some fixed-format output.
1718627440•4mo ago
You can still wrap the interface to the executable in a library.