I absolutely can't imagine not using some kind of tool like this. Feels as vital as VCS to me now.
I'll also say I have absolutely 0 regrets about moving from Nix to Mise. All the common tools we want are available, it's especially easy to install tools from pip or npm and have the environments automanaged. The docs are infinity times better. And the speed of install and shell sourcing is, you guessed it, much better. Initial setup and install is also fantastically easier. I understand the ideology behind Nix, and if I were working on projects where some of our tools weren't pre-packageable or had weird conflicting runtime lib problems I'd get it, but basically everything these days has prebuilt static binaries available.
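To give a flavor of that workflow (tool names and versions here are purely illustrative, not a recommendation):

$ mise use python@3.12 node@22     # pins versions in mise.toml
$ mise use npm:prettier pipx:ruff  # tools from npm/pip, environments managed by mise
$ mise install                     # install everything the checkout declares

The npm:/pipx: backends are what make the pip/npm case so painless: each tool lands in its own mise-managed environment instead of polluting a global one.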
There's also jbadeau/mise-nix that lets you use flakes in mise, but I figured at that point I may as well just use flake.nix.
https://github.com/jdx/mise/discussions/4720#discussioncomme...
or see the first comment on this thread for a way to explicitly specify where to find the binaries for each platform:
https://github.com/jdx/mise/discussions/4720#discussioncomme...
Having these kinds of "eject" options is one of the reasons I really appreciate Mise. Not sure this would work for you, but I'd rather be able to do this than have to manage/support everyone on my dev team installing and maintaining Nix.
Lots of package combinations didn’t work and I was not skilled enough to figure out why.
The error messages are terrible.
They don’t provide enough versions of packages. I want Python 3.10.4 exactly, but Nix packages by default only provide a single 3.10.x, not every patch release.
I would love to use Nix everywhere, but it’s just too cumbersome for me.
And the error messages are ... well, yeah. I don't find the Nix language as awful as some do, but it's still a functional language by and for functional programmers, and because evaluation is lazy, a lot of errors surface in very non-obvious places. Ultimately Nix could use a declarative config format on top of everything, but I'd rather they ironed out the other issues first. Guix seems to be a bit further along there, but its platform options are more limited.
nix run github:user/repo/commit
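For a concrete (illustrative) example, running GNU hello from a pinned nixpkgs ref:

$ nix run github:NixOS/nixpkgs/nixos-24.05#hello
Hello, world!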
There is no need to keep anything around, or roll your own Nix equivalent; you can just look up the output by commit.

$ git describe --dirty
v1.4.1-1-gde18fe90-dirty
The format is <the most recent tag>-<the number of commits since that tag>-g<the short git hash>-<dirty, but only if the repo is dirty>. If the repo isn't dirty, then the hash you get excludes that part:
$ git describe --dirty
v1.4.1-1-gde18fe90
If you're using lightweight tags (the default) and not annotated tags (with messages, signatures, etc.) you may want to add `--tags`, because otherwise it'll skip over any lightweight tags.

The other nice thing about this is that, if the repo is not -dirty, you can use the output from `git describe` in other git commands to reference that commit:
$ git show -s v1.4.1-1-gde18fe90
commit de18fe907edda2f2854e9813fcfbda9df902d8f1 (HEAD -> 1.4.1-release, origin/HEAD, origin/1.4.1-release)
Author: rockowitz <rockowitz@minsoft.com>
Date: Sun May 28 17:09:46 2023 -0400
    Create codacy.yml

Also, if you don't feel ready to commit to tagging your repository, you can start with the `--always` flag, which falls back to just the short commit hash.
The article's script isn't far from `git describe --always --dirty`, which can be a good place to start, and then it gets better as you start tagging.
https://nikhilism.com/post/2020/windows-deterministic-builds... is a good resource on some of the other steps needed. It's... a non-trivial journey :)
had to be said
(FYI, I just used something like the solution from the article, with the hash embedded in the binary image to be burned to ROM masks. The gaps in toolchain versioning, and not building from dirty checkouts, can be managed with self-discipline / internal checks.)
You need to control every single library header version you are using outside your source (stdlibs, OS headers, third-party code), and have a strategy to deal with rand/datetime values that can end up in the binary.
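For the datetime part specifically, the usual mitigation is the SOURCE_DATE_EPOCH convention from the Reproducible Builds project: pin embedded timestamps to the last commit instead of the wall clock. A sketch, assuming a git checkout and tools that honor the variable:

$ export SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)
$ make   # GCC/Clang substitute this for __DATE__/__TIME__; many archivers and packagers honor it too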
I do this git tags thing with my projects - it helps immensely if the end user can hover over the company logo and get a tooltip with the current version, git tag and hash, and any other information relevant to the build.
Then, if I need to triage something specific, I un-archive the virtualized build environment, and everything that was there in the original build is still there.
This is a very handy method for keeping large code bases under control, and has been very effective over the years in going back to triage new bugs found, fixing them, and so on.
The Build Machine would be used to make The Gold Master Disc. A physical DVD that would be shipped to the publisher to be reproduced hopefully millions of times. Getting The Gold Master Disc to a shippable state would usually take weeks because it involved burning a custom disc format for each build and there was usually no way to debug other than watching what happened on the game screen.
When The Gold Master Disc was finally finalized, The Build Machine would be powered down, unplugged, labeled "This is the machine that made The Gold Master Disc for Game XYZ. DO NOT DISCARD. Do not power on without express permission from the CTO." and archived in the basement forever. Or, until the company shut down. Then, who knows what happens to it.
But, there was always a chance that the publisher or Sony would come back and request to make a change for 1.0.1 version because of some subtle issue that was found later. You don't want to take any chances starting the build process over on a different machine. You make the minimal changes possible on The Build Machine and you get The Gold Master Disc 1.0.1 out ASAP.
The nicest variant was the inclusion of a "build laptop" in the budget for the projects, so that there was a dedicated, project-specific laptop which could be archived easily enough, serving as the master build machine. In one company, the 'Archive Room' was filled with shelves of these laptops, one for each project, and they could be checked out by the devs, like a library, if ever needed. That was very nice.
For many types of projects, this is very effective - but it does get tripped up when you have to have special developer tooling (or .. grr .. license dongles ..) attached before the compiler will fire up.
That said, we must surely not overlook the number of times that someone finds a "Gold Master Disc" with a .zip file full of sources out there, too. I forget some of the more famous examples, but it is very fun to see accidentally shipped full sources for projects, on occasion, because a dev wanted to be sure the project was future proof, lol.
Incidentally, hassles around this issue are one of the key factors in my personal belief that almost all software should be written in scripting languages, running in a custom engine .. reducing the loss surface when, 10 years later, someone decides the bug needs to be fixed ..
Here's a talk from 2024: https://debconf24.debconf.org/talks/18-reproducible-builds-t...
Several distros are above the 90% mark of all packages being byte-for-byte reproducible, and one or two have hit the 99% mark.
Simply incredible.
Explains F-Droid's recent success with Reproducible Builds (as some F-Droid maintainers are also active in the Debian scene): https://f-droid.org/en/2025/05/21/making-reproducible-builds...
Eliminating nondeterminism from your builds might require some thinking; there are a number of places it can creep in (timestamps, random numbers, nondeterministic execution, ...). A good package manager can at least give you tooling to validate that you have eliminated it (e.g. `guix build --check ...`).
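Concretely, with `hello` standing in for any package you maintain:

$ guix build hello           # first build, cached in the store
$ guix build --check hello   # rebuilds from scratch and complains if the output differs

When the two builds differ, Guix keeps both store outputs around so you can inspect the difference, e.g. with diffoscope.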
Once you control the entire environment and your build is reproducible in principle, you might still encounter some fun issues, like "time traps". Guix has a great blog post about some of these issues and how they mitigate them: https://guix.gnu.org/en/blog/2024/adventures-on-the-quest-fo...
Guix' full-source bootstrap is pretty enlightening on that topic: https://guix.gnu.org/manual/devel/en/html_node/Full_002dSour...
Just use ClearCase/ClearMake, it's been doing all of this software configuration auditing stuff for you since the 1990s.
Git hashes or tags can help identify what was built: the inputs.
You only need to know that for traceability: when you hold the released outputs, but do not hold (or are not sure you hold) the matching inputs.
If builds are reproducible, the traceability becomes more meaningful.
In the TXR project, I have a ./configure option called --build-id. It sets an ID that is appended to the version string embedded in the executable. It is empty by default and unused. It is meant to be useful for people who interact with the code, so they can check what they are running (things can get confusing when you are going back and forth among versions, or making local changes).
If you set the build ID to the word "git", then it is calculated using:
git describe --tags --dirty
That's probably what this author should be using. It gives you a meaningful ID that is related to the most recent release tag, and tells you whether the repo was dirty.

$ git describe --tags --dirty
txr-302-20-g77c99b74e-dirty
We are (sadly, only) 20 commits after 302, at a commit whose short hash is 77c99b74e, and the repo is in a modified state.

I have it rigged in the Makefile so that it keeps track of the most recent build ID in a little .build_id file. If the build ID changes relative to what is in that file, the Makefile forces a rebuild of the .o files which incorporate the build ID.
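A minimal sketch of that force-rebuild arrangement (names are illustrative, not TXR's actual Makefile; recipe lines must be tab-indented):

# Recompute the ID on every make run; rewrite .build_id only when the
# ID actually changes, so dependents rebuild exactly when it moves.
my_build_id := $(shell git describe --tags --dirty 2>/dev/null || echo unknown)

.build_id: FORCE
	@echo '$(my_build_id)' | cmp -s - $@ || echo '$(my_build_id)' > $@
FORCE:

# Objects that bake in the ID depend on the marker file:
version.o: .build_id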
Also, there is no need to be generating dynamic #include material just for this. A simple -Dsymbol=var option in the CFLAGS will define a preprocessor symbol:
CFLAGS += -DMY_BUILD_ID=\"$(my_build_id)\"

It's addressing a distinct problem from "if we rebuild any given version, perhaps some later time, do we even get the same binary?", which is what people usually mean by "reproducible builds".
Your tip that injecting build ids can be done with linker flags without needing to generate header files is a great one.
Passing version info without code generation using linker flags can also be done in other languages and toolchains. E.g. with Go projects, the Go linker exposes an -X flag that can be used to set the value of a string variable in a package [1] [2].
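A hedged sketch of the Go flavor (module path and variable name are made up; the package just needs a plain `var version string` — a constant won't work, since -X patches the variable at link time):

$ go build -ldflags "-X main.version=$(git describe --tags --dirty)" ./cmd/app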
A step beyond this could be to explicitly build a feature into your software to help the user report bugs or request support: e.g. the user clicks a button and the software dumps its own version info plus details about what the user is doing and their machine, packages it up, and sends it to your support queue. It may not make sense for backend services, but you do see support features like this in PC games to help users easily send high-quality bug reports.
[1] https://pkg.go.dev/cmd/link
[2] https://www.digitalocean.com/community/tutorials/using-ldfla...
Which golfs to "traceable" != "reproducible"
Someday, Go programs won't have to do this: https://github.com/golang/go/issues/50603
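In the meantime, a fair amount is already there: since Go 1.18, binaries built from a git checkout embed VCS metadata you can read back without any ldflags (sketch, reading the embedded revision and dirty flag):

package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// Settings include vcs.revision, vcs.time and vcs.modified when
	// the binary was built inside a version-controlled checkout.
	if info, ok := debug.ReadBuildInfo(); ok {
		for _, s := range info.Settings {
			if s.Key == "vcs.revision" || s.Key == "vcs.modified" {
				fmt.Printf("%s=%s\n", s.Key, s.Value)
			}
		}
	}
}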
https://github.com/xrootd/xrootd/blob/master/cmake/XRootDVer...
and also the genversion.sh script at the top of the repo.
I use these plus #cmakedefine and git tags to manage the project version without having to do it via commits.
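The general shape of that technique, as a sketch (file and variable names are illustrative, not XRootD's actual setup): ask git for a describe string at configure time and template it into a header.

# CMakeLists.txt: capture `git describe` at configure time
execute_process(COMMAND git describe --tags --dirty
                WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
                OUTPUT_VARIABLE MY_GIT_VERSION
                OUTPUT_STRIP_TRAILING_WHITESPACE)
configure_file(version.h.in version.h)

# version.h.in: becomes a real #define in the generated version.h
#cmakedefine MY_GIT_VERSION "@MY_GIT_VERSION@"

Anything that includes the generated version.h then sees the current tag/hash with no version-bump commit required.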
We used the same trick (git hash + git diff to monitor uncommitted changes) in a Python simulation framework we are developing for the JAXA/EU space mission "LiteBIRD." [1]
[1] https://iopscience.iop.org/article/10.1088/1475-7516/2025/11...