Output synchronization, which makes `make` print a target's stdout/stderr only once that target finishes. Otherwise output is typically interleaved and hard to follow:
make --output-sync=recurse -j10
On busy or multi-user systems, a fixed `-j` job count may not be best. You can instead limit parallelism based on load average: make -j10 --load-average=10
Randomizing the order in which targets are scheduled. This is useful in CI to harden your Makefiles and catch missing dependencies between targets: make --shuffle # or --shuffle=seed/reverse
But these flags are not portable. Please don't use them outside of your own non-distributable toy projects.
But there's another huge category: people who are automating something that's not open-source. Maybe it stays within the walls of their company, where it's totally fine to say "build machines will always be Ubuntu" or whatever other environment their company prefers.
GNU Make has a ton of powerful features, and it makes sense to take advantage of them if you know that GNU Make will always be the one you use.
GNU Make is feature rich and is itself portable. It's also free software, as in freedom. Just use it.
Just like optimization, it has its place and time.
Some GNU Make constructs, like pattern rules, are indispensable in all but the simplest projects, but can also be overused.
For some reason there's a strong urge to programmatically generate build rules. But like with SQL queries, going beyond the parameterization already built into the language can be counter productive. A good Makefile, like a good SQL query, should be easy to read on its face. Yes, it often means greater verbosity and even repetition, but that can be a benefit to be embraced (at least embraced more than is instinctively common).
EDIT: Computed variable references are well-defined as of POSIX-2024, including (AFAICT) on the left-hand side of a definition. In the discussion it was shown the semantics were already supported by all extant implementations.
Otherwise you face an ocean of choices that can be overwhelming, especially if you're not very experienced in the problem space. It's like the common refrain with C++: most developers settle on a subset of C++ to minimize code complexity; but which subset? (They can vary widely, across projects and time.) In the case of Make, you can just pick the POSIX and/or de facto portable subset as your target, avoiding a lot of choice paralysis/anxiety (though you still face it when deciding when to break out of that box to leverage GNU extensions).
EDIT: There's one exception, and that would be using Guile as an extension language, as that is often not available. However, thanks to conditionals (also not in POSIX, of course), it can be used optionally. I once sped up a Windows build by an order of magnitude by implementing certain things in Guile instead of calling shell (which is notoriously slow on Windows).
However, if `make -j` saturates a machine, and this is unintentional, I'd generally assume PEBKAC, or "holding it wrong".
I get that the OS could mitigate this, but that’s often not an option in professional settings. The reality is that most of the time users are expecting ‘make -j $(N_PROC)’, get bit in the ass, and then the GNU maintainers say PEBKAC—wasting hundreds of hours of junior dev time.
They are junior because they are inexperienced, but being junior is the best place to make mistakes and learn good habits.
If somebody asks what is the most important thing I have learnt over the years, I’d say “read the manual and the logs”.
Make does not provide a sane way to run in parallel. You shouldn’t have to compose a command that parses /proc/cpuinfo to get the desired behavior of “fully utilize my system please”. This is not a detail that is particularly relevant to conditional compilation/dependency trees.
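To the point above: you don't actually have to parse /proc/cpuinfo by hand, though Make itself doesn't help you. A minimal sketch, assuming the usual Linux/macOS utilities (availability varies by system):

```shell
# Ask the system for a core count instead of parsing /proc/cpuinfo.
# nproc is GNU coreutils; getconf _NPROCESSORS_ONLN works on Linux and macOS.
# The trailing "echo 2" is an arbitrary conservative fallback.
cores=$(nproc 2>/dev/null || getconf _NPROCESSORS_ONLN 2>/dev/null || echo 2)
echo "detected $cores cores"
# then run: make -j"$cores"
```

The complaint stands, of course: this is a wrapper the user has to write, not behavior Make provides.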
This feels like it’s straight out of the Unix Haters Handbook.
[0]: https://web.mit.edu/~simsong/www/ugh.pdf see p186
I would put that in the “using it improperly” category. I never use⁰ --jobs without specifying a limit.
Perhaps there should have been a much more cautious default instead of the default being ∞, maybe something like four¹, or even just 2, and if people wanted infinite they could just specify something big enough to encompass all the tasks that could possibly run in the current process. Or perhaps --load-average should have defaulted to something like max(2, CPUs×2) when --jobs was in effect⁴.
The biggest bottleneck when using --jobs back then wasn't RAM or CPU, though: it was random IO on traditional high-latency drives. A couple of parallel jobs could make much better use of even a single single-core CPU, since the CPU-crunching of a busy task or two would overlap with the IO of the others, but too many concurrent tasks would cause an IO flood that could practically stall the affected drives for a time, putting the CPU back into a state of waiting ages for IO (probably longer than with no parallel jobs at all). This would throttle a machine² before it ran out of RAM, even with the small RAM we had back then compared to today. With modern IO and core counts, I can imagine RAM being the bigger issue now.
--------
[0] Well, used, I've not touched make for quite some time
[1] Back when I last used make much at all, small USB sticks and SD cards were not uncommon, but SSDs big, quick, and hardy enough for system or work drives were an expensive dream. With frisbee-based drives I found a four-job limit was often a good compromise, approaching but not hitting significantly diminishing returns if you had sufficient otherwise unused RAM, while keeping a near-zero chance of effectively completely stalling the machine due to a flood of random IO.
[2] Or every machine… I remember some fool³ bogging down the shared file server of most of the department with a vast parallel job, ignoring the standing request to run large jobs on local filesystems where possible anyway.
[3] Not me, I learned the lesson by DoSing my home PC!
[4] Though in the case of causing an IO storm on a remote filesystem, a load-average limit might be much less effective.
Personally, I don’t think these footguns need to exist.
Maybe the make authors could compile a list of options somewhere and ship it with their program, so users could read them? Something like a text file or using some typesetting language. This would make that knowledge much more accessible.
Will give you the command line options. And GNU make has decent documentation online for everything else:
https://www.gnu.org/software/make/manual/html_node/index.htm...
(`make --help` will only print the most common options)
Can’t the OS scheduler handle it?
Makefiles are eerily Lisp-like Turing tarpits. GNU Make even has metaprogramming capabilities. Resisting the urge to metaprogram some unholy system inside the makefile can be difficult. The ubiquity of GNU Make makes it quite tempting.
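For anyone who hasn't seen the temptation firsthand, here's a hedged sketch of the kind of metaprogramming meant: GNU Make's define/call/eval (all GNU extensions, not POSIX) can stamp out rules for a list of modules. The module names (core, net, ui) are invented for illustration:

```makefile
# Generate an archive rule per module directory.
# $$ escapes survive the first expansion so eval sees live variables.
define make-module-rule
$(1)-objs := $$(patsubst %.c,%.o,$$(wildcard $(1)/*.c))
$(1).a: $$($(1)-objs)
	$$(AR) rcs $$@ $$^
endef

$(foreach mod,core net ui,$(eval $(call make-module-rule,$(mod))))
```

It works, but the resulting Makefile can only be understood by mentally running the expansion, which is exactly the "unholy system" problem.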
Basically, it's considered too hard, if not impossible, to statically define a target's dependencies. This is now done dynamically with tools like `clang-scan-deps` [2].
[1] https://cmake.org/cmake/help/latest/manual/cmake-cxxmodules....
[2] https://llvm.org/devmtg/2019-04/slides/TechTalk-Lorenz-clang...
People who care about build systems are a special kind of nerd. Programmers are often blissfully ignorant of what it takes to build large projects - their experience is based around building toy projects, which is so easy it doesn't really matter what you do.
In my experience, once a project has reached a certain size, you need to lay down simple rules that programmers can understand and follow, to keep them from exploding the build times. Modules make that extra hard.
The interesting bits are, for example, the -MMD flag to gcc, which outputs a .d file you can pull in with `-include $(wildcard *.d)`, giving you free, up-to-date dependencies for your headers etc.
That, and 'vpath' to tell it where to find the source files for % rules, and really, all the hard work is done: your half-page Makefile will stay the same 'forever' and will still work in 20 years...
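A minimal sketch of the -MMD idea, assuming GCC or Clang (the program name `prog` is illustrative):

```makefile
# -MMD writes foo.d next to foo.o listing the headers foo.c included;
# -MP adds phony targets so deleting a header doesn't break the build.
CFLAGS += -MMD -MP
OBJS   := $(patsubst %.c,%.o,$(wildcard *.c))

prog: $(OBJS)
	$(CC) $(LDFLAGS) $^ -o $@

# Pull in the generated dependency files; they won't exist on the
# first build, which is fine since everything gets compiled anyway.
-include $(wildcard *.d)
```

After the first build, touching any header recompiles exactly the objects that include it, with no hand-maintained dependency lists.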
Not saying make is strictly better here but at least you can find plenty of examples and documentation on it.
- Task (Go): https://github.com/go-task/task
- Cake (C#): https://github.com/cake-build/cake
- Rake (Ruby): https://github.com/ruby/rake
Or an entirely different concept: Makedown, as discussed on HN 8 months ago: https://news.ycombinator.com/item?id=41825344
And you see this on the other side of the problem area too, where large and ugly tools like cmake are trying to do what older large and ugly software like autotools did, and trying to replace make. And they suck too.
I continue to believe the GNU make in the late 80's was and remains a better generic tool than everything in the modern world in all ways but syntax (and in many cases, again c.f. cmake, it had better syntax too). Had the original v7 syntax used something other than tabs, and understood that variable names longer than 1 byte were a good thing, we might never have found ourselves in this mess.
There are some uses of make, especially by people who have never used it to build C/C++ projects, which make more sense to replace with just. It doesn't have the baggage that make does, and they're not using it to actually make files. They also quite likely don't know the conventions (e.g. what a lot of us expect "make install" to do), and I support them in not learning the conventions of make—as long as they use something else. :)
Other uses of make will need other modern replacements, e.g. Cmake or Bazel.
It is possible that Kids These Days can say "no thanks" when someone tries to teach them make, and that the future of make is more along the lines of something us greybeards complain about. Back in _my_ day, etc.
I knew there was a lot of weirdness and baggage but I am frankly kind of horrified to learn about these "implicit rules" that seemingly automatically activate the C compiler due to the mere presence of a rule that ends in ".c" or ".o"
.SUFFIXES:
$ make foo
Hello foo
This ran too!
That's a contrived example, but some of these take a bit too much thought to work out from the example Makefile alone, in terms of execution order and rule selection. It would just be very helpful to have clear examples of: when I run this, I get this.
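In that spirit, here's a minimal sketch of the implicit-rule behavior under discussion, assuming a C toolchain is installed:

```makefile
# No recipe anywhere, yet 'make foo' still works: GNU Make's built-in
# implicit rules know how to go foo.c -> foo.o -> foo.
foo: foo.o
```

Running `make -n foo` with a `foo.c` present prints something like `cc -c -o foo.o foo.c` followed by `cc foo.o -o foo` — rules nobody wrote, which is exactly what surprises people.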
Oh no. I have never worked with Makefiles but I bet that causes pain and suffering.
I've lost so many hours to missing/extraneous spaces in YAML files that my team recently agreed to get rid of YAML from our Spring Boot codebase.
Of course the real solution is: just use CMake, you dweeb.
I learned Makefiles a bit, using them in one tiny project. Then I checked Autotools, and everything in my brain refused to learn that awkward workaround engine. At the same time Meson [1] appeared, and the problem of builds, dependencies and testing was solved :)
PS: Dependency handling with Meson is awesome.
Agree about yaml though. I still have to look up how to align a multiline string every single time.
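For what it's worth, the thing that has to be looked up every time is which block-scalar indicator does what — a small sketch (key names invented):

```yaml
# '|' keeps the newlines; '>' folds them into spaces.
# The content must be indented relative to the key — the usual stumbling block.
literal: |
  line one
  line two
folded: >
  these lines
  become one line
```

Both also accept a trailing `-` or `+` to strip or keep the final newline, which is a second thing to look up every time.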
I also realised at one point how naturally the idea extends to other tasks that I do. Going by the picture at the top of this site, it seems the author realised a similar thing to me: you can understand food recipes better if you think about them declaratively like makefiles, rather than imperatively like scripts, which is how recipes are traditionally written down.
I wrote about it here: https://blog.gpkb.org/posts/cooking-with-make/
I always scribble down recipes in a way that I can read like a Makefile and take that into the kitchen with me. I'm curious if anyone has tried typesetting or displaying recipes in this way as I feel like it would save a lot of time when reading new recipes as I wouldn't have to convert from a script to a makefile myself.
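To make the idea concrete, a hypothetical recipe written dependency-first, Makefile-style (ingredients and steps invented for illustration, not from an actual recipe):

```makefile
# Each "target" lists what it needs before its step can run;
# independent prerequisites (batter, hot_pan) can proceed in parallel.
pancakes: batter hot_pan
	pour batter into pan; flip when bubbles form

batter: flour eggs milk
	whisk until smooth

hot_pan:
	heat pan on medium for five minutes
```

Reading it this way, you immediately see you can heat the pan while whisking — the parallelism a sequential script hides.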
Of course the tradeoff is that you have to resolve the dependency graph yourself. That’s more work on you when you just want a set of pre-serialised, sequential steps to follow.
People sometimes treat it as a generic “project specific job runner”, which it’s not a good fit for. Even simple conditionals are difficult.
I’ve seen several well-intentioned attempts at wrapping Terraform with it, for example, which have ended terribly.
Edit: Sorry, it looks like I totally misunderstood what you meant by "job runner".
People keep writing and using other alternatives (like just), which provide a very slight improvement on pure shell at the cost of installing yet another tool everywhere.
I stick with bash, write every task as a separate function, and multiplex between them with a case statement (which supports globs et al. and is very readable).
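A minimal sketch of that pattern, with invented task names (build, unit, clean); only the shape of it is from the comment above:

```shell
#!/bin/sh
# Each task is a function; a case statement dispatches on the first argument.

build() { echo "building..."; }

unit() { echo "running unit tests..."; }

clean() { echo "cleaning..."; }

task="${1:-build}"   # default task when none is given
case "$task" in
  build)       build ;;
  unit|test*)  unit ;;          # globs let one branch match several names
  clean)       clean ;;
  *) echo "unknown task: $task" >&2; exit 1 ;;
esac
```

No extra tool to install, and `case` patterns give you the alias/glob matching that most dedicated task runners advertise.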
thanks for the story nonetheless
Is this true? Doesn't macOS ship with BSD make?
[~/home] $ which make
/usr/bin/make
[~/home] $ make --version
GNU Make 3.81
Copyright (C) 2006 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.
This program built for i386-apple-darwin11.3.0
so for all intents and purposes, it's a different make than what most people think about when they say gnu make.
$ gmake --version
GNU Make 4.4.1
Built for aarch64-apple-darwin24.0.0
Copyright (C) 1988-2023 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Though at that point I kinda wonder why they bother shipping bash at all, when their default shell is zsh, and it's entirely possible to have system shell scripts run by the BSD-licensed dash, rather than bash.
There's probably also something to be said about the choice of running ancient GNU make rather than BSD make, but I don't know enough about the differences to say it.
SOURCE_FILES := $(wildcard $(SRC_DIR)/*.c)
HEADER_FILES := $(wildcard $(SRC_DIR)/*.h)
OBJ_FILES := $(patsubst $(SRC_DIR)/%.c,$(BUILD_DIR)/%.o,$(SOURCE_FILES))
.PHONY: build clean
build: $(BUILD_DIR)/$(TARGET)
clean:
	rm -rf $(BUILD_DIR)

$(BUILD_DIR):
	mkdir $(BUILD_DIR)

$(BUILD_DIR)/$(TARGET): $(OBJ_FILES) | $(BUILD_DIR)
	$(LINK.o) $^ $(LDLIBS) -o $@

$(BUILD_DIR)/%.o: $(SRC_DIR)/%.c $(HEADER_FILES) | $(BUILD_DIR)
	$(COMPILE.c) $< -o $@
So when you don't fiddle with inter-file/shared interfaces, you get an incremental rebuild. When you do, you get a full rebuild. Not ideal, but mostly fine, in my experience.

P.S. I just love the way that Make names its built-in variables. The output is obviously $@, but can you quickly tell which of $^ and $< gives you only the first of the inputs? What about $> and $?, do you remember what they do?
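For reference, the GNU Make automatic variables the joke is about (these are the real ones, straight from the manual; the rule below just reuses the names from the example above):

```makefile
# $@  the target
# $<  the first prerequisite
# $^  all prerequisites, duplicates removed
# $+  all prerequisites, duplicates kept (useful for linker ordering)
# $?  only the prerequisites newer than the target
# $*  the stem matched by % in a pattern rule
$(BUILD_DIR)/%.o: $(SRC_DIR)/%.c
	$(CC) $(CFLAGS) -c $< -o $@
```

($> is a BSD make spelling, roughly their $^ — which rather proves the point about memorability.)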
The standalone makedepend(1) that does the work is available in package xutils-dev on Ubuntu.
In fact, I don't think there is any other space-insensitive language apart from early versions of FORTRAN.
It has existed for 8+ years and is still evolving. Give it a try if you're looking for something fresh, and don't hesitate to ask any questions.
Seriously. :o)
Make cares only about how to generate files from other files.
I point this out because this is one of the classic misunderstandings about dependencies that beginners (and sometimes old hands) make. The code inside main.cpp might well depend on code in one.cpp to work, but you don't generate main.cpp from one.cpp et al. You generate the final binary from those other files, and that's what make cares about.
One right way to show it would be to have one.o depend on both one.cpp and on one.h (yes, this is confusing at first), likewise for two.o, and main.exe depend on one.o, two.o, libc and libm. Another way would be to omit the object files completely (as now), and just have main.exe depend directly on all other targets, but this makes for a less helpful example.
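The first corrected version described above would look something like this (file names taken from the comment; linking relies on GNU Make's built-in .cpp rules, and libc is linked implicitly):

```makefile
# The binary is generated from the object files, not the sources directly.
main.exe: main.o one.o two.o
	$(CXX) $^ -lm -o $@

# Each object depends on its source AND the headers it includes;
# no recipes needed — the built-in %.o: %.cpp rule compiles them.
main.o: main.cpp one.h two.h
one.o: one.cpp one.h
two.o: two.cpp two.h
```

Note how one.h appears as a prerequisite of one.o even though the header isn't "compiled" — that's the initially confusing part: prerequisites mean "rebuild me if this changed", not "build me from this".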
ETA: I'd also appreciate it if you would mention in the "Recursive use of make" section that calling make recursively is a Bad Idea [0]. (Why? In short, because no dependency information can cross that process barrier, so there's always a risk that you don't build things in the right order, forget to build something you should, etc. If you have a hierarchy of projects in a subdir hierarchy, it's much better to use a separate "Makefile fragment" file in each subdir, and then "include" them all into a single top-level Makefile, so that make has a chance to ensure that, ya know, things get built before other things that need them.) I realise the GNU docs themselves don't say so, and GNU make has a ton of hacks to accommodate recursive make (suggesting that this is a blessed way), but that is simply unfortunate.
[0]: "Recursive Make Considered Harmful", https://accu.org/journals/overload/14/71/miller_2004/
My teammates gave me a hard time for adding and maintaining .PHONY on all our recipes since we use make as a task runner.
Clark Grubb has a great page explaining a style guide for make files:
https://clarkgrubb.com/makefile-style-guide
Does anyone else use this style guide? Or for phony recipes marking phony at the recipe declaration vs a giant list at the top of the file?
I would love to have a linter that enforced this…
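For anyone who hasn't seen the two styles side by side, a small sketch (task names invented):

```makefile
# Per-recipe style: the .PHONY declaration sits next to the target it marks,
# so adding or deleting a task can't leave the list stale.
.PHONY: build
build:
	./scripts/build.sh

.PHONY: clean
clean:
	rm -rf build/

# ...versus one giant list at the top of the file:
# .PHONY: build clean test deploy lint
```

The giant list is terser, but the per-recipe style is what a linter could actually verify locally.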
https://gittup.org/tup/ex_dependencies.html
It is a build system that automatically determines dependencies based on file system access, so it can work with any kind of compiler/tool.
The GNU make documentation is excellent - some of the best technical writing I've come across.