Or (for CMake or Ninja) use a CSV that says how long each object takes to build, and use it to estimate how much is left?
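Something like this sketch, say; the CSV itself is hypothetical (columns object,seconds, recorded from an earlier instrumented build), and "object file already exists" is a crude proxy for "already built":

import csv, os, sys

# Estimate remaining compile time from a hypothetical CSV of per-object build times
# (columns: object,seconds), e.g. recorded during an earlier full build.
def remaining_seconds(csv_path, build_dir):
    total = done = 0.0
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            secs = float(row["seconds"])
            total += secs
            # An object that already exists in the build tree counts as done.
            if os.path.exists(os.path.join(build_dir, row["object"])):
                done += secs
    return total - done

if __name__ == "__main__":
    print(f"~{remaining_seconds(sys.argv[1], sys.argv[2]):.0f}s of compile time left (estimate)")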
> Many software projects take a long time to compile. Sometimes that’s just due to the sheer amount of code, like in the LLVM project. But often a build is slower than it should be for dumb, fixable reasons.
What limits your tool to compilers/build tools? Can it be used for any arbitrary process?
This looks like a really cool tool!
Hundreds of thousands of processes were normal.
I am stuck in an environment with CMake, GCC and Unix Make (no clang, no ninja) and getting detailed information about WHY the build is taking so long is nearly impossible.
It's also a bit of a messy build with steps like copying a bunch of files from the source into the build folder. Multiple languages (C, C++, Fortran, Python), custom cmake steps, etc.
If this tool can handle that kind of mess, I'll be very interested to see what I can learn.
I can get it to work for some sub-sets of our project, but for quite a bit of it I get the following error:
cc1: error: cannot load plugin /opt/rh/gcc-toolset-13/root/usr/lib/gcc/x86_64-redhat-linux/13/plugin/externis.so: /opt/rh/gcc-toolset-13/root/usr/lib/gcc/x86_64-redhat-linux/13/plugin/externis.so: undefined symbol: _Z14decl_as_stringP9tree_nodei
I suspect this is because these are C or Fortran sub-projects. I'm looking for some clean way to tell CMake to apply externis only to the C++ subprojects, if possible. I'll see what I can come up with.
I'd also like to know: if multiple GCC commands end up pointing to the same trace.json, especially in a parallel build, will externis automagically ensure that it doesn't step over itself?
target_compile_options(${NAME} PUBLIC
  "$<$<COMPILE_LANGUAGE:CXX>:-fplugin=externis;-fplugin-arg-externis-trace-dir=(where I want to put traces)>"
)
But as I suspected, it is not a single trace file. It's thousands of trace files. Is there some way to collate all the data into one larger picture of how the build progressed?
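In case it helps, merging them yourself is not much work. A minimal sketch, assuming each per-TU file is Chrome trace-event JSON (either a bare event array or an object with a traceEvents key) and that giving every file its own pid is enough to keep them on separate tracks in the viewer:

import glob, json, os, sys

# Merge per-TU Chrome-trace files (trace-dir/*.json) into one combined trace.
merged = []
for pid, path in enumerate(sorted(glob.glob(os.path.join(sys.argv[1], "*.json")))):
    with open(path) as f:
        data = json.load(f)
    # Trace files are either a bare event array or {"traceEvents": [...]}.
    events = data["traceEvents"] if isinstance(data, dict) else data
    for ev in events:
        ev["pid"] = pid  # one "process" track per translation unit
    merged.extend(events)

with open("combined.json", "w") as f:
    json.dump({"traceEvents": merged}, f)

Both chrome://tracing and Perfetto will open the combined file.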
If the compiler is working harder wouldn't that result in a more compact binary? Maybe I'm thinking too much from an embedded software POV.
I suppose the compiler does eventually do I/O, but I/O isn't really the constraint most of the time, right?
There may be times when CMake itself is the bottleneck, but it's almost certainly an issue with your dependencies and so on. CMake has many features to help you speed up your compile and link times too, but it would take a series of blog posts to describe how to go about it.
Just trying to add that argument with 3.26.5 on Rocky Linux 9, I get 'Unknown argument --profiling-format=google-trace'.
Not sure why, as cmake --help clearly states it should be there...
--profiling-format=<fmt> = Output data for profiling CMake scripts.
Supported formats: google-trace
--profiling-output=<file> = Select an output path for the profiling data
enabled through --profiling-format.

Anyway, it looks like it only profiles the configure/generate steps. Not much use on Linux, but on Windows/macOS, perhaps. Due to the lack of any standard package manager, it's a good idea to build every dependency from source on those OSs, and the time can mount up.
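For reference, the invocation looks something like this (flags as documented above; the google-trace output is Chrome trace-event JSON you can open in chrome://tracing or Perfetto):

cmake -S . -B build --profiling-format=google-trace --profiling-output=cmake-profile.json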
My project is not that large, but it takes 1 minute to configure from scratch on Windows, and 10 minutes (!) on macOS.
Bizarre that you see 10 minutes on MacOS. Something's definitely busted there. It's not even that bad for me on Windows, and that's saying something.
On Windows: 50 try_compiles, about half and half SDL2 and libuv, and each one takes ~1 second.
I don't think either of these try_compile turnaround times is acceptable (what is it doing?! I bet it's like 50 ms on Linux) but the total figure does now feel a bit less mysterious.
I also want to point out that if you want Ninja, it's a very small binary to build, with no dependencies that wouldn't already be installed.
This is one of the simpler ways to find out what's going on: https://alangrow.com/blog/profiling-every-command-in-a-makef... Instead of replacing the shell, you can also (if I recall correctly) replace the CC or CXX variables, either in CMake or in the Makefile, to point them at a wrapper script that logs stuff.
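Something along these lines usually does the job (a rough sketch; the REAL_CC variable and the log path are placeholders to adapt):

#!/usr/bin/env python3
# Hypothetical CC/CXX wrapper: times the real compiler and appends one line per invocation.
import os, subprocess, sys, time

REAL_CC = os.environ.get("REAL_CC", "cc")  # assumption: point this at your actual compiler
start = time.monotonic()
ret = subprocess.call([REAL_CC] + sys.argv[1:])
elapsed = time.monotonic() - start
# Parallel jobs append concurrently; short one-line writes are good enough for a rough log.
with open("/tmp/compile-times.log", "a") as log:
    log.write(f"{elapsed:.2f}s\t{' '.join(sys.argv[1:])}\n")
sys.exit(ret)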
More generalized Makefile profiling: https://github.com/konturio/make-profiler
Profiling with Clang: https://aras-p.info/blog/2019/01/16/time-trace-timeline-flam... (Maybe what I used last time...)
I am pretty sure you can get shiny outputs from a free profiling tool, but I think you can search for that yourself. It should be easy to find something. Good luck!
I have a similar problem, with a tangential question that I think about from time to time without really having the time to investigate it further, unfortunately.
I notice sometimes that CMake recompiles files that shouldn't have been affected by the code changes made previously. Like recompiling some independent objects after only slight changes to a .cpp file without any interface changes.
So I often wonder whether CMake is making some files more interdependent than they really are, leading to longer compilation times.
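One way to check, if you're on the Makefile or Ninja generator respectively, is to ask the underlying build tool why it thinks a target needs rebuilding:

make -d
ninja -d explain

make -d is very verbose but prints which prerequisite made each target out of date; ninja -d explain prints one reason line per dirty edge.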
Is there a version available for macOS today? I'd love to give it a whirl... for Rust, C++/Swift and other stuff.
Thanks!
Developers always get it wrong and do it badly.
I’ve noticed on my huge catkin cmake project that cmake is checking the existence of the same files hundreds of times too. Is there anything that can hook into fork() and provide a cached value after the first invocation?
- switch to Ninja to avoid that exact issue, since CMake + Make spawns a subprocess for every directory (use the binary from PyPI for jobserver integration)
- catkin as in ROS? rm /opt/ros/noetic/etc/catkin/profile.d/99.roslisp.sh to remove 2 python spawns per package
This is an important observation that is often overlooked. What’s more, changes to the information on which this “baked in” build logic is based are not tracked very precisely.
How close can we get to this “speed of light” without such “baking in”? I ran a little benchmark (not 100% accurate for various reasons but good enough as a general indication) which builds the same project (Xerces-C++) both with ninja as configured by CMake and with build2, which doesn’t require a separate step and does configuration management as part of the build (and with precise change tracking). Ninja builds this project from scratch in 3.23s while build2 builds it in 3.54s. If we omit some of the steps done by CMake (like generating config.h) by not cleaning the corresponding files, then the time goes down to 3.28s. For reference, the CMake step takes 4.83s. So a fully from-scratch CMake+ninja build actually takes 8s, which is what you would normally pay if you were using this project as a dependency.
kbuild handles this on top of Make by having each target depend on a dummy file that gets updated when e.g. the CFLAGS change. It also treats Make a lot more like Ninja (e.g. avoiding putting the entire build graph into every Make process) -- I'd be interested to see how it compares.
I would think about a different name. Often names are either meant to be funny or just unique nonsense, but something short and elegantly descriptive (like BuildViz) can go a long way toward making it seem more legitimate and becoming more widely used.
Meaning I can only try the software after signing up, or did I miss the obvious repo link?
What language is this project written in, and what build system does it use itself?
I just don't feel comfortable pasting my email address into a Google site. I can use a temp mail, but I will lose access to it in a few minutes/hours, so I don't know if that would annoy you.
What kind of times do you expect in the form, serial or parallel build time? And what kind of file do you want modified? When I modify main.c, basically nothing gets rebuilt; when I modify the central header file, it's practically a total rebuild. Can you clarify that in the form?
Whilst we regrettably never got around to packaging it as a proper product, we found it immensely valuable in our consulting work, to pinpoint issues and aid the conversion to BUCK/Bazel. We used Graphviz, https://perfetto.dev/ and a couple of other tools to visualise things.
Recently we circled back to this too, but with a broader use case in mind.
There are some inherent technical challenges with this approach & domain:
- syscall logs can get huge, especially when saved to disk. Our strace logs would get over 100GB for some projects (LLVM was around ~50GB); a minimal capture command is sketched after this list
- some projects also use HTTPS and inter-process communication, and that needs to be properly handled too. (We even had a customer retrieving code from a Firebird database via Perl as part of the compilation step!)
- It's runtime analysis - you might need to repeat the analysis for each configuration.
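If anyone wants to try the raw-syscall route themselves, the capture side is roughly this (strace flags from memory, adjust to taste; the multi-gigabyte logs described above arrive quickly):

strace -f -ttt -e trace=process -o build.trace make -j8

The hard part is turning that log into something readable, which is where Perfetto and the other visualisation tools came in for us.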
I've used it with projects generated by UBT and CMake. I can't remember if it provides any info that'd let you assess the quality of build parallelism, but it does have some compiler front end info which is pretty straightforward to read. Particularly expensive headers (whether inherently so, or just because they're included a lot) are easy to find.
This would have been really useful 6 months ago, when I was trying to figure out what on earth some Jetson tools actually did to build and flash an OS image.
> Here it is recording the build of a macOS app:
> <gif>
This should be at the top of the page, right under the header.
You made a thing, so show the thing. You can waffle on about it later. Just show the thing.
It visualizes each crate's build, shows the dependencies between them, shows when the initial compilation is done that unblocks dependents, and soon will have link information.
This is an ad, not a helpful announcement.
Without trying to devalue it, note that VS and Xcode have similar visualization tools.
I've used clang's -ftime-trace option in the past and that's also really good. It's a pity gcc has nothing similar.
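For anyone who hasn't tried it: you just add the flag, e.g.

clang++ -ftime-trace -c expensive_file.cpp

and a per-TU JSON trace lands next to the object file; chrome://tracing, Perfetto, or Speedscope will open it.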
bgirard•5mo ago
I did a lot of work to improve the Mozilla build system a decade ago; I would have loved this tool back then. I wish they had said what problem they found.
dhooper•5mo ago
My call with the Mozilla engineer was cut short, so we didn't have time to go into detail about what he found. I want to look into it myself.
bvisness•5mo ago
- Lots of constant-time slowness at the beginning and end of the build
- Dubious parallelism, especially with unified builds
- Cargo being Cargo
Overall it mostly looks like a soup of `make` calls with no particular rhyme or reason. It's a far cry from the ninja example the OP showed in his post.
epage•5mo ago