bool is_leap_year(uint32_t y) {
    // Works for Gregorian years in range [0, 65535]
    // Equivalent to (y % 4 == 0) && (y % 100 != 0 || y % 400 == 0): once y is
    // a multiple of 4, y % 100 == 0 iff y % 25 == 0; and when y % 25 == 0,
    // y % 400 == 0 iff y % 16 == 0, since 400 = 16 * 25.
    return ((!(y & 3)) && ((y % 25 != 0) || !(y & 15)));
}
I didn't find any way to get a compiler to generate a branchless version. I tried clang and GCC, both for amd64, with -O0, -O5, -Os, and for clang, -Oz.
But do you know what's not free? Memory accesses[1]. So when I'm optimizing things, I focus on making things more cache friendly.
[1] http://gec.di.uminho.pt/discip/minf/ac0102/1000gap_proc-mem_...
It's hard to imagine being in a situation where the cost of the additional "year divisible by 100" check, or even the "year divisible by 400" check, is too much to bear, and it's trivial enough that the developer overhead is negligible. But you never know when you need those extra handful of bytes of memory, I guess...
I'm amazed by the fact that there is always someone who will say that such optimizations are totally unnecessary.
What does this mean? Free? Optimisations are totally unnecessary because... instructions are free?
The implementation in TFA is probably on the order of 5x more efficient than a naive approach. This is time and energy as well. I don't understand what "free" means in this context.
Calendar operations are performed probably trillions of times every second across all types of computers. If you can make them more time- and energy-efficient, why wouldn't you?
If there's a problem with modern software it's too much bloat, not too much optimisation.
Not to mention that the final code is basically a giant WTF for anybody reading it. It will be an attractive nuisance that people will be drawn to, like moths to a flame, any time there is a bug around calendar operations.
How many people are rolling their own datetime code? This seems like a totally fine optimization to put into popular datetime libraries.
How are cache / memory accesses relevant in a subroutine that performs a check on a 16-bit number?
> Not to mention that the final code is basically a giant WTF for anybody reading it. It will be an attractive nuisance that people will be drawn to, like moths to a flame, any time there is a bug around calendar operations.
1: comments are your friend
2: a unit test can assert that this function is equivalent to the naive one in about half a millisecond.
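For instance, an exhaustive check over the whole supported range is tiny (a sketch; the reference implementation and the names here are assumptions, not from the article):

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

// The bit-twiddled version quoted at the top of the thread.
static bool is_leap_year_fast(uint32_t y) {
    return ((!(y & 3)) && ((y % 25 != 0) || !(y & 15)));
}

// The textbook Gregorian rule, used as the reference.
static bool is_leap_year_naive(uint32_t y) {
    return (y % 4 == 0 && y % 100 != 0) || y % 400 == 0;
}

int main(void) {
    // Exhaustively compare both versions over the documented range [0, 65535].
    for (uint32_t y = 0; y <= 65535; y++)
        assert(is_leap_year_fast(y) == is_leap_year_naive(y));
    return 0;
}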
> The world could run on older hardware if software optimization was a priority
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110001
Binary searches on OpenZFS B-Tree nodes are faster in part because we did not wait for the compiler:
https://github.com/openzfs/zfs/commit/677c6f8457943fe5b56d7a...
Eliminating comparator-function overhead via inlining is also part of the improvement, which we would not have gotten from the compiler because the OpenZFS code is not built with LTO; so even if the compiler fixes that bug, the patch will still have been useful.
https://cr.yp.to/talks/2015.04.16/slides-djb-20150416-a4.pdf (though see https://blog.regehr.org/archives/1515: "This piece (...) explains why Daniel J. Bernstein’s talk, The death of optimizing compilers (audio [http://cr.yp.to/talks/2015.04.16/audio.ogg]) is wrong", citing https://news.ycombinator.com/item?id=9397169)
https://blog.royalsloth.eu/posts/the-compiler-will-optimize-...
http://lua-users.org/lists/lua-l/2011-02/msg00742.html
https://web.archive.org/web/20150213004932/http://x264dev.mu...
They'll "optimize" your code by deleting it. They'll "prove" your null/overflow checks are useless and just delete them. Then they'll "prove" your entire function is useless or undefined and just "optimize" it to a no-op or something. Make enough things undefined and maybe they'll turn the main function into a no-op.
In languages like C, people are well advised to disable some problematic optimizations and explicitly force the compiler to assume some implementation details to make things sane.
For example:
if (p == NULL) return;
if (p == NULL) doSomething();
It is safe to delete the second one. Even if it is not deleted, it will never be executed.

What is problematic is when they remove something like memset() right before a free operation, when the memset() is needed to sanitize sensitive data like encryption keys. There are ways of forcing compilers to retain the memset(), such as using functions designed not to be optimized out, like explicit_bzero(). You can see how we took care of this problem in OpenZFS.
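A minimal sketch of that dead-store problem (not the OpenZFS code; explicit_bzero() is the helper mentioned above):

#include <string.h>  // memset(); glibc also declares explicit_bzero() here
#include <strings.h> // the BSDs declare explicit_bzero() here

void handle_secret(void)
{
    unsigned char key[32];
    // ... derive and use the key ...

    // A compiler may prove this store is never read again and delete it:
    memset(key, 0, sizeof key);

    // This call is guaranteed to actually clear the memory:
    explicit_bzero(key, sizeof key);
}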
char *allocate_a_string_please(int n)
{
    if (n + 1 < n)
        return 0; // overflow
    return malloc(n + 1); // space for the NUL
}
This code seems okay at first glance: it's a simple integer overflow check that makes sense to anyone who reads it. The addition will overflow when n equals INT_MAX, it's going to wrap around, and the function will return NULL. Reasonable.

Unfortunately, we cannot have nice things because of optimizing compilers and the holy C standard.
The compiler "knows" that signed integer overflow is undefined. In practice, it just assumes that integer overflow cannot ever happen and uses this "fact" to "optimize" this program. Since signed integers "cannot" overflow, it "proves" that the condition always evaluates to false. This leads it to conclude that both the condition and the consequent are dead code.
Then it just deletes the safety check and introduces potential security vulnerabilities into the software.
They had to add literal compiler builtins to let people detect overflow conditions and make the compiler actually generate the code they want it to generate.
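For example, the check above can be rewritten with the GCC/Clang builtin so there is no undefined behaviour left for the optimizer to exploit (a sketch):

#include <stdlib.h>

char *allocate_a_string_please(int n)
{
    int total;
    if (__builtin_add_overflow(n, 1, &total))
        return 0; // overflow detected without relying on signed wraparound
    return malloc(total); // space for the NUL
}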
Fighting the compiler's assumptions and axioms gets annoying at some point and people eventually discover the mercy of compiler flags such as -fwrapv and -fno-strict-aliasing. Anyone doing systems programming with strict aliasing enabled is probably doing it wrong. Can't even cast pointers without the compiler screwing things up.
No one does this
You aren't going to beat the compiler if you have to consider a wide range of inputs and outputs, but that isn't a typical setting, and you actually can beat it. Even in general settings this can be true, because it's still a really hard problem for the compiler to infer things you might know. That's why C++ has all those compiler hints, and why people optimize with GCC flags other than -O.
It's often easy to beat BLAS at matrix multiplication if you know some conditions on the matrix, because BLAS will first check to find the best algorithm and might not know what you know (you could of course call the right routine directly, and there you likely won't win, but then you're competing against a person, not the compiler).
Never overestimate the compiler. The work the PL people do is unquestionably useful, but they'll also be the first to say you can beat it.
You should always do what Knuth suggested (the often misunderstood "premature optimization" quote) and get the profile.
More broadly, it 100% depends on your workload. You'd be surprised at how many workloads are compute-bound, even today: LLM inference might be memory bound (thus how it's possible to get remotely good performance on a CPU), but training, esp. prefill, is very much not. And once you get out of LLMs, I'd say that most applications of ML tend to be compute bound.
And since you are talking about memory. Code also goes in memory. Shorter code is more cache friendly.
I don't see a use case where it matters for this particular application (it doesn't mean there isn't) but well targeted micro-optimizations absolutely matter.
The bigger issue is that probably you don't need to do leap-year checks very often so probably your leap-year check isn't the place to focus unless it's, like, sending a SQL query across a data center or something.
> modern CPUs are so fast that instructions are almost free.
Please don't. These things compound. You especially need to consider that typical computer usage involves more than one application running at a time. There's a tragedy-of-the-commons issue that's often ignored. It can be ignored if you're optimizing your code (you're minimizing your share!), but it can't be if you're not.
I guarantee you we'd have a lot of faster things if people invested even a little time (these also compound :). Two great examples might be Llama.cpp and FlashAttention. Both of these have had a huge impact for people (among a number of other works) but don't get nearly the same attention as other stuff. These are popular instances, but I promise you there's a million problems like these waiting to be solved. It's just not flashy, but hey, plumbers and garbagemen are pretty critical jobs too.
There is no silver bullet.
It's >3 machine instructions (and I admire the mathematical tricks included in the post), but it does do thousands of calculations fairly quickly :)
And I call this thing “bit gymnastics”.
> How does this work? The answer is surprisingly complex.
I don't think anyone is surprised by the complexity of any explanation for that algorithm :D
There is no year 0; it goes from 1 BC straight to 1 AD. So testing whether 0 is a leap year is moot.
Not true if you use astronomical year numbering: https://en.m.wikipedia.org/wiki/Astronomical_year_numbering
Which is arguably the right thing to do outside of specific domains (such as history) in which BCE is entrenched
If your software really has to display years in BCE, I think the cleanest way is store it as astronomical year numbering internally, then convert to CE/BCE on output
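A sketch of that output-side conversion (the names are hypothetical): astronomical year 0 is 1 BCE, -1 is 2 BCE, and so on.

#include <stdio.h>

// Print an astronomical year (..., -1, 0, 1, 2, ...) in CE/BCE notation.
static void print_year(int astronomical_year)
{
    if (astronomical_year <= 0)
        printf("%d BCE\n", 1 - astronomical_year); // 0 -> 1 BCE, -1 -> 2 BCE
    else
        printf("%d CE\n", astronomical_year);
}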
So what happens when it's 1582? (sorry, currently no time to articulate a good wiki fix)
Without that design constraint, testing for leap years becomes locale-dependent and very complex indeed.
Everything before the introduction of the Gregorian calendar is moot:
"In 1582, the pope suggested that the whole of Europe skip ten days to be in sync with the new calendar. Several religious European kingdoms obeyed and jumped from October 4 to October 15."
So you cannot use any date recorded before that time for calculations.
And before that it gets even more random:
"The priests’ observations of the lunar cycles were not accurate. They also deliberately avoided leap years over superstitions. Things got worse when they started receiving bribes to declare a year longer or shorter than necessary. Some years were so long that an extra month called Intercalaris or Mercedonius was added."
The optimizations that compilers can achieve kind of amaze me.
Indeed, the latest version of cal from util-linux keeps it simple in the C source:
return ( !(year % 4) && (year % 100) ) || !(year % 400);
https://github.com/util-linux/util-linux/blob/v2.41/misc-uti...
Does anyone have a collection of these things?
https://github.com/ridiculousfish/libdivide
It was that, generally, the fast hardware multiplication operations in ALUs didn't have very many bits in the register word length, so multiplications of wider words had to be done with library functions that did long multiplication in (say) base 256.
So this code in the headlined article would not be "three instructions" but three calls to internal helper library functions used by the compiler for long-word multiplication, comparison, and bitwise AND; not markedly more optimal than three internal helper function calls for the three original modulo operations, and in fact less optimal than the bit-twiddled modulo-powers-of-2 version found halfway down the headlined article, which would only need to check the least significant byte and not call library functions for two of the 32-bit modulo operations.
Bonus points to anyone who remembers the helper function names in Microsoft BASIC's runtime library straight off the top of xyr head. It is probably a good thing that I finally seem to have forgotten them. (-: They all began with "B$" as I recall.
https://en.wikipedia.org/wiki/Hacker's_Delight
https://graphics.stanford.edu/~seander/bithacks.html
It is not on the list, but #define CMP(X, Y) (((X) > (Y)) - ((X) < (Y))) is an efficient way to do generic comparisons for things that want UNIX-style comparators. If you compare the output against 0 to check for some form of greater than, less than or equality, the compiler should automatically simplify it. For example, CMP(X, Y) > 0 is simplified to (X > Y) by a compiler.
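For instance, as a qsort() comparator (a sketch for int elements):

#include <stdlib.h>

#define CMP(X, Y) (((X) > (Y)) - ((X) < (Y)))

static int cmp_int(const void *a, const void *b)
{
    // Unlike the classic "return *a - *b;", this cannot overflow.
    return CMP(*(const int *)a, *(const int *)b);
}

// usage: qsort(array, n, sizeof(int), cmp_int);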
The signum(x) function, which is equivalent to CMP(X, 0), can be done in 3 or 4 instructions, depending on your architecture, without any comparison operations:
https://www.cs.cornell.edu/courses/cs6120/2022sp/blog/supero...
It is such a famous example, that compilers probably optimize CMP(X, 0) to that, but I have not checked. Coincidentally, the expansion of CMP(X, 0) is on the bit hacks list.
There are a few more superoptimized mathematical operations listed here:
https://www2.cs.arizona.edu/~collberg/Teaching/553/2011/Reso...
Note that the assembly code appears to be for the Motorola 68000 processor and it makes use of flags that are set in edge cases to work.
Finally, there is a list of helpful macros for bit operations that originated in OpenSolaris (as far as I know) here:
https://github.com/freebsd/freebsd-src/blob/master/sys/cddl/...
There used to be an Open Solaris blog post on them, but Oracle has taken it down.
Enjoy!