This kind of optimization (complete loop removal, computing the final value for simple math loops) is at least 10 years old.
Seems like the author is both surprised and delighted with an optimization they learned of today. Surely you’ve been in the same situation before.
In topics like compiler optimization, it's not like there are many books that describe these kinds of algorithms.
https://github.com/llvm/llvm-project/blob/release/21.x/llvm/...
https://github.com/llvm/llvm-project/blob/release/21.x/llvm/...
> but in vim searching for the exact symbol under the cursor is a single character shortcut
* requires two key presses, which is identical to <C-]> or g] https://kulkarniamit.github.io/whatwhyhow/howto/use-vim-ctag...
The idea of good abstractions and such falls apart the moment the target environment itself is not a good abstraction.
If it was 16K lines of modular "compositional" code, or a DSL that compiles in some provably-correct way, that would make me confident. A single file with 16K lines of -- let's be honest -- unsafe procedural spaghetti makes me much less confident.
Compiler code tends to work "surprisingly well" because it's beaten to death by millions of developers throwing random stuff at it, so bugs tend to be ironed out relatively quickly. Go off the beaten path, though, and it rapidly turns into a mess of spiky brambles.
The Rust development team for example found a series of LLVM optimiser bugs related to (no)aliasing, because C/C++ didn't use that attribute much, but Rust can aggressively utilise it.
I would be much more impressed by 16K lines of provably correct transformations with associated Lean proofs (or something), and/or something based on EGG: https://egraphs-good.github.io/
gcc was and is an incredible achievement, but it is traditionally considered difficult to implement many modern compiler techniques in it. It's at least unpleasant, let's put it this way.
https://github.com/gcc-mirror/gcc/blob/master/gcc/tree-chrec... https://github.com/llvm/llvm-project/blob/release/21.x/llvm/...
On the other hand, I find MSVC and especially ICC output to be quite decent, although I have never seen their source code.
Having inspected the output of compilers for several decades, it's rather easy to tell them apart.
1. organized data flow analysis
2. recognizing a pattern and replacing it with a faster version
The first is very effective over a wide range of programs and styles, and is the bulk of the actual transformations. The second is a never-ending accumulation of patterns, where one reaches diminishing returns fairly quickly.
The example in the linked article is very clever and fun, but not really of much value (I've never written a loop like that in 45 years). As mentioned elsewhere "Everyone knows the Gauss Summation formula for sum of n integers i.e. n(n+1)/2" and since everyone knows it why not just write that instead of the loop!
Of course one could say that for any pattern, like replacing i*2 with i<<1, but those pattern replacements are very valuable because they are generated by high level generic coding.
And you could say I'm just being grumpy about this because my optimizer does not do this particular optimization. Fair enough!
A hard problem in optimization today is trying to fit code into the things complex SSE-type instructions can do. Someone recently posted an example where they'd coded a loop to count the number of one bits in a word, and the compiler generated a "popcount" instruction. That's impressive.
dejj•1mo ago
emih•1mo ago
If you mean graph coloring restricted to planar graphs, yes it can always be done with at most 4 colors. But it could still be less, so the answer is not always the same.
(I know it was probably not a very serious comment but I just wanted to infodump about graph theory.)