Strategic optimisation is often basically free if you have domain expertise. It's easy to know that the business wants outcome X and that algorithm Y is the right choice, etc., when it's all internal thought processes. Whereas if you don't know enough, you're likely to make decisions that are very expensive to undo.
> There is no way to provide both optimized assembly and equivalent C code and let the compiler use the former in the general case and the latter in special cases.
This is true, but can be seen as a failure of language and tooling. For example, Halide [1] pioneered (AFAIK) the concept of separating algorithm from implementation at the language level. This separation lets you express the algorithm once, and then "schedule" it by specifying parallelism, vectorization, etc. You can provide multiple schedules for one algorithm, which allows you to specialize / make different choices depending on varying factors.
It's a really interesting concept, though maybe limited in practice to DSLs. I'm not sure a general purpose language would be a good fit for this model, but then, for general purpose programs written in general purpose languages, perf optimization at the level TFA discusses is frequently limited to just specific hot sections. Those hot sections could be extracted out into specialized components written in such a DSL.
https://gcc.gnu.org/onlinedocs/gcc/Function-Multiversioning....
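A minimal sketch of what that looks like (my own example, not taken from the docs): the compiler emits one clone per listed target plus a portable "default", and picks between them at load time via an ifunc resolver.

    // Sketch: GCC function multiversioning with the target_clones attribute.
    // Each clone gets optimized for its target; "default" is the portable fallback.
    __attribute__((target_clones("avx2", "sse4.2", "default")))
    double dot(const double *a, const double *b, int n) {
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += a[i] * b[i];   // auto-vectorized differently per clone
        return sum;
    }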
you don't need to go all the way to Halide to do what the article is claiming isn't possible - you can do it just by including a "micro-kernel" in your library and have the code branch to that impl (depending on something at runtime) instead of whatever the C code compiled down to. this is done every single day in every single GPU lib (famously cublas ships with hundreds/thousands of such ukernels for gemms depending on shapes).
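A minimal sketch of that pattern (nothing to do with cuBLAS internals; the kernel, the shape check, and the threshold are all made up for illustration): ship a tuned kernel next to the generic code and branch to it when its preconditions hold.

    #include <cstddef>

    // Portable fallback.
    void axpy_generic(float a, const float* x, float* y, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) y[i] += a * x[i];
    }

    // Stand-in for a hand-tuned micro-kernel; a real library would ship asm or
    // intrinsics here, possibly several variants per shape/ISA.
    void axpy_ukernel(float a, const float* x, float* y, std::size_t n) {
        for (std::size_t i = 0; i < n; i += 4) {   // requires n % 4 == 0
            y[i]   += a * x[i];
            y[i+1] += a * x[i+1];
            y[i+2] += a * x[i+2];
            y[i+3] += a * x[i+3];
        }
    }

    void axpy(float a, const float* x, float* y, std::size_t n) {
        if (n >= 64 && n % 4 == 0)   // runtime check on the problem shape
            axpy_ukernel(a, x, y, n);
        else
            axpy_generic(a, x, y, n);
    }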
> There is no way to provide both optimized assembly and equivalent C code and let the compiler use the former in the general case and the latter in special cases.
this is manifestly obviously possible (as i've said).
what you're talking about is something completely different that goes by many names and uses many techniques (symbex, conex, sccp, scev, blah blah blah). many of these things are implemented in eg LLVM.
E.g. I wrote a `pextComptime` function, which will compile to just a `pext` instruction on machines that have a fast implementation, otherwise it will try to figure out if it can use a few clever tricks to emit just a couple of instructions, but if those aren't applicable it will fallback on a naïve technique.
https://github.com/Validark/Accelerated-Zig-Parser/blob/8782...
Your suggestions introduce, in effect, a hypothetical `if` statement, only one branch of which is taken. I can change the condition arbitrarily, but ultimately it's still going to be either one or the other.
I want the `if` to take both branches at once. I want the compiler to assume that both branches trigger the exact same side effects and return the same results. I want it to try both approaches and determine the better one depending on the environment, e.g. the number of free registers, (lack of) inlining, facts statically known about the input -- all those things that you can't write a condition for on the source level.
Think about it this way. A standard compiler like LLVM contains passes which rewrite the program in order. If something has been rewritten, it will never be rolled back, except if another pass performs a separate rewrite that explicitly does that. In contrast, e-graphs-based compilers like Cranelift maintain an equivalence graph that represents all possible lowerings, and after the whole graph is built, an algorithm finds a single optimal lowering.
Existing solutions make me choose immediately without knowing all the context. The solution I'd like to see would delay the choice until lowering.
Do you have a good entrypoint reference for learning about how this works? This (and the associated mention in the article) is the first time I've heard of this approach.
I'm glad to know that's the design of Cranelift, since that's how I would think it should be done, although I haven't written a middle-end or backend for a compiler yet.
this is called symbolic execution - as i have already told you, many compilers do this in the form of sccp and scev. you don't need silly things like egraphs for this.
> If something has been rewritten, it will never be rolled back
this is patently false - not only do passes get rolled back to catch/correct errors but there is a fixed-point iteration system (at least in MLIR) that will apply passes as long as they are "profitable".
> not only do passes get rolled back to catch/correct errors but there is a fixed-point iteration system (at least in MLIR) that will apply passes as long as they are "profitable"
Huh? I'm not arguing against fixed-point iteration, that's perfectly fine because it's still a unidirectional process. What do you mean by "passes get rolled back to catch/correct errors" though? Certain rewrites can certainly be not performed in the first place if they pessimize the code, but that's not what I'm talking about.
If there's pass 1 that chooses between rewrite A and B, pass 2 that chooses between rewrite C or D, and pass 3 choosing between E or F, in a typical compiler these choices would be made one by one, mostly greedily. An e-graph-style approach allows all eight of those combinations to be tried out without necessarily leading to a combinatorial explosion.
- https://internals.rust-lang.org/t/expose-llvms-or-disjoint-i...
- https://github.com/rust-lang/libs-team/issues/373
- https://github.com/rust-lang/rust/pull/124601
Which ultimately culminated in an opinion that should sound familiar - https://github.com/rust-lang/rust/pull/124601#issuecomment-2....
One of my early-career successes was just creating a framework for generating every permutation of perf optimizations for every (log-scaled -- clz is very fast) input size and checking which was best, dropping the results into a lookup table of function pointers to branch on. The university had a large supply of heterogeneous computers, replete with all the normal problems like being able to double floating-point addition throughput on Haswell CPUs by abusing the fmadd instruction, so I made a framework (probably closer to a DSL) for encoding your algorithms in a way that you could analyze perf tradeoffs at compile time and tune your result for the given computer. It's kind of like what ATLAS does for some linear algebra tasks.
Such practices are almost never optimal, but they're pretty easy to implement, and the results are near-optimal for almost all inputs. In the tradeoff between human and computer performance, I think it's a nice option.
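Roughly this shape, as a toy reconstruction (the variants, the bucket split, and the hard-coded table stand in for whatever the benchmarking run actually selected on a given machine):

    #include <array>
    #include <bit>        // std::bit_width (C++20); lowers to clz on most targets
    #include <cstddef>

    using Kernel = void (*)(float*, std::size_t);

    // Two illustrative variants; a real framework would have many more per operation.
    void scale_scalar(float* d, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) d[i] *= 2.0f;
    }
    void scale_unrolled(float* d, std::size_t n) {
        std::size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            d[i] *= 2.0f; d[i+1] *= 2.0f; d[i+2] *= 2.0f; d[i+3] *= 2.0f;
        }
        for (; i < n; ++i) d[i] *= 2.0f;
    }

    // One entry per log2(n) bucket; in practice these are filled in by timing
    // every variant on the target machine, not hard-coded like this.
    std::array<Kernel, 64> dispatch = [] {
        std::array<Kernel, 64> t{};
        for (std::size_t b = 0; b < 64; ++b)
            t[b] = (b < 8) ? scale_scalar : scale_unrolled;
        return t;
    }();

    void scale(float* d, std::size_t n) {
        if (n == 0) return;
        dispatch[std::bit_width(n) - 1](d, n);   // branch on floor(log2 n)
    }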
This seems like a nonsensical statement to me. How could measuring be a substitute for thinking/analyzing/predicting/forming a plan?
Measuring/profiling just means observing the system you want to optimize in a systematic way. You certainly won't be very effective at optimizing anything if you don't observe it.
Theoretical calculations means you've formed a model of what's happening and you're devising a plan to optimize against that model. But of course, a useful model needs to represent the significant aspects of your system (and a good model should exclude most insignificant ones). Failing to observe your system means your model could be bad -- focused on insignificant aspects and missing significant ones -- and you'd never know.
Measuring doesn't mean don't think. Measuring and thinking are two different things. You need to do them both to optimize effectively.
The fact remains most projects that do small trivial modular prototypes first will ultimately know which paths are viable before painting themselves into a corner algorithmically.
Best of luck =3
You do profiling because it's WAY too easy to get obsessed about theoretical problems when a simple measurement will show you the actual problems.
You do the math on the actual problem location, not a method with O(n!) which only gets called with n=3.
You still have to look at the entire call stack when profiling (which means thinking about the overarching algorithm).
If a problem area is so intuitively obvious, why would you introduce the problem in the first place? In reality, performance optimizations are usually needed where you least expect them. Which means that you can't get there intuitively. Hence, the suggestion of using profiling to help track down where the problem is instead.
Pretty much everyone I know will throw down an O(n^2) algorithm or whatever in their first pass and replace it with something more thought out once they have the time to think deeply about it.
If you're fretting about optimization at every stage of development, you're really doing it wrong. This is precisely the type of early optimization that everyone and their dog will tell you is bad. You should focus first on making your program work and laying out the logic and data flows. Optimization does not matter until the program is almost finished.
Most of the times I've seen this, the faster algorithm is literally just a dictionary with a well-defined key.
I honestly do not understand why that's not the first solution most devs think of and why n^2 seems to dominate.
As an example, I've seen code like this quite a bit:
    result = []
    for (item: items) {
        for (item2: items) {
            if (item != item2 && item.foo == item2.foo) {
                result.add(item)
            }
        }
    }
Easily replaced, and massively easier to read, as:

    result = []
    itemFoo = HashSet()
    for (item: items) {
        if (itemFoo.contains(item.foo))
            result.add(item)
        else
            itemFoo.add(item.foo)
    }
The mindset of re-looking for values in a collection you are already iterating through is just foreign to me. The first solution for something like this in my mind is always utilizing dictionaries.

A pretty common attitude that I've run into. People take the "premature optimization" quote to mean "Never think about big oh when writing code".
If n is usually less than 100 then sure, maybe it does make sense to do the more brute-force approach. However, what's the impact of using the hash table first? If n is usually less than 100, this method is now usually slower, but is that a problem? Well, you should use a profiler to find out whether it is or isn't; it may be entirely tolerable performance-wise.
On the other hand, if your assumption that "n < 100" doesn't hold up and, indeed, you start seeing n > 100, or 1000, or 10000, then all of a sudden this method becomes a fault in the application you developed that needs to be rewritten.
What's devious about the "n > 100" problem is when you first write the code, you may have very well been correct that "n < 100". But time and inputs change the way code is executed.
A lot of the code I've fixed has been due to this. I've seen the counter where really searching a list/array was the better option only a handful of times.
To me, premature optimization is using a known algorithmically inferior approach, no simpler to write than the superior one, because it's faster for small n, without knowing whether small n will actually be the common case.
Not really. You're just exploring an unknown frontier. Once the frontier is known and you actually understand what you are trying to solve you're still going to put in the right algorithm.
I mean, you might not if you're vibe coding, which I guess is apparently a thing somehow, but I'm assuming we're talking about actual engineering here, not speaking the sordid tales of cowboys. The "premature optimization" line was definitely not directed at those who think code is a feeling.
When the frontier is unknown, whatever algorithm gets something operating as soon as possible is the right one. The given example may be so contrived that the hash map would be what you see as the quickest and dirtiest solution anyway, but with a little more complexity loops do tend to be easier to reason about on a first-pass basis.
> Now all the sudden this method becomes a fault in the application you developed that needs to be rewritten.
In the same way that finding out that a blue button doesn't resonate with users requires you to rewrite your application to make the button green. Such is the life of building software. But that's why engineers get paid the big bucks – nobody else is stupid enough to do it.
> A lot of the code I've fixed has been due to this.
You're not really fixing it, you're updating it to reflect the information you gleaned during the exploratory phase. I mean, unless you're coding on vibes or something, but, again, assuming actual engineering is taking place here. Nobody should be coding non-prototypes by the guide of their feelings.
Good code should follow something along these lines:
- A reasonable understanding of what needs to happen. What are the requirements. Including data set sizes. Including how "hot" this code path is.
- A reasonable pick of the algorithm to perform what needs to happen.
Sure, O(N^2) for very small data sets could outperform an O(N log N) or O(N) approach. But the conflation of big oh notation with actual performance is just another form of sloppiness, as is picking a bad algorithm. In both cases it reflects a lack of understanding of how computers work and how to do things efficiently.
Let's look at what we're trying to do here. We want to find all values that occur more than once in a vector. Using a hash map means you have to:
- Create this data structure on the fly.
- Hash the values.
- Do the insertions/lookup.
You should consider the size of your dataset, you should consider locality/caches etc.
100 entries squared is 10k though. If this is part of a critical code path (say your frontend code does this for every DOM element or every notification it needs to process), this can end up being the bulk of CPU usage. This sort of stuff is why I've seen applications cause the fans to spin up or the browser to crash. I would say bad O(N^2) algorithms for large Ns are pretty common.
I would be surprised if e.g. a C++ implementation of the hash version doesn't win over the double loop for data sets of 100. But maybe that's borderline. For a dataset of 1000 there's no way the double loop is winning.
Other alternatives might be something like an efficient small bloom filter that can be stored in a register, or sorting the array. I mean, at least cut the iterations in half by not iterating over the entire range in the inner loop!
Then you won't have the trouble with:
if (item != item2 && item.foo == item2.foo) {
being a bit less readable than you might think.

The reason I wrote it that way is because the actual common way I find this is in Java, where the team wants to use the `stream` api.
i.e.

    items.stream()
        .filter(item -> items.stream()
            .filter(item2 -> item2 != item)
            .filter(item2 -> Objects.equals(item.foo, item2.foo))
            .findAny().isPresent())
        .toList();
That's not really super readable or universalizable to other languages.

Even more commonly, instead of the `findAny`, I often see the trip-up being someone doing a `contains` or other O(n) operation on `List` objects inside an iteration of that list.
It's this sort of fluent/functional style of programming that often seems to get tripped up on doing n^2 (or more) algorithms.
Disagree
> faster for small n (which is the common case)
How can you know that? And for smaller n, the choice hardly matters.
> can not run out of memory due to the hash table
If n gets large enough that hash table size becomes a problem, n is large enough that you need the hash table or some other structure that reduces the complexity down from O(n^2).
> compiles to less code
This is just nonsense. Any codebase of significant size already has a hashtable setup and unless you are using a language like C++, the code size of using a hash table anywhere is constant. The cost is already paid.
The actual amount of compiled code from the switch is identical.
> does not need item to be hashable
True.
There would be reasons not to choose option 2, but it should be the default until it's been proven to be a problem. And even then, the next step is almost certainly not going to be option 1.
But then this could also be language and application specific. I deal primarily with backend services/Java where memory is plentiful, and allocation is fast. Algorithmic complexity is for my applications very often the better indicator of what good code should be.
Bring in a performance expert and their intuition can quickly identify many performance improvements. The tough part for the business is when and where do you pay for the experience of performant code, and when is good enough to leave alone?
Imagine: function F() { for (i = 0; i < 10; i++) { A(); B(); C(); } }
If we profile this code, we might find out, e.g. B takes the majority of the time--let's say 90%. So you spend hours, days, weeks, making B 2X faster. Great. Now you removed 45% of execution time. But the loop in the outer function F is just a few instructions, it is not "hot"--it won't show up in profiles except for ones that capture stacks.
If you're just stuck in the weeds optimizing hot functions that show up in profiles, it's possible to completely overlook F. That loop might be completely redundant, causing 10X the workload by repeatedly computing A, B, and C, which may not need to be recomputed.
There are bazillions of examples like this. Say you find out that a function is super, super hot. But it's just a simple function. There are calls to it all over the code. You can't make it any faster. Instead you need to figure out how to not call it at all, e.g. by caching or rethinking the whole algorithm.
> How could measuring be a substitute for thinking/analyzing/predicting/forming a plan?
This happens more than you think. Understanding how the system works in enough detail and also at a high level to formulate a plan is in short supply. Jumping in and hacking in things, like a cache or something, is surprisingly common.
I don't think I've ever used a profiler that couldn't report you were in F() here. One that only captures your innermost functions really doesn't seem that useful, for exactly the reasons you give.
IMO, those are (generally) nowhere near as useful as a flame/icicle graph.
Not saying they are never useful; Sometimes people do really dumb things in 1 function. However, the actual performance bottleneck often lives at least a few levels up the stack.
I think I've only been able to get good call stacks when I build everything myself with the right compilation options. This is a big contrast with what I remember working with similar tools under MSFT environments (MS Profiler or vTune).
You can get it to work though but it's a pain.
One of my best examples of this, I had a function reported as still taking 10% of cumulative run time, after I’d tweaked it as much as I could. But I’d set up a benchmark that called a code path a deterministic number of times and this function was getting called twice as much as it should. I found two sibling methods asking the same question and rearranged them so the answer came as an argument and nixed the duplicate call. I reran the benchmark and instead of getting a reduction of 5% (10/2), I got 20%. That was all memory pressure.
The worst memory pressure I ever fixed I saw a 10x improvement by removing one duplicate call. Now, there was a quadratic part of that call but it was a small enough n that I expected 3x and hoped for 4x, and was as shocked as anyone when it went from 30s to 3s with one refactor.
But, in the real world, code bases are massive, and it is hard to predict when worlds collide. Most things do not matter until they do. So measuring is the way to go, I believe.
There’s so much noise at that point that even people who would usually catch problems start to miss them.
There’s usual response to this is, “well you can turn caching off to do profiling” but that’s incorrect because once people know they can get a value from the cache they stop passing it on the stack. So your function that calls A() three times that should have called it 2? You find now that it’s being called ten times.
And the usual response to that is, “well it’s free now so who cares?” Except it’s not free. Every cache miss now either costs you multiple, or much more complex cache bookkeeping which is more overhead, and every hit resets the MRU data on that entry making it more likely that other elements get evicted.
For instance in NodeJS concurrent fetches for the same resource often go into a promise cache, but now the context of the closure for the promise is captured in the cache, and it doesn’t take much to confuse v8 into keeping a bunch of data in scope that’s not actually reachable. I’ve had to fix that a few times. Hundreds of megabytes in one case because it kept an entire request handler in scope.
You may still run into problems where you expect A and B to have a relationship between them that doesn't hold if there's a gap between looking them up, but it's often less likely or severe than if, for instance, half a page has the data in state S and half of it is in state T.
But it's also very easy to mislead yourself that way; many "optimizations" might do much less than you think. So you should avoid implementing more complex or harder-to-understand code just because you think it is faster, but otherwise I'd certainly try to write faster code by default in areas I know well enough to judge.
Profilers fill in gaps in our mental model of code execution but they are not a substitute for it. Computer behavior is largely knowable from first principles, albeit requiring a considerable degree of expertise and detail. For some people in some contexts, the profiler adds little information to what they already understand. I know a few people that do almost all of their optimization work using pencil and paper with great effectiveness and precision. Not something I would recommend for most software engineers but not unreasonable at the limit either.
If you optimize infrequently, or haven't profiled code like this recently, or haven't profiled this specific codebase recently, then your mental model is probably due a refresh.
I've spent a lot of time optimizing video codecs and while it is maybe possible to reason about these without a profiler it's much faster to profile. The profiler isn't going to tell you what you need to do though.
Once you “find” a candidate change you measure it to see if what you did made things worse and you put it back if it did, or maybe you try it in combination with other changes to see if its value is complementary.
But people fuck up all the time reading the initial telemetry, which is often where I come in. I get tired of hearing people say, “we’ve done all we can, look how flat this chart is,” and hand someone my beer. You won’t find all of the good candidates in the percentage of run time list. That’s not all the data that’s there, and not every change that works needs to even be supported by the initial data. It only needs to be supported by the delta afterward.
> This seems like a nonsensical statement to me. How could measuring be a substitute for thinking/analyzing/predicting/forming a plan?
I believe the mantra “intuition doesn’t work, profile your code” is understood to mean “don't rely on your intuition alone; profile your work.” It's a warning that when it comes to performance optimization, intuition is often unreliable, so you need to supplement your mental model with actual data if you want to avoid big mistakes.
I don't know why the author of the blog post is interpreting the mantra literally, as if it claims intuition is obsoleted by profiling.
Profiling to find suboptimal code is perfectly fine. Then you need to figure out how to fix it. Many people don't understand how performance optimization works, so they blindly add caching, improve constant time by invoking more low-level methods, etc. This obviously doesn't work, yet intuitively (to those people, anyway) it should produce good results.
That's why the mantra exists: don't trust your intuition, don't believe it when it says these changes improve performance, instead measure that performance and only apply changes that work. This is also perfectly fine, but this is a double-edged sword, and I've seen people go too far in this direction.
For example, they refuse to do any changes that don't immediately improve performance according to the profiler. If they modify one computation and performance decreases, they abandon this path altogether. They treat optimization as a game with a dense fog of war, and they refuse to apply deductive reasoning and, of course, intuition to apply changes that, according to the profiler at least, are not immediately rewarding.
Adding caches or switching to lower-level calls is definitely something learned, and I wouldn't call it "intuitive".
What I think you are referring to is that sometimes, simply reading and understanding the code can tell you where the problem really is — still, my experience is that you want to measure the before and after to at least identify the general area you should be looking at, and more often than not, I could figure out what needs optimizing and how without having to get detailed profiles.
I did end up being surprised a few times, but it was mostly due to external code like buggy library implementations that didn't do the things they promised (eg. async library really synchronizing everything with a badly placed mutex).
At the same time, it's wrong to focus on a single case or a single profile (executing one code path in one set of circumstances), but what you are describing simply sounds like bad engineering — the fact that you can misuse the tool does not make the tool bad, it's still the engineer wielding it who's at fault.
Caching and lower level calls are generic solutions that work everywhere, but are also generally the last and worst way to optimise (thus why they need such careful analysis since they so often have the opposite effect).
Better is to optimise the algorithms, where actual profiling is a lesser factor. Not a zero factor of course, as a rule of thumb it’s probably still wise to test your improvements, but if you manage to delete an n^2 loop then you really don’t need a profiler to tell you that you’ve made things better.
Eg: indexing or partitioning a database table may appear to make things slower if you don't have both a representative amount of data and representative query patterns when you're measuring the change.
You should still measure your changes, but sometimes you need to be careful about measuring them in the right way, and possibly simulating a future context (eg: more scale) before drawing a conclusion.
Intuition about how the context will evolve and what effect that might have on the tradeoffs of different approaches is helpful
It's really just debugging and troubleshooting, but with a different goal in mind.
I have found that many engineers write complex code faster than simple code.
You're given requirements like: "the program should do W when the user does A, X when the user does B, Y when the user does C, and Z when the user does D." And a naive programmer will happily trot off and write a pile of code for each of those cases, often with a lot of redundancy between them.
It takes more time and judgement to analyze those cases, see what they have in common, and distill a simpler underlying model for the behavior that encompasses all of the requirements.
They are very much not the simplest possible implementation of the algorithm and they run an order of magnitude faster for some algorithms.
Optimization of computer programs is a fundamentally uncomputable affair due to Rice's theorem – in the general case you can't check if two programs are equivalent.
Not that it would be easy.
In my experience most webapps can fix so many low-hanging performance issues by mapping the API in a way that matches how it's used in the client. It can remove so much mapping and combining of data all over.
The fact that sometimes optimization work is tricky or requires some pre-thinking, or is even gasp counter-intuitive is such a hilarious way to say "this is hard". That's just table stakes starting points for so-so-so much work.
Edit: Heck, even deciding whether to prioritize optimization work or feature work is usually a harder problem than the actual optimization work itself.
I've since noticed this many times. Optimizing is like meditation to me. It's very mechanical (measure), with a sprinkle of creative work (once you know what is slow, it's quite obvious how to make it faster, but just challenging enough to be engaging), and it has a very nice tight feedback loop: Something is slow. I make a change. Now it's fast. Next.
Optimizing is my happy place.
Unfortunately I had to give up when I was still 10 times slower than the reference lol
Code cleanup in general is the same for me, but it’s really hard to justify putting much time into that when running your own company solo.
Then there was a system for benchmarking the software as a whole on a wide variety of architectures, including NUMA. With lots of plots and statistics.
Usually you’d eventually end up at a point where the improvements are below the noise floor or they help on some systems and cause regression on others. The rule was usually “no regressions”
VTune for multithreading optimization. Built a fibers and lockfree system for efficient scheduling.
Interesting. For me, it's refactoring.
It seems there's less appreciation for this kind of work these days.
I try to keep a few of these tasks lying around marked “unblocked” in case I need to unwind. It keeps me from doing it right after noticing, which is an extra bonus.
1. Tracking, i.e. navigating through the jungle to hunt, or finding where the bottleneck is.
2. The thrill of the hunt. The joy after finding the hotspot or the bottleneck. It makes your day.
3. The actual hunt, i.e. shooting an arrow/spear, or using a clever way of fixing it.
4. Brag about the hunt or writing a blog post or tech talk on how difficult and awesome the hunt was.
Also optimizing a critical path has multiple takers in the org:
1. Product Managers are happy due to improved user experience.
2. Management is happy as in some cases it actually saves a lot of money.
3. Your boss is happy because he/she gets to score points as "team achievement" when evaluation is around the corner.
4. Engineers get to do nerdy things which otherwise would not find traction in the org.
All in all it's a win-win-win-* situation.
And when you finally "fixed" the issue through optimization, your mind allowed itself to let go of these open loops (borrowing the GTD terminology), leading to relaxation?
fundamentally disagree. First it is the building of a mental model of what happens, a kind of analysis stage, and then comparing it to a mental model of how it should or could work, or producing a more efficient algorithm/way of accomplishing the target task.
When people try to brute-force it (let's try this or this) without having the model in mind, that is frequently a waste, and even when/if it produces some improvement, there is no understanding of or guarantee about which use cases the improvement will cover, whether it would regress some use cases, or whether it will still be there after we push those new features/fixes/etc.
My father had a PhD in Operations Research/Industrial Engineering.
Grock was a famous Swiss clown and once the highest paid entertainer in Europe.
The non-clown word you’re looking for is grok and Robert Heinlein coined it in 1961.
That doesn't seem to be what this post is talking about. It seems to be talking about well-worn areas, trying to improve the state of the art. An example that illustrates it for me is DeepMind's AlphaTensor finding a better way to multiply matrices[0] in 2022. It wasn't a brute-force solution, but the scale of it makes it appear so.
> On 4x4 matrices, AlphaTensor unexpectedly discovered a solution with 47 multiplication steps, an improvement over the 49 required with Strassen’s algorithm of 1969, albeit restricted to mod 2 arithmetic. Similarly, AlphaTensor solved 5x5 matrices with 96 rather than Strassen's 98 steps. Based on the surprising discovery that such improvements exist, other researchers were quickly able to find a similar independent 4x4 algorithm, and separately tweaked Deepmind's 96-step 5x5 algorithm down to 95 steps in mod 2 arithmetic and to 97[24] in normal arithmetic.[25] Some algorithms were completely new: for example, (4, 5, 5) was improved to 76 steps from a baseline of 80 in both normal and mod 2 arithmetic.
This to me shows that direct profiling and observation wouldn't have led to the optimization. Improvements needed a sort-of but not actually brute-force effort of many people trying, but also being clever with their attempts.
[0] https://en.wikipedia.org/wiki/Matrix_multiplication_algorith...
"Even Apple’s LLVM fork lacks scheduling annotations for Apple Silicon. How am I supposed to write efficient code when Apple doesn’t bother to tune their own compiler?"
In addition to its public fork of LLVM, Apple also keeps a very private fork where (one assumes) they keep all the really juicy stuff.
I agree that it is frustrating not to have the data, but don't confuse "I don't have the data" with "no one has the data".
In C++, you can achieve performance using the often-denigrated standard template library, if you would only pay attention to the documented performance requirements for given function calls. But even there it's not the panacea that it could be, because it often provides an amortized cost while handwaving away the access pattern (for example: std::unordered_map is great in algorithmic cost theory and terrible in memory access patterns).
What's the algorithmic cost (big-O) of this function call? What's the memory footprint of that function call? Worse: is that documented big-O estimated across a contiguous dataset, discontiguous paged datasets, or does it have to dereference pointers? What's the network cost of a given call? It's hard to know if you don't know that `write()` to a socket could incur 40 or 400 bytes across a network, and don't know whether that network is 1mbps, 10mbps, 1gbit, or localhost, etc; and with how much latency.
For example, when I hand-rolled some x86_64 SIMD instructions to analyze some DNA, I found the Intel Intrinsics Guide [0] 1000% helpful because many of the instructions detailed what sort of performance to expect on specific processor architectures (or, at least, in general). If you read the Intel Instruction Set Reference [1], a lot of similar performance information can be found. The performance I achieved was approximately the theoretical bottleneck between CPU and main RAM; getting better performance would have required a complete algorithm change which would have been Ph.D paper publishing worthy.
Of course, sometimes even low level hardware can have performance-killing bugs.
[0]: https://www.intel.com/content/www/us/en/docs/intrinsics-guid...
[1]: https://www.intel.com/content/www/us/en/developer/articles/t...
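For flavor, a minimal sketch of the kind of loop involved (not the actual kernel from that project; assumes AVX2 and the GCC/Clang popcount builtin): count how many bytes of a sequence equal a given nucleotide. For long inputs this runs into memory bandwidth rather than ALU limits.

    #include <immintrin.h>   // AVX2 intrinsics; compile with -mavx2
    #include <cstddef>
    #include <cstdint>

    std::size_t count_base(const char* seq, std::size_t n, char base) {
        const __m256i needle = _mm256_set1_epi8(base);
        std::size_t count = 0, i = 0;
        for (; i + 32 <= n; i += 32) {
            __m256i chunk = _mm256_loadu_si256(
                reinterpret_cast<const __m256i*>(seq + i));
            // Per-byte compare yields 0xFF on match; movemask packs one bit per byte.
            std::uint32_t mask = static_cast<std::uint32_t>(
                _mm256_movemask_epi8(_mm256_cmpeq_epi8(chunk, needle)));
            count += __builtin_popcount(mask);
        }
        for (; i < n; ++i)             // scalar tail for the last < 32 bytes
            count += (seq[i] == base);
        return count;
    }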
For example: do you compress data sent over the network? What level of compression do you use? Changing the data format can affect the optimal compression level, and vice versa, using a higher compression level means you can keep the underlying data simpler. For example, you can replace deserialization with zero-cost casts. But that might mean you spend more memory. Do you have that much memory? If you do, would giving that memory to the database for use as cache be better? And so on.
The individual choices are simple, but they compound and affect the overall performance in unpredictable ways. The only way to be sure you aren't missing something obvious is to check all, or at least most combinations.
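On the "zero-cost casts" point, a minimal sketch (the `Record` layout is made up for illustration): if the wire format is a fixed, trivially-copyable layout, a record can be read straight out of the buffer instead of going through a decode step. Real formats also have to pin down endianness and padding.

    #include <cstdint>
    #include <cstring>

    struct Record {              // fixed wire layout: no pointers, nothing to parse
        std::uint64_t id;
        std::int32_t  score;
        std::int32_t  flags;
    };
    static_assert(sizeof(Record) == 16, "layout must match the wire format");

    Record read_record(const std::uint8_t* buf) {
        Record r;
        std::memcpy(&r, buf, sizeof r);   // typically compiles to a plain load
        return r;
    }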
Very likely we agree on many points but perhaps disagree at the outcome :)
So to expand & reply...
> do you compress data sent over the network?
Depends on size of data, complexity of data, and performance of your system, and performance of the receiving system.
The choice you make might be correct for one set of variables but there might be a better choice for a different set of variables. If the receiving system is overloaded, then decompression might add to that. If the sending system is overloaded, then compressing might add to that.
But compression over the network is often a moot point, since most networks should be encrypted and compression can be used to defeat encryption (see compression-oracle attacks such as BREACH [0] and CRIME [1]).
[0]: https://en.wikipedia.org/wiki/BREACH_(security_exploit)
[1]: https://en.wikipedia.org/wiki/CRIME
> using a higher compression level means you can keep the underlying data simpler
Why? Moreover, I would never rely on data compressing well as a reason to keep the underlying data simpler (but perhaps larger).
> would giving that memory to the database for use as cache be better?
As in, shared-memory? Or do you mean to implement your own database?
> The individual choices are simple, but they compound and affect the overall performance in unpredictable ways.
I disagree about unpredictability. I've rarely found some effect to be unpredictable. Generally when something is unpredictable it actually represented something I didn't understand.
> The only way to be sure you aren't missing something obvious is to check all, or at least most combinations.
Would you check all combinations of inputs for a given continuous function? No, of course not, that would be effectively impossible.
The engineer should identify the edge cases and the corner cases and ensure that the product works within acceptable behavior for the systems being targeted. There's a saying: "you only care about performance of the things you benchmark, and be careful what you benchmark". So if you don't have any performance benchmarks then you don't care about performance. If you do have performance benchmarks, then use them in the systems you intend to deploy to.
Ultimately, it takes a seasoned engineer to understand a whole system and especially to understand what a given change might incur in terms of performance to that whole system. If a seasoned engineer can do it then AI is just around the corner to do it automatically. That's not a brute-force task. We just don't have a whole lot of seasoned engineers; but we do have a ton of engineers with deep domain-specific knowledge though, and those domain-specific knowledge engineers often would brute-force some aspect they don't understand. They can usually fix it in a patch, after all.
Performance is a trade-off between various aspects of various systems. That doesn't make it a brute-force task. The trade-off decisions can be made with the right data. The right data can be found either: via brute force with benchmarks, or with proper documentation about the system. The system might tell you about its performance if you ask it (eg, the system is self-documenting), but you also have to beware that the performance can change -- and neither benchmark (brute force) nor documentation will tell you unless you look for it (re-run benchmarks or re-query documentation).
For them, the systematic way to optimize goes: profile your code, and then apply domain knowledge of the product to optimize the hotspots with these common techniques:
Don't do repeated work. (If you have an expensive invariant calculation in a loop or function call, move it out, to a higher level of the program where it can be done once.)
Save your work. (Caching and memoization.)
Do less work. (Alter the product requirements to use less computationally-intensive techniques in cases where users won't notice the difference.)
Do work when the user isn't looking. (Move computations to background tasks, apply concurrency, perform async requests.)
If all else fails, call in a performance engineer like the author to micro-optimize your building blocks.
You can often get speedups of 3-6 orders of magnitude by applying these techniques, simply because the original code is so brain-dead. Performance engineers like the author tend to work on code that has already been tightly optimized, and so there is less low-hanging fruit to pick.
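A minimal sketch of the "save your work" bullet (single-threaded; `expensive_lookup` is a stand-in for whatever hot, repeatable work the profile pointed at): memoize a pure function keyed by its argument.

    #include <string>
    #include <unordered_map>

    // Stand-in for the expensive, repeatable computation.
    int expensive_lookup(const std::string& key) {
        int h = 0;
        for (char c : key) h = h * 31 + c;   // pretend this were actually costly
        return h;
    }

    int cached_lookup(const std::string& key) {
        static std::unordered_map<std::string, int> cache;   // not thread-safe
        auto it = cache.find(key);
        if (it != cache.end()) return it->second;            // hit: skip the work
        int value = expensive_lookup(key);
        return cache.emplace(key, value).first->second;
    }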
I'm not so sure. How many cycles would you expect this code to take?
    mov dword [rsi], eax
    add dword [rsi], 5
    mov ebx, dword [rsi]
According to Agner Fog, these 3 instructions have a latency of 15 cycles on an AMD Zen 1 processor. On Zen 2, the latency is 2 cycles. This is because the CPU was given the ability to assign a register to `dword [rsi]`, overcoming the limit of 16 registers.

This optimization is subject to problems: pointer aliasing will obviously lead the CPU to make the wrong assumption at times, causing a situation not entirely unlike a branch mispredict.
There are constraints imposed by the micro-architecture for this feature. For you and I, a big one is it only works with general purpose registers. But is there a reason it couldn't or shouldn't be done for vectors? It seems like a micro-arch issue to me. Perhaps in a few years or in a lot of years, we'll have a CPU that can do this optimization for vectors.
I've known that similar optimizations exist, namely store-to-load forwarding, but I didn't know that AMD has experimented with mapping in-flight writes straight into the register file. Sounds like they've abandoned this approach, though, and Zen 3 doesn't feature this, supposedly because it's expensive to implement. So for all intents and purposes, it doesn't exist anymore, and it probably won't be brought back in the same fashion.
I do still think this is something better solved by ISA changes. Doing this on the uarch level will either be flaky or more costly. It is absolutely possible, but only with tradeoffs that may not be acceptable. The APX extension doubles the number of GPRs and improves orthogonality, so there's at least work in that direction on the ISA level, and I think that's what we're realistically going to use soon.
It’s a process of continuous uncovering, and unless you have visibility across the whole stack — from kernel to cluster — you’ll spend all your time slicing through surface layers with lots of tears being shed.
Fortunately, there are software automation solutions to this.
Seeing as I fall under "any developer", I'm going to bite the bullet and ask:

How are they supposed to be equivalent? One is a HashSet, the other is a boolean? And what is `c`?
After narrowing the smallest leaf, think about another branch and if some optimizations studied could lead to nice results there too.
With time they learn which branch is more promising and which optimizations are good beforehand with their problem.
I guess this could be called branch and bound with memoization instead of brute force aha.
Note: we write code in cuda
Not so much as collateral damage when different optimizations negatively interact though.
Even when optimization is not further possible[0] a careful avoidance of distinct pessimizations themselves can lead toward the same kind of beneficial outcomes.
[0] Or at the other extreme, not the least bit possible.
For this we need programmable languages in which users can code their own optimization cases.
There is such a thing as "I would like a certain optimization, that should not be upstreamed into the compiler".
That's exactly what's in this article.
The construct:
HashSet::from([a, b]).contains(&c);
has some definite AST pattern (or can be made to have one with a sufficiently pinned-down language). We can think about adding a pattern match for that AST pattern to the compiler, along with a right hand side to rewrite it into.
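A toy sketch of what such a user-written pattern plus right-hand side could look like (invented mini-IR, not any real compiler's API): match contains(from(a, b), c) and rewrite it to or(eq(c, a), eq(c, b)).

    #include <memory>
    #include <string>
    #include <vector>

    // Toy expression node: a head symbol plus child expressions.
    struct Expr {
        std::string head;                        // e.g. "contains", "from", "or", "eq", "a"
        std::vector<std::shared_ptr<Expr>> args;
    };
    using ExprPtr = std::shared_ptr<Expr>;

    ExprPtr node(std::string head, std::vector<ExprPtr> args = {}) {
        return std::make_shared<Expr>(Expr{std::move(head), std::move(args)});
    }

    // User-supplied rule: contains(from(a, b), c)  ==>  or(eq(c, a), eq(c, b))
    ExprPtr rewrite_contains_of_pair(const ExprPtr& e) {
        if (e->head == "contains" && e->args.size() == 2 &&
            e->args[0]->head == "from" && e->args[0]->args.size() == 2) {
            const ExprPtr& a = e->args[0]->args[0];
            const ExprPtr& b = e->args[0]->args[1];
            const ExprPtr& c = e->args[1];
            return node("or", {node("eq", {c, a}), node("eq", {c, b})});
        }
        return e;   // pattern not matched: leave the expression unchanged
    }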