The argument is wrong because the best optimizer is the stuff between your ears, not a compiler. C, and to a lesser extent C++, have always won because they let you spend those neurons thinking about the underlying machine through a thin(ner) abstraction layer, instead of wrapping it in layers of conceptually elegant abstractions and hoping a sufficiently smart compiler can dissect them.
But yes, humans can still usually optimize better.
The other thing you see with high-level languages is death by a thousand cuts from poor cache locality and lost instruction-level parallelism. It's very, very hard to write a VM, with a JIT and a GC and a heap allocator all churning away, that still gives you good locality or ILP. A deeper problem is that yes, you can optimize for those things, even at runtime, but that not only adds work, it adds work that itself implies flushing the caches.
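A minimal sketch of that cost in plain C (the sizes and layout are mine, purely illustrative): sum the same values once from a contiguous array and once by chasing per-element heap nodes, which is roughly how boxed objects in a GC'd VM end up laid out. The array loop streams through memory and lets the CPU overlap loads (ILP); the list loop serializes on each next pointer.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 10000000L

    struct node { long value; struct node *next; };

    int main(void) {
        /* Contiguous layout: one big array, predictable strides. */
        long *arr = malloc(N * sizeof *arr);
        /* Scattered layout: one heap allocation per element.
           Error checks elided for brevity; this is a throwaway benchmark. */
        struct node *head = NULL;
        for (long i = 0; i < N; i++) {
            arr[i] = i;
            struct node *n = malloc(sizeof *n);
            n->value = i;
            n->next = head;
            head = n;
        }

        clock_t t0 = clock();
        long sum_a = 0;
        for (long i = 0; i < N; i++) sum_a += arr[i];

        clock_t t1 = clock();
        long sum_l = 0;
        for (struct node *n = head; n; n = n->next) sum_l += n->value;
        clock_t t2 = clock();

        printf("array: %ld in %.3fs, list: %ld in %.3fs\n",
               sum_a, (double)(t1 - t0) / CLOCKS_PER_SEC,
               sum_l, (double)(t2 - t1) / CLOCKS_PER_SEC);
        return 0;
    }

Note that freshly malloc'd nodes still sit near each other, so this understates the effect; shuffle the links, or let a real allocator fragment over time, and the gap grows much larger.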
The latter point (cache locality and ILP) is why some C/C++/Rust code is faster when compiled to optimize for space instead of for "speed": smaller code is more likely to stay resident in the instruction cache.
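Easy to try on the sketch above, or on any hot loop of your own (hot.c here is a hypothetical file name; which flag wins depends on the workload and your instruction-cache size):

    cc -O3 hot.c -o hot_speed   # optimize for "speed"
    cc -Os hot.c -o hot_size    # optimize for size
    time ./hot_speed && time ./hot_size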
All that being said, HLLs usually offer superior programmer productivity, especially for novice-to-mid-career devs who aren't quite up to things like the Rust borrow checker. Machine time has to be weighed against human time, and the latter is usually more expensive (but not always at scale!).
Also, consider reading the linked post on how assembly instructions are no longer a good approximation of how your computer works: https://queue.acm.org/detail.cfm?id=3212479. In general, writing in a language that is not close to the hardware lets the compiler adapt when the hardware changes; Futhark, for example, can execute using either SIMD or GPUs precisely because it isn't over-determined by the source language. C ties processors to the model of the PDP-11, which hasn't been manufactured for 30 years.
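One concrete instance of that over-determination, sketched in C: pointer aliasing is part of the source semantics, so the compiler must assume any two pointers might overlap, and vectorizing even a trivial loop takes either a programmer promise (restrict) or compiler heroics.

    #include <stddef.h>

    /* The compiler must assume dst and src may overlap, so a naive
       translation is a cautious scalar loop. (In practice GCC/Clang
       often emit a runtime overlap check and vectorize anyway, which
       is exactly the kind of heroics the linked article describes.) */
    void scale(float *dst, const float *src, size_t n, float k) {
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i] * k;
    }

    /* restrict promises no overlap; at -O2/-O3 this version is
       straightforwardly vectorized with SIMD. */
    void scale_restrict(float *restrict dst, const float *restrict src,
                        size_t n, float k) {
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i] * k;
    }

A language like Futhark sidesteps the question entirely: a map has no aliasing story to disprove, so the compiler is free to pick SIMD lanes, GPU threads, or whatever the target offers.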