Sometimes because C is the lingua franca of low-level programming.
Some noalias optimizations Rust relied on didn't work in LLVM, because nobody had bothered to use them from C before.
This goes even further, down to the hardware. SIMD search over C-like null-terminated strings is faster than over a saner (pointer + len) string view. So it can be faster to append a null to the end of a string view than to invoke the SIMD routine on the string slice directly.
C standards with the 'restrict' keyword to allow aliasing optimisations have existed for longer than Rust. LLVM just never bothered, despite the early claims that the intermediate language and a "more modern" compiler would enable more optimisation potential. LLVM is still slower than GCC, even in plain C.
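For reference, a minimal sketch of the kind of optimisation 'restrict' permits (an illustrative example, not from any particular codebase):

    /* Without restrict, a and b might alias, so the compiler must reload *b
       after the first store through a. With restrict it may assume they never
       alias, load *b once, and fold the two additions together. */
    void add_twice(int *restrict a, const int *restrict b) {
        *a += *b;
        *a += *b;
    }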
Where is the problem with quickly null-terminating a pointer+len string and then using the fast null-terminated SIMD search? It should only be one move slower than the native C. "Space after the string" shouldn't be a problem, since in higher-level languages the allocator can make enough room, it is higher-level after all (and you need null termination for syscalls anyway).
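Roughly what that workaround would look like (a hedged sketch; find_char is a hypothetical helper, and it assumes the runtime guarantees one writable spare byte after the slice):

    #include <string.h>
    #include <stddef.h>

    const char *find_char(char *buf, size_t len, char c) {
        char saved = buf[len];    /* assumed legal: one spare byte after the slice */
        buf[len] = '\0';          /* temporarily null-terminate */
        const char *hit = strchr(buf, c);   /* fast null-terminated search */
        buf[len] = saved;         /* restore the original byte */
        return hit;
    }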
It is always the same story. Compiler and programming language people make claims about future optimisations but never ever follow through. It's always just theory, never implementation.
They did bother but no one seemed to be using it, so a bug snuck in. It took Rust exercising that corner of LLVM to find the bug.
> LLVM is still slower than GCC, even in plain C.
Citation needed. Last time I checked LLVM beat GCC on -O3. Unless you mean compilation performance.
> Where is the problem with quickly null-terminating a pointer+len string and then using the fast null-terminated SIMD search?
Why should two nearly identical operations have such wildly different performance? And why isn't the safer/saner interface more performant?
My point is that if compilers and hardware are optimizing for C, it's no surprise no one can approach its speed.
It's like asking why when everyone is optimizing for Chrome no other web browser can approach its speed.
> Citation needed. Last time I checked LLVM beat GCC on -O3.
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
https://www.phoronix.com/review/gcc-clang-eoy2023/8
There is tons more, just ask your favourite internet search engine.
> Why should two nearly identical operations have such wildly different performance? And why isn't the safer/saner interface more performant?
Because in hardware, checking a zero flag on a register is very cheap: a big NOR over all bits of a sub-register, then a big OR over all those per-lane flags for the whole SIMD register. Very short path, very few transistors.
Checking a length is expensive: you increment a length register through an addition, including a carry, which is a long path. Then you compare with another register to find out whether you are at the last iteration. The compare is usually also expensive, since internally it is a subtract with carry, even though you could get away with xor + zero flag. But you can't get around the length-register addition.
There is room for an optimisation, because you have to increment the address you fetch from in any case. But in most modern CPUs that happens internally in special pointer registers; if you do additional arithmetic on those, it gets expensive again. x86 actually has more complex addressing modes that would handle this, but compilers never really used them, so they were dropped for SIMD.
for (int i = 0; i < len; i++)
/* do something */
is going to compile down to something like:
label: ; blah blah blah
INC r0
CMP r0, r1
JNZ label
(Okay, to be fair, compilers tend to change the loop bounds so that the final comparison is a comparison against 0. This reduces the register usage of the loop, but otherwise doesn't change the analysis.) Taking into account micro-op fusion, there are at best two actual hardware ops in that assembly pattern, since the comparison and branch will be fused, and the increment(/decrement) is also a decent candidate for fusion. The branch can also be predicted with a high degree of fidelity -- an advanced processor should be able to predict it with exactly 100% accuracy.

A null-terminated loop instead compiles down to:
label: ; blah blah blah
LD r0, r1
INC r1
CMP r0, 0
JNZ label
You still have the same basic pattern at the end of the loop, except that the INC (which might not previously have been fused) is no longer on the critical path; the critical path now contains a load instead. That load is a much slower instruction than a test-against-0 or a comparison: no load completes in one cycle, whereas both tests (bitwise and) and comparisons (subtraction) complete in one cycle. Splitting hairs by saying that the test-against-0 is going to be less expensive than a subtract-and-check-carry, and that therefore null-terminated is faster, ignores the fact that they execute in the same time, and brushes aside the extra load -- which is definitely slower -- that exists in the null-terminated version.

It gets worse, though. When you have a pointer+length combo, the length of the array, and the legal dereferenceability, is now known to the compiler in a symbolic form that is easy to compute with. That makes it possible to do things like unroll loops, vectorize loops, rewrite loop bounds in better forms, etc. When it's null-terminated, the length comes from reading memory, and the amount of memory you can legally read is not known ahead of time. Unrolling the loop and similar transformations are no longer really possible, since you don't know how many iterations the loop will take, and the extra loop exits you'd have to generate tend not to be tolerable in optimization.
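To make that concrete, here is a minimal illustrative pair (hypothetical helper names; assuming a compiler like GCC or Clang at -O2/-O3):

    #include <stddef.h>

    /* Pointer + length: the trip count is the symbolic value n, so the
       compiler is free to unroll or vectorize without inventing extra exits. */
    size_t count_spaces_len(const char *s, size_t n) {
        size_t count = 0;
        for (size_t i = 0; i < n; i++)
            count += (s[i] == ' ');
        return count;
    }

    /* Null-terminated: the trip count is only discovered by loading memory,
       and the compiler can't prove how far past the terminator it may read,
       which blocks straightforward unrolling/vectorization. */
    size_t count_spaces_nul(const char *s) {
        size_t count = 0;
        for (; *s != '\0'; s++)
            count += (*s == ' ');
        return count;
    }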
And loads are amortized by tons of speculative prefetching. Which unfortunately isn't (yet) guided by hints about how much to prefetch.
Oh, and since we are talking about searching strings, there is no extra load. This isn't strlen(), this is strchr() or strstr() we are talking about. You always have to load the string into some register anyway.
The critical (data) path of the null-terminated loop, however, does not include the load -- the actually loaded value is not a loop-carried dependency in your example. The re-steering of the branch at the end of the loop might just happen much later.
Vectorization with null-terminated strings is possible and done, but requires alignment checking, which adds some cost.
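For the curious, the usual alignment trick looks roughly like this -- a sketch with SSE2 intrinsics and GCC/Clang builtins, relying on the fact that an aligned 16-byte load never crosses a page boundary, so it cannot fault even though it may read a few bytes outside the string:

    #include <emmintrin.h>
    #include <stdint.h>
    #include <stddef.h>

    size_t simd_strlen(const char *s) {
        const __m128i zero = _mm_setzero_si128();
        /* Round the start address down to a 16-byte boundary. */
        const char *p = (const char *)((uintptr_t)s & ~(uintptr_t)15);
        unsigned skip = (unsigned)(s - p);
        __m128i chunk = _mm_load_si128((const __m128i *)p);
        /* One mask bit per byte that equals 0; drop bits before the string start. */
        unsigned mask = (unsigned)_mm_movemask_epi8(_mm_cmpeq_epi8(chunk, zero)) >> skip;
        if (mask)
            return (size_t)__builtin_ctz(mask);
        for (;;) {
            p += 16;
            chunk = _mm_load_si128((const __m128i *)p);
            mask = (unsigned)_mm_movemask_epi8(_mm_cmpeq_epi8(chunk, zero));
            if (mask)
                return (size_t)(p - s) + __builtin_ctz(mask);
        }
    }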
I don't think that it's convincing evidence that clang is slower than GCC.
I was thinking about that one and meant to post it, but got the wrong search result:
https://www.phoronix.com/review/clang20-gcc15-amd-znver5/5
But they swapped places a few times over the last years, so basically there doesn't seem to be a fundamental difference in performance anymore.
Then Rust made it trivial to use correctly, and found that what LLVM did have was quite buggy, because it hadn’t been exercised. These were bugs that could in theory have been found in C/C++, but in practice never would have been. And so for quite some years Rust would toggle the noalias optimisations on and off, depending on whether there were any known bad-codegen issues at the time. I think the last toggle was a few years ago by now, this stuff is finally actually stable and appreciated.
My recollection is that the figures for the Rust compiler are in the vicinity of a 5% improvement from emitting noalias in the LLVM IR.
And it’s likely that quite a bit more is possible.
Eventually that lets you optimize things more at the compiler level because you have the extra time to maintain it as well.
My primary argument is that a more restrictive language, if designed correctly, yields programs that are resistant to bit-rot over time. You can take a program from 15 years ago and compile it. It'll run faster due to optimizations and hardware improvements. And the program will still be the "right thing to do."
In contrast, if you take older C code, you'd shudder and immediately begin identifying things left and right which need to be redone, because the modern machine is different from the aged machine. Part of the efficiency comes from the fact that we write the C code so that it has affordances for efficient execution on current hardware. But when the hardware changes, the code needs amendment to keep up. Meanwhile, your SQL statement stays the same because it's written declaratively.
I must have imagined everything I've read about query planners.
And here we are, 20 years later.
Profile-guided optimization especially was hailed as the next big thing, and JIT-ed languages were supposed to be the fast ones, because after some warmup time they would adapt to your data and hardware. Java is still dog-slow...
And in particular that garbage collection can be faster than malloc/free.
This is technically true in the case you have a naive C program. There are tons of C programs out there, and some are slow. (But there are more slow Java programs out there)
But it's 100% clear now that GC has a significant cost, and while you can optimize a bad C program, it can be very hard in many cases to optimize the cost of GC away.
C offers more freedom and thus more footguns, but the freedom also means you don't really have a performance cap.
And I prefer GC -- most programs I write will have GC; I just recognize the performance cap, and design accordingly. (In fact I wrote a GC for a subset of C++ for https://oils.pub -- I actually think this is a language with nicer performance properties than Java, mainly due to being AOT)
I will very easily write a faster parallelizable program in Java before I get the C one even remotely correct, and then we haven't even taken into account every future maintenance cost. Also, the way C devs often hide the cost is simply.. less efficient code like copying stuff right and left, etc, which can actually be more "bravely" avoided in managed languages.
The reality is quite the opposite. Java's boxed types, lack of control over memory layout (leading to page flapping and false sharing), lack of fast synchronisation primitives, and lack of a compiler capable of hinted or automatic loop parallelization make Java totally unsuitable here. The crown goes to FORTRAN in that regard, with C and C++ taking second and third place.
Of course theoretically a magic fairy-tale Java compiler could look at the code, recognize all the aforementioned problems and fix them. Of course the language would allow for this to happen totally without changing semantics. But no Java compiler even comes close...
Even better in Rust. You get great memory safety guarantees from the borrow checker helping guide your parallelization, while still being fast and GCless.
It's the best of both worlds.
>Also, the way C devs often hide the cost is simply.. less efficient code like copying stuff right and left, etc, which can actually be more "bravely" avoided in managed languages.
I'd say Rust is again the easiest way to avoid unnecessary copies without sacrificing safety.
For certain other kinds of concurrent algorithms, you are in a world of pain, and Rust's borrow checker simply refuses to compile a large set of otherwise correct programs.
To this day, Java applications are the slowest and most memory hungry long-running server applications by far. Only some scripting languages are worse, but only by performance, almost never by memory.
https://kstefanj.github.io/2021/11/24/gc-progress-8-17.html
Note: this is comparing to Java 17 to Java 8 (released in 2014), and we are now at Java 24!
There are no serious Java performance comparisons on stuff like transaction throughput, which is what the "long running server application" use case should aim for. You yourself claimed that enterprise data processing is what you are talking about. There are benchmarks that, even with long warmup times, make Java come out far worse than all the compiled languages (even Golang nowadays). https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
Imho, people stopped benchmarking Java ages ago, since it is known to be hopeless.
If you exclude stuff like SSE intrinics, it's usually 50% slower.
What about the median performance benchmark tasks?
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
GC usually also comes in "copying" variants, which can have advantages like reducing memory fragmentation.
Memory fragmentation can more easily be reduced by pooled allocators. For long-running server processes, it can also be possible to just kill and restart the worker processes/threads from time to time. And I've never seen memory fragmentation in manually managed languages lead to more memory consumption or worse performance than in Java.
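A minimal sketch of what such a pool looks like in C (hypothetical names, fixed object size, no thread safety): every object comes from one contiguous slab, so alloc/free cycles can't fragment the general heap:

    #include <stddef.h>
    #include <stdlib.h>

    typedef union slot {
        union slot *next;   /* free-list link while the slot is unused */
        char payload[64];   /* object storage (example size) */
    } slot_t;

    typedef struct pool {
        slot_t *slab;
        slot_t *free_list;
    } pool_t;

    static int pool_init(pool_t *p, size_t nslots) {
        p->slab = malloc(nslots * sizeof(slot_t));
        if (!p->slab) return -1;
        p->free_list = NULL;
        for (size_t i = 0; i < nslots; i++) {   /* thread all slots onto the free list */
            p->slab[i].next = p->free_list;
            p->free_list = &p->slab[i];
        }
        return 0;
    }

    static void *pool_alloc(pool_t *p) {        /* O(1): pop from the free list */
        slot_t *s = p->free_list;
        if (s) p->free_list = s->next;
        return s;
    }

    static void pool_free(pool_t *p, void *obj) {   /* O(1): push back onto the free list */
        slot_t *s = obj;
        s->next = p->free_list;
        p->free_list = s;
    }

    static void pool_destroy(pool_t *p) {
        free(p->slab);
    }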
Here is your transaction throughput. But I'm sure AWS's whole platform team, Alibaba, third of Google, etc don't know what they are doing.
No idea how you got that impression. Here is an incomplete set of important changes:
- major developments on Graal and native code compilation for applications that prioritize startup time and memory use (I think Graal was started before 2011, but has changed a huge amount since then).
- the concurrent G1 garbage collector became the default, making most applications have much lower pause times (I believe it was released in Java 7 in 2011, but was not mature, only became a default in Java 9, and has regular improvements since then).
- the Shenandoah and ZGC concurrent garbage collectors were not started until well after 2011
- Project Lilliput, which reduces the header size of Java objects from 16 to 8 bytes, is landing in Java 25, which comes out this fall
- specializing strings to use 1 byte/character for ascii strings, and 2 bytes for non-ascii strings
- virtual threads shipped in Java 21
- work on a vectorization API
- plenty of work on reducing the startup time of the JDK and modularizing it
One where I'm not sure about the timeline:
- https://docs.oracle.com/javase/8/docs/technotes/guides/vm/cl... class data sharing to lower overhead of running multiple JVMs on a single node
P.S. I'm not writing this to comment on the parent's claim that Java is faster in some contexts.
> Yes, Java can be faster in certain contexts: the kind of contexts Java is usually employed in addressing these days. Stuff like enterprise backend systems and big data processing, where startup time or precise memory control are not important.
The only thing that might be relevant in your list is the vectorization API.
The rest of your list is just a bunch of band-aids for the worst problems that Java specifically has and that other languages lack. That is at best playing catch-up with everything else, since in C there is no GC overhead, almost no startup time, no per-object overhead, no GC-induced memory hogging, no GC pauses. Yes, I agree that things did happen. But nothing in there makes Java any more desirable for the relevant workloads, it just makes it less undesirable.
Are you even aware of the word "tradeoff"?
I am not a fan of Java, but the slow part is just plain wrong. On properly sized hardware Java is blisteringly fast. Did it exceed the speed of C as hyped? No, in most cases it falls solidly short, but I'm sure there are isolated cases where it does. Regardless, 2-3x slower than C using a JIT is a serious feat of engineering.
All GC languages require more memory, and Java is even worse given lack of value types, but again, memory/hardware is cheap compared to programmer time and if you have enough of it, it generally isn't a problem.
All this does mean you will need a bit more hardware than a C program, so perhaps that is the posters critique, and that is true, but compared to other high level languages, it does great in the speed dept, and is often only a bit higher in the memory dept.
Your second paragraph is just thoroughly uninformed, though. And unused RAM is useless, so in a server application not making use of it is just dumb. Maybe think about why Java is used extensively by pretty much every FAANG company. Also, Java has improved steadily after Oracle's takeover, and is in fact the open-source reference implementation. Again, uninformed bullshit.
Java programmers are a dime a dozen. Universities churn out heaps of Java code slaves, so of course every big company will use Java for their shovelware.
On the other hand, the results of this true fact are already seen in practice in real systems. Rust's iterator adapters, for example, allow high level "functional style" with closures to be compiled to code that is exactly the same as hand rolled C loops without unamortized bounds checks - despite guaranteeing the absence of out of bound accesses even in the face of programmer error - because the constraints of these interfaces and the design of the language enable tons of optimizations by LLVM. This has been true since Rust 1.0 more than a decade ago, it's not a promise of the future.
https://www.w3.org/DesignIssues/Principles.html#PLP
By restricting the power of a language, you enable it to be used in more ways than just “execute it and see what the result is”.
But if the language itself strongly obfuscates (or makes it difficult or impossible to express/guarantee) reliable relationships, you miss out on all the desired, and leveragable, certainty that you still intend.
That it's supposedly easier might be true. But that usually doesn't lead to someone really doing it.
It is the most easily optimized language I know. If you write a working program in it, you know you're doing it in the most optimal way for that language already.
When we're using Turing-complete languages for pretty much everything, constraints are pretty much the only thing that semantically differentiates the code we write in them at all. To me, this is basically the concept people are trying to convey when they argue for picking "the right tool for the right job". At the end of the day, what makes a language useful is just as much defined by what you _can't_ say as by what you can.
One example would be integers. There are bigint-by-default languages like Haskell, where the type you usually use is an arbitrary-size bigint. But usually you don't need that, and you know that something like a 32-bit integer is sufficient. Often you get an int32 type that does that for you. But then the question becomes one of overflow behaviour, signedness, the existence of -0, the behaviour of -INT_MAX, and so on. Even in C, you are in undefined-behaviour-launch-the-rockets territory very quickly. And there are usually no "screw it, I don't care, give me whatever the CPU does" types. You don't get to choose your overflow/underflow behaviour. Often there are no bounded types either (unsigned integer day from 1 to 365). And if those types are available in libraries, the compiler won't be able to optimize them.
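A concrete illustration of how quickly C reaches undefined behaviour here (illustrative only; function names are made up):

    #include <limits.h>
    #include <stdio.h>

    /* Signed overflow is undefined behaviour: the compiler may assume
       x + 1 > x always holds and fold this to "return 1", even for INT_MAX. */
    int always_true(int x) {
        return x + 1 > x;
    }

    /* Unsigned arithmetic wraps modulo 2^N by definition, so this is defined. */
    unsigned wraps(unsigned x) {
        return x + 1u;   /* UINT_MAX + 1u == 0 */
    }

    int main(void) {
        printf("%d\n", always_true(INT_MAX));   /* typically prints 1 once optimized */
        printf("%u\n", wraps(UINT_MAX));        /* prints 0 */
        return 0;
    }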
There are tons more examples like this. It's always a compromise between the expressiveness that we want and that would be great, and the complexity that would make an implementation non-viable. So you get as much expressiveness as the language designer thinks he can maybe cram into his compiler. But it's always too much to be really optimized all the way through. And it's always too little for what one would actually need to be really exact.
As ever, hardware is an underappreciated factor. We have make-C-go-fast boxes today. That's what the engineering effort has been thrown towards. They could provide other capabilities that prefer different models of computing, but they don't because the path dependency effect is very strong. The major exceptions come from including alternate modalities like GPU, hardware DSP and FPGA in this picture, where the basic aim of the programming model is different.
That is a very common misconception. There have been numerous attempts at architectures that cater to higher-level languages. Java-bytecode-interpreting CPUs have been tried and were slower than contemporary "normal" CPUs at executing bytecode. Phones and smartphones were supposed to be a hot market for those; it didn't fly, and native bytecode execution is dead nowadays. Object orientation in CPUs has been tried, as in Intel's iAPX432. Type-tagged pointers and typed memory have been tried. HLLCAs were all the rage for a while ( https://en.wikipedia.org/wiki/High-level_language_computer_a... ). Early CISC CPUs had linked lists as a fundamental data type. Lisp machines did a ton of stuff in hardware: GC, types, tagging, polymorphism. None of it worked out; more primitive hardware with more complex interpreters and compilers always won. Not because C was all the rage at the time, but because high-level features in hardware are slow and inflexible.
What came of it was the realization that a CPU must be fast and flexible, not featureful. That's why we got RISC. That's why CISC processors like x86 internally translate down to RISC-like microcode. The only thing that added a little more complexity were SIMD and streaming architectures, but only in the sense that you could do more of the same in parallel. Not that HLL constructs were directly implemented into a CPU.
Also, C is in no way special, not even particularly low-level.
Most useful Python programs could be hard-compiled to machine code, because they're not doing anything weird enough to require a JIT compiler. But some low-level library might be, which breaks hard compilation.
("But then my domain-specific language / interface to another language / code generator / tracing package / debugger won't work!")
* The type of arguments passed to the function. Everything might point to the function always being passed values of a particular type, but you still can't be sure.
* If you call another function, you can't be sure that you'll actually call the function you expect, because any function or global variable may be overwritten at runtime.
* There are many places in the language that might trigger an invisible function call (method overloads, garbage collection methods, etc), and any of those invisible calls could potentially monkey patch something
All of these complexities, and more, are why JIT compilers like PyPy exist. Instead of optimizing at compile time, they observe things at runtime, optimize optimistically assuming that the program won't try anything funny, but stays ready to roll back the optimizations in case the program does try something funny.
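Expressed in C, the optimistic specialization a JIT performs amounts to a guarded fast path with a generic fallback -- a rough conceptual sketch, not how any particular JIT is implemented:

    #include <stdio.h>

    typedef double (*binop_t)(double, double);

    static double add_impl(double a, double b) { return a + b; }

    /* The "global binding" a dynamic language could rebind (monkey patch) at runtime. */
    static binop_t current_op = add_impl;

    static double call_op(double a, double b) {
        if (current_op == add_impl)   /* guard: the assumption still holds */
            return a + b;             /* specialized, inlined fast path */
        return current_op(a, b);      /* "deoptimized" generic path */
    }

    int main(void) {
        printf("%f\n", call_op(1.5, 2.5));   /* fast path: prints 4.000000 */
        return 0;
    }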
Because that seems to require the compiler to be an AGI. It's very hard to explain why it's hard to make a truly intelligent program, but you just have to trust that tons of people have tried for entire careers to solve this and so far none have succeeded.
But easier to optimize doesn't necessarily mean that they can be optimized to be more performant than less constrained languages.