What would be the performance improvement in average java services?
Are there specific types of applications that would benefit a lot?
Does this make String.intern() more valuable? String caches?
It would be faster but not as blindingly fast. Combined with an immutable map, what it means is that the JVM can directly replace your key with its value, like the map is not even there. Because the key's hashcode won't ever change, and the map won't ever change.
> Does this make String.intern() more valuable?
No, String.intern() does a different job: it's there to save you memory. If you know a string (e.g. an attribute name in an XML document) is used billions of times and parsed out of a stream, but you only want one copy of it and not a billion copies, interning gives you that single canonical copy. The downside is that historically it put the string into PermGen, which meant that if you started interning ordinary strings you'd run out of memory quickly.
But interned strings can also reuse their hashcode forever.
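A quick illustration of the canonicalisation (the string literal here is arbitrary):

```java
public class InternDemo {
    public static void main(String[] args) {
        // Two distinct heap copies of the same character data
        String a = new String("attr");
        String b = new String("attr");
        System.out.println(a == b);                   // false: different objects
        // intern() returns the single canonical copy from the string pool
        System.out.println(a.intern() == b.intern()); // true: same object
    }
}
```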
https://docs.oracle.com/en/java/javase/17/docs/api/java.base...
How does it tell the JVM this? It uses the internal annotation @jdk.internal.ValueBased
https://github.com/openjdk/jdk/blob/jdk-17-ga/src/java.base/...
In the same way that if you wrote this C code:
const int x[] = {20, 100, 42};
int addten(int idx) { return x[idx] + 10; }
the C compiler would "just know" that anywhere you wrote x[2], it could substitute 42. Because you signalled with the "const" that these values will never change. It could even replace addten(2) with 52, without ever making the call to addten() or doing the addition. The same goes for Java's value-based classes: https://docs.oracle.com/en/java/javase/17/docs/api/java.base...
But it's a bit more magical than C, because _some_ code runs to initialise the value, and then once it's initialised there can be further rounds of code compilation or optimisation, where the JVM can take advantage of knowing these objects are plain values, so they can participate in things like constant folding, constant propagation, dead-code elimination, and so on. And with @Stable it knows that once a field has been set to a non-zero value it will never change again, so it can effectively memoise it.
> What if there are bucket collisions? Do immutable maps expand until there aren't any? Moreover, what if there are hash key collisions?
I don't know the details, but you can't have an immutable map until it's constructed, and if there are problems with the keys or values, it can refuse to construct one by throwing a runtime exception instead.
Immutable maps make a lot of promises -- https://docs.oracle.com/en/java/javase/17/docs/api/java.base... -- but for the most part they're normal HashMaps that are just making semantic promises. They make enough semantic promises internally to the JVM that it can constant fold them, e.g. with x = Map.of(1, "hello", 2, "world") the JVM knows enough to replace x.get(1) with "hello" and x.get(2) with "world" without needing to invoke _any_ of the map internals more than once.
What wasn't working until now was strings as keys, because the JVM didn't see the String.hash field as stable. Now it does, and it can constant fold _all_ the steps, meaning you can also have y = Map.of("hello", 1, "world", 2) and the JVM can replace y.get("hello") with 1
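A minimal sketch of the pattern being described (whether the fold actually happens depends on the JIT and JDK version; the semantics are identical either way):

```java
import java.util.Map;

public class FoldDemo {
    // Immutable map with constant String keys; with String.hash now @Stable,
    // the JIT is free to constant-fold a lookup like Y.get("hello") down to 1.
    static final Map<String, Integer> Y = Map.of("hello", 1, "world", 2);

    public static void main(String[] args) {
        System.out.println(Y.get("hello")); // 1
        System.out.println(Y.get("world")); // 2
    }
}
```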
Probably depends on the use case, though I'm having trouble thinking of such a use case. If you were dynamically creating a ton of different sets that had different instances of the same strings, then, maybe? But then the overhead of calling `.intern` on all of them would presumably outweigh the overhead of calling `.hash` anyway. In fact, now that `.hash` is faster, that could ostensibly make `.intern` less valuable. I guess.
> Computing the hash code of the String “malloc” (which is always -1081483544)
Makes sense. Very cool.
> Probing the immutable Map (i.e., compute the internal array index which is always the same for the malloc hashcode)
How would this work? "Compute" seems like something that would be unaffected by the new attribute. Unless it's stably memoizing, but then I don't quite see what it would be memoizing here: it's already a hash map.
> Retrieving the associated MethodHandle (which always resides on said computed index)
Has this changed? Returning the value in a hash map once you've identified the index has always been zero overhead, no?
> Resolving the actual native call (which is always the native malloc() call)
Was this previously "lazyinit" also? If so, makes sense, though would be nice if this was explained in the article.
The index is computed from the hashcode and the size of the array. Now that the hash code can be treated as a constant, and the size of the array is already a constant, the index can be worked out at compile time. The JVM can basically inline all the methods involved in creating and probing the map, and eliminate it entirely.
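The arithmetic can be sketched roughly like this (a simplification, not the real ImmutableCollections probe code):

```java
public class ProbeSketch {
    // With a constant key, hashCode() is now foldable, and the table length of
    // an immutable map is a known constant, so the index is computable at JIT time.
    static int bucketIndex(String key, int tableLength) {
        // floorMod keeps the index non-negative even for negative hash codes
        return Math.floorMod(key.hashCode(), tableLength);
    }

    public static void main(String[] args) {
        // Same key, same table size: always the same index
        System.out.println(bucketIndex("malloc", 8));
    }
}
```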
I guess a @Stable attribute on the array underlying the map would allow for the elimination of one redirection: in a mutable map the underlying array can get resized so its pointer isn't stable. With an annotated immutable map it could be (though IDK whether that'd work with GC defrag etc). But that seems like relatively small potatoes? I don't see a way to "pretend the map isn't even there".
In short: a general purpose String substitute in Java would be an extremely poor idea.
Could you solve the "empty string hashes to zero" problem by just adding one when computing hash codes?
The Stable annotation is an optimization mechanism: a promise the developer makes to the compiler. It is on the developer to uphold.
This is detailed in implementation notes comment here: https://github.com/openjdk/jdk/blob/56468c42bef8524e53a929dc...
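Adding one would change every published hash code and break the documented contract; instead, the JDK records the rare "real hash is zero" case in a separate flag (a `hashIsZero` field on String). A simplified sketch of that idea, not the actual JDK code:

```java
// Toy illustration of lazy hash caching with a zero-flag, since 0 doubles
// as the sentinel for "not yet computed". Class name is made up.
final class CachedString {
    private final char[] value;
    private int hash;           // 0 may mean "not computed yet"
    private boolean hashIsZero; // true when the real hash really is 0

    CachedString(String s) { this.value = s.toCharArray(); }

    @Override
    public int hashCode() {
        int h = hash;
        if (h == 0 && !hashIsZero) {
            for (char c : value) h = 31 * h + c; // the documented polynomial
            if (h == 0) hashIsZero = true;       // remember the rare case
            else hash = h;                       // cache the common case
        }
        return h;
    }
}
```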
But, it's actually desirable to have some cryptographic properties in a faster one way function for making hash tables. Read about SipHash to see why.
Because Java didn't (and, as others have discussed, now can't) choose otherwise, the hash table structures it provides must resist sabotage via collisions; that isn't necessary if your attacker can't collide the hash you use.
I assume Java has to add some other layer to avoid this, rather than using a collision resistant hash scheme.
the application creator needed to have anticipated this threat model, and they can prepare for it (for example, salt the keys).
But imposing on every user a hash that is collision resistant, but costs performance, is unjustified because not every user needs it.
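The key-salting idea mentioned above could look something like this (illustrative only; a real mitigation would use a keyed hash such as SipHash rather than this toy mixing, and the class name is made up):

```java
import java.util.concurrent.ThreadLocalRandom;

// Wrap untrusted strings in a key whose hash mixes in a per-process random
// salt, so an attacker can't precompute colliding inputs offline.
final class SaltedKey {
    private static final int SALT = ThreadLocalRandom.current().nextInt() | 1;
    private final String s;

    SaltedKey(String s) { this.s = s; }

    @Override public boolean equals(Object o) {
        return o instanceof SaltedKey k && k.s.equals(s);
    }

    @Override public int hashCode() {
        int h = s.hashCode() * SALT; // not cryptographic, just the shape of it
        return h ^ (h >>> 16);
    }
}
```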
The reason why this kind of thing should be the default is that it's unreasonable to expect this level of understanding from your average coder, yet most software is written by the latter. That's why PL and framework design has been moving towards safety-by-default for quite some time now: because nothing else works, as proven by experience.
Second, this risk was reliably mitigated in Java as soon as it was discovered. Just because hash collisions may exist doesn't mean they are exploitable. The CVE for the JDK was not fixed because it had been taken care of elsewhere, in Tomcat etc., where meaningful validation could take place.
Context matters.
For example, Python back in 2012: https://nvd.nist.gov/vuln/detail/cve-2012-1150.
s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1]
Per Hyrum's law, there's no changing it now. https://docs.oracle.com/javase/8/docs/api/java/lang/String.h...
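The documented formula is easy to check by hand; the result below matches the "malloc" value quoted elsewhere in the thread:

```java
public class HashDemo {
    // Recompute the documented polynomial hash and compare with the built-in
    static int polyHash(String s) {
        int h = 0;
        for (int i = 0; i < s.length(); i++) {
            h = 31 * h + s.charAt(i); // s[0]*31^(n-1) + ... + s[n-1], mod 2^32
        }
        return h;
    }

    public static void main(String[] args) {
        System.out.println(polyHash("malloc"));  // -1081483544
        System.out.println("malloc".hashCode()); // -1081483544
    }
}
```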
For things that need to be secure, there are dedicated libraries, standard APIs, etc. that you probably should be using. For everything else, this is pretty much a non issue that just isn't worth ripping up this contract for. It's not much of an issue in practice and easily mitigated by just picking things that are intended for whatever it is you are trying to do.
https://docs.python.org/3/reference/datamodel.html#object.__...
Over the years, the implementation of Java’s String class has been improved again and again, offering performance improvements and memory usage reduction. And us Java developers get these improvements with no work required other than updating the JRE we use.
All the low-hanging fruit was taken years ago, of course. These days, I’m sure most Java apps would barely get any noticeable improvement from further String improvements, such as the one in the article we’re discussing.
I would, but unfortunately I got a NullPointerException.
I suggest you try Rust instead; its borrow checker will ensure you can't share pointers in an unsafe manner.
You can't share "pointers" in an unsafe manner in Java. Even data races are completely memory safe.
Depending on what sort of document you're looking for, you might like either the JEP: https://openjdk.org/jeps/254
or Shipilev's slides (pdf warning): https://shipilev.net/talks/jfokus-Feb2016-lord-of-the-string...
Shipilev's website (https://shipilev.net/#lord-of-the-strings), and links from the JEP above to other JEPS, are both good places to find further reading.
(I think I saw a feature article about the implementation of the string compression feature, but I'm not sure who wrote it or where it was, or if I'm thinking about something else. Actually I think it might've been https://shipilev.net/blog/2015/black-magic-method-dispatch/, despite the title.)
These kinds of niche optimizations are still significant. The OOP model allows them to be implemented with much less fanfare. This is in the context of billion-dollar platforms. With some basic performance testing and API replays, we're saving thousands of dollars a day. Nobody gets a pat on the back. Maybe some pizza on Friday.
Then there's C# which most anyone who's enthusiastic about software dev will find far nicer to work with, but it's probably harder for bargain basement offshore sweatshops to bang their head against.
There are worse fundamental problems in Java. For example, the lack of a proper numeric tower. Or the need to rely on annotations to indicate something as basic as nullability.
This statement surprised me. I can't even remember last time I ran any opensource Java.
There are a couple of Java projects, and even one or two kind of successful ones. But Java in open source is very rare, not the boring workhorse.
If I worked on a project that used Bazel, then sure, I'd use Bazel every day.
But which is "the boring workhorse" of open source, if I gave you the option of Java, Make, Linux, gcc, llvm, .deb, would Java really be "the" one?
Sure, maybe you could exclude most of those as not being "boring", like llvm. But "make" wins by any measure. And of course, it's almost by definition hard to think about the boring workhorse, because the nature of it is that you don't think about it.
Checking now, the only reason I can find java even being installed on my dev machines is for Arduino IDE and my Android development environment. Pretty niche stuff, in the open source space.
Also, not using as much memory is, with these types of GCs, a direct performance win. And this actually shows up splendidly in GC-heavy applications/benchmarks.
Java has gone through this evolution, implemented generics, lambdas, etc., and I believe it strikes a very good balance in not being overly complex (just look at the spec: for its age it's still a very small language, unlike C++ or C#).
Go tried to re-invent this evolution, without having learnt Java's lessons. They will add more and more features until their "simple" will stop applying (though I personally believe that their simple was always just simplistic), simply because you need some expressivity for better libraries, which will later on actually simplify user code.
Also relevant: https://www.tedinski.com/2018/01/30/the-one-ring-problem-abs...
That sounds like a recipe for disaster though, as it generally makes code much harder to read.
Then you can get to benefit from Java's unparalleled ecosystem of enterprise hardened libraries, monitoring etc.
In my experience OOP is actually pretty pleasant to work with if you avoid extending classes as much as possible.
> These kinds of niche optimizations are still significant. The OOP model allows them to be implemented with much less fanfare.
If you're referring to the optimization in the article posted then I would argue an OOP model is not needed for it, just having encapsulation is enough.
For a long time, Java was like: every class is a library. I don't think it's a failure of OOP, it's a failure of Java.
But I'm optimistic, I choose to see recent additions like records and pattern matching as a step in the right direction.
Whether or not this is an endorsement of OOP or a criticism is open to interpretation.
My thoughts exactly. Give me more classes with shallower inheritance hierarchies. Here is where I think go’s approach makes sense.
Turns out you can write Java without the stuff. No getters and setters, no interfaces or dependency injection, no separate application server (just embed one in your jar). No inheritance. Indeed, no OOP (just data classes and static methods).
Just simple, C-like code, with the amazing ecosystem of libraries, the incredibly fast marvel that is the JVM, and (though this is less of a deal now with LLM autocomplete) a simple built-in type system that makes the code practically write itself with autocomplete.
It's truly an awesome dev experience if you just have the power / culture to ignore the forces pressuring you to use the 'stuff'.
But that free-thinking definition of Java clashes with the mainstream beliefs in the Java ecosystem, and you'll get a lot of opposition at workplaces.
So I gave up on Java, not because of the language, but because of the people, and the forced culture around it.
Have you tried updating production usage of a JRE before??
With projects like OpenRewrite [1] and good LLMs, things are a lot easier these days.
Java 8 -> 9 is the largest source of annoyances, past that it's essentially painless.
You just change a line (the version of the JRE) and you get a faster JVM with better GC.
And with ZGC nowadays garbage collection is essentially a solved problem.
I worked on a piece of software serving almost 5 million requests per second on a single (albeit fairly large) box off a single JVM and I was still seeing GC pauses below the single millisecond (~800 usec p99 stop the world pauses) despite the very high allocation rate (~60gb/sec).
The JVM is a marvel of software engineering.
Even if the map is crucial for some reason, why not have the map take a simple value (like a uint64) and require the caller to convert their string into a slot before looking up the function pointer? That way the cost of exchanging the string becomes obvious to the reader of the code.
I struggle to find a use case where this would optimize good code. I can think of plenty of bad code usecases, but are we really optimizing for bad code?
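To make the slot suggestion concrete (the names and shapes here are hypothetical, just to show the idea):

```java
import java.util.Map;

// Resolve a string to an integer slot once, up front; all hot-path lookups
// then go by slot, so the string-hashing cost is visible and paid only once.
final class SlotTable {
    private static final Map<String, Integer> SLOTS = Map.of("malloc", 0, "free", 1);
    private final Runnable[] handlers;

    SlotTable(Runnable mallocHandler, Runnable freeHandler) {
        this.handlers = new Runnable[] { mallocHandler, freeHandler };
    }

    static int slotOf(String name) { // string cost paid here, once
        return SLOTS.get(name);
    }

    Runnable handlerFor(int slot) {  // hot path: plain array index
        return handlers[slot];
    }
}
```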
The most common such usage in modern web programming is storing and retrieving a map of HTTP headers, parsed query parameters, or deserialized POST bodies. Every single web app, which arguably is most apps, would take advantage of this.
I don't have the profiling data for this, so this is pure theoretical speculation. By the time you're shoving HTTP headers, which are dynamic data that has to be read at runtime, into heap-allocated data structures inside the request handling, it kinda feels like doing a little xor on your characters is a trivial computation.
I don't envision this making any meaningful difference to those HTTP handlers, because they were written without regard for performance in the first place.
Yes
commit a136f37015cc2513878f75afcf8ba49fa61a88e5
Author: Kaz Kylheku <kaz@kylheku.com>
Date: Sat Oct 8 20:54:05 2022 -0700
strings: revert caching of hash value.
Research indicates that this is something useful in
languages that abuse strings for implementing symbols.
We have interned symbols.
* lib.h (struct string): Remove hash member.
* lib.c (string_own, string, string_utf8, mkustring,
string_extend, replace_str, chr_str_set): Remove
all initializations and updates of the removed
hash member.
* hash.c (equal_hash): Do not cache string hash value.
> This improvement will benefit any immutable Map<String, V> with Strings as keys and where values (of arbitrary type V) are looked up via constant Strings.

Wait, what? But that's inherently constant-foldable without reasoning about string hash codes; we don't need them at all.
We examine the expression [h "a"]: look up the key "a" in hash table h, where h is a hash literal object that we write as #H(() ("a" "b")). It contains the key "a", mapping it to "b":
1> (compile-toplevel '[#H(() ("a" "b")) "a"])
#<sys:vm-desc: 8eaa130>
What's the code look like?
2> (disassemble *1)
data:
0: "b"
syms:
code:
0: 10000400 end d0
instruction count:
1
#<sys:vm-desc: 8eaa130>
One instruction: just return "b" from the static data register d0. The hash table is completely gone. The keys don't even have to be strings; that's a red herring.
Their goal wasn't to improve key lookups in hash tables; that is more or less just an example. It was to improve optimization of lazily-initialised variables in general, and the hash of String uses lazy initialisation.
At first I thought the article was describing something similar to Ruby’s symbols
only strings that are known at compile time could possibly be compile-time hashed?
But the article is talking about strings in a running program. The performance improvements can apply to strings that are constants but are created at run time.
I mean the developer has to create the StableValue field, but its access is optimized away.
If you mean the headline, Strings are a universal data type across programming, so claims of improving their performance gets more clicks than "this annotation that you have never heard about before makes some specific code faster", especially when it comes to getting the attention of non-Java programmers.
It will be very impactful work; I'm excited to see it. Probably even a 1% improvement in String::hashCode will have an impact on the global carbon footprint or so.
gavinray•2d ago
https://openjdk.org/jeps/502
https://cr.openjdk.org/~pminborg/stable-values2/api/java.bas...
I don't understand the functional difference between the suggested StableValue and Records, or Value Classes.
They define a StableValue as:
Records were defined as: And Value Objects/Classes as: Both Records and Value Objects are immutable, and hence can only have their contents set upon creation or static initialization.
sagacity•2d ago
Records are also immutable, but you can create them and delete them throughout your application like you would a regular class.
pkulak•17h ago
I assume it's static in the context of its containing object. So, it will be collected when its string is collected.
leksak•17h ago
What level are you suggesting lateinit happens at if not on the JVM?
w10-1•16h ago
Yes, but remind people it's not static in the sense of being associated with the class, nor constant for compile-time purposes.
Perhaps better to say: A stable value is lazy, set on first use, resulting in pre- and post-initialization states. The data being set once means you cannot observe a data change (i.e., it appears to be immutable), but you could observe a reduction in resource utilization when comparing instances with pre-set or un-set values: less memory, time, or other side effects of value initialization.
So even if data-immutable, a class with a stable value ends up with behavior combinations of two states for each stable value. Immutable records or classes without stable values have no such behavior changes.
But, writ large, we've always had this with the JVM's hotspot optimizations.
For String, it becomes significant whether hashcode is used when calculating equals (as a fast path to negative result). If not, one would have two equal instances that will behave differently (though producing the same data), at least for one hashcode-dependent operation.
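The two-state behaviour described above can be mimicked with a toy holder (this is not the real StableValue API, which additionally makes the set-once guarantee visible to the JIT and handles thread safety; the class name is made up):

```java
import java.util.function.Supplier;

// Toy set-at-most-once holder illustrating the pre-/post-initialization
// states; unlike the real thing, it is neither thread-safe nor JIT-visible.
final class OnceValue<T> {
    private T value; // null = pre-initialization state

    T orElseSet(Supplier<? extends T> supplier) {
        if (value == null) {
            value = supplier.get(); // runs at most once
        }
        return value;
    }

    boolean isSet() { return value != null; }
}
```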
whartung•17h ago
Which, to me, means, potentially, two things.
One, that the JVM can de-dup "anything", like, in theory, it can with Strings now. VOs that are equal are the same, rather than relying on object identity.
But, also, two, it can copy the contents of the VO to consolidate them into a single unit.
Typically, Java Objects and records are blobs of pointers. Each field pointing to something else.
With Value Objects that may not be the case. Instead of acting as a collection of pointers, a VO with VOs in it may be more like a C struct containing structs itself: a single, contiguous block of memory.
So, an Object is a collection of pointers. A Record is a collection of immutable pointers. A Value Object is (may be) a cohesive, contiguous block of memory to represent its contents.
blacklion•16h ago
The `@Stable` annotation (internal-only for now) and `StableValue<>` (for user code in the future) tell the JIT that the programmer guarantees (swears on his life!) that no dirty tricks are played with these values anywhere in the codebase, so the JIT can constant-fold them as soon as they are initialized.
blacklion•15h ago
It means that `StableValue<>` can be used in simple classes (where `final` fields are still not constant-folded) and, additionally, supports late initialization.
layer8•10h ago
The implementation of a value object will be able to use StableValue internally for lazy computation and/or caching of derived values.
elric•7h ago
Additionally, having to define a record FooHolder(Foo foo) simply to hold a Foo would be a lot more cumbersome than just saying StableValue<Foo> fooHolder = StableValue.of(); There's no need for an extra type.
- Value classes have no identity, which means they can't have synchronized methods and don't have an object monitor. While it would be possible to store a value object inside a StableValue, there are plenty of use cases for an identity object inside a StableValue, such as the Logger example inside the JEP: one could easily imagine a fictional logger having a synchronized method to preserve ordering of logs.
I wouldn't say these are all entirely orthogonal concerns, but they are different concepts with different purposes.