HashMap implements Extend, so just h0.extend(h1) and you're done; the people who made your HashMap type are much better equipped to optimize this common operation.
In a new enough C++ in theory you might find the same functionality supported, but Quality of Implementation tends to be pretty frightful.
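For concreteness, a minimal sketch of what that looks like in Rust (keys and values invented for illustration):

    use std::collections::HashMap;

    fn main() {
        let mut h0: HashMap<&str, i32> = HashMap::from([("a", 1), ("b", 2)]);
        let h1: HashMap<&str, i32> = HashMap::from([("b", 20), ("c", 3)]);

        // Extend consumes h1's entries; on duplicate keys the incoming
        // value overwrites the existing one.
        h0.extend(h1);

        assert_eq!(h0.len(), 3);
        assert_eq!(h0.get("b"), Some(&20));
    }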
Are you sure? I'm not very used to reading the Rust stdlib, but this seems to be the implementation of the default HashMap extend [1]. It just calls self.base.extend. self.base seems to be hashbrown::hash_map, and this is the source for its extend [2]. In other words, it does exactly the same thing: it just iterates through the hash map and inserts the entries.
Maybe I'm misreading something going through the online docs, or Rust does the "random seed" thing that abseil does, but blindly assuming something doesn't happen "because Rust" is a bit silly.
[1]: https://doc.rust-lang.org/src/std/collections/hash/map.rs.ht...
[2]: https://docs.rs/hashbrown/latest/src/hashbrown/map.rs.html#4...
Note that the worst case is that we ate a single unneeded growth, while the best case is that we avoided N - 1 growths, where N may be quite large.
I tried extend() anyway. It didn't work well. Based on your description, extend() implements a variation of preallocation (i.e. Solution II). However, because it doesn't always reserve enough space to hold the merged hash table, clustering still happens depending on N. I have updated the Rust implementation (with the help of an LLM, as I am not a good Rust programmer). You can try it yourself with "ht-merge-rust 1 -e -n14m" or point out if I made mistakes.
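For reference, a rough sketch of what a fully explicit preallocation (the Solution II idea) would look like in Rust; the helper is my own, not from the post's code, and whether reserving h1.len() up front sizes the table well enough to avoid the clustering seen in the benchmark is exactly the question here:

    use std::collections::HashMap;
    use std::hash::{BuildHasher, Hash};

    // Reserve room for all of h1's entries before extending, so the merge
    // triggers at most one growth instead of several incremental ones.
    fn merge_reserve<K: Hash + Eq, V, S: BuildHasher>(
        h0: &mut HashMap<K, V, S>,
        h1: HashMap<K, V, S>,
    ) {
        h0.reserve(h1.len());
        h0.extend(h1);
    }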
> HashMap will by default be randomly seeded in Rust
Yes, and so it is with Abseil. The default Rust hash functions, SipHash in the standard library and foldhash in hashbrown, are ~3x slower than simple hash functions on a pure insertion load. When performance matters, we will use faster hash functions, at least for small keys, and will need a solution from my post.
> In a new enough C++ in theory you might find the same functionality supported, but Quality of Implementation tends to be pretty frightful.
This is not necessarily so. The Rust libraries are a port of Abseil, a C++ library. Boost is as fast as Rust. Languages/libraries should learn from each other, not fight each other.
Ah! Yes, I apologise. I missed the + in += and I'm not used to a hash table which default-initializes unseen entries (as the C++ hash tables all tend to, because the language's native container types behave that way), so I wasn't looking for it.
The SipHash will be noticeably slower, no question about it, so if you need to, and know what you're paying, you can replace the hash, including with integer_hasher, which gives you what you'd likely know from many C++ stdlib implementations: an identity function presented as a hash.
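A hedged sketch of what such a replacement looks like using nothing but std; the IdentityHasher type here is my own stand-in for illustration, not the integer_hasher crate's actual API:

    use std::collections::HashMap;
    use std::hash::{BuildHasherDefault, Hasher};

    // Pass integer keys straight through instead of running SipHash.
    // Only sensible when the keys are already reasonably well distributed.
    #[derive(Default)]
    struct IdentityHasher(u64);

    impl Hasher for IdentityHasher {
        fn finish(&self) -> u64 {
            self.0
        }
        fn write(&mut self, _bytes: &[u8]) {
            unimplemented!("identity hashing only makes sense for integer keys");
        }
        fn write_u64(&mut self, n: u64) {
            self.0 = n;
        }
    }

    fn main() {
        // The third type parameter swaps the default RandomState for our hasher.
        let mut m: HashMap<u64, u64, BuildHasherDefault<IdentityHasher>> =
            HashMap::default();
        m.insert(42, 1);
        assert_eq!(m.get(&42), Some(&1));
    }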
> This is not necessarily so. The Rust libraries are a port of Abseil, a C++ library.
More specifically, HashBrown is a port [edited: actually a re-implementation, I think, design->Rust rather than C++->Rust] of Abseil's Swiss Tables, and these days Rust's HashMap (and HashSet, of course) uses HashBrown, but that's not what I was getting at here.
I was thinking about analogues of Extend (because, as I wrote above, I didn't notice that you're accumulating, not overwriting). Modern C++ has this kind of feature in Ranges::to; however, it doesn't quite have Extend, and as I said QoI is poor: there are often trivial optimisations that Rust performs where the C++ means the same thing but isn't optimised.
I am interested in a quite different benchmark for hash tables: rather than merging, I'm interested in very small hash tables. Clearly for two items it will be faster to just try them both, and clearly for a million items trying them all is awful, so I measure a VecMap type (same API as a hash table, but actually just a growable array of unordered key->value pairs, searched linearly) against HashMap and other implementations of this API.
For N=25 VecMap is still competitive, but even at N=5, if we use a very fast hash (such as that identity function) instead of SipHash, we can beat VecMap for most operations. I suspect this sort of benchmark would fare very differently on older hardware (faster memory relative to ALU operations), and the direction of travel is likely to stay the same for the foreseeable future. In 1975, if you have six key->value pairs you don't want a hash table because it's too slow, but in 2025 you probably do.
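A minimal sketch of the VecMap idea, for anyone who wants to picture it (the name and API choices here are mine, not from any particular crate):

    // Same lookup/insert API shape as a map, but backed by an unordered
    // Vec of key->value pairs and searched linearly.
    struct VecMap<K, V> {
        entries: Vec<(K, V)>,
    }

    impl<K: PartialEq, V> VecMap<K, V> {
        fn new() -> Self {
            VecMap { entries: Vec::new() }
        }

        fn insert(&mut self, key: K, value: V) -> Option<V> {
            for (k, v) in self.entries.iter_mut() {
                if *k == key {
                    // Key already present: replace and hand back the old value.
                    return Some(std::mem::replace(v, value));
                }
            }
            self.entries.push((key, value));
            None
        }

        fn get(&self, key: &K) -> Option<&V> {
            self.entries.iter().find(|(k, _)| k == key).map(|(_, v)| v)
        }
    }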
Who's to say I'm not the one making the hashtable? There are plenty of real-world reasons the standard library hashtable may be either inaccessible or unsuitable.
Furthermore, the idea that "oh, honey, it's too hard, smart people did it for you" is insufferable and needs to die. When I'm the one making something, I have dramatically more information about the problem I'm trying to solve than the author of a hashtable library, and am therefore much better equipped to make design decision tradeoffs.
Please stop perpetuating the idea that 'just use a library' is universally the best option. Sometimes, it's not.
And indeed if you have your own custom operation you want, it may well make sense for you to implement it on both your own types and stdlib types.
There may be better constants than 2^64/phi; perhaps some large prime number with a roughly equal number of one and zero bits could also work.
This will prevent bucket collisions on hash table resizing that may lead to "accidentally quadratic" behavior [2], while not requiring rehashing with a different salt.
I didn't do a detailed analysis of whether it helps with hash table merging too, but I think it would.
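To make the suggestion concrete, a sketch of that multiplicative step with a per-table salt mixed in; the salt parameter and names are mine, and it assumes at least two buckets:

    // floor(2^64 / phi), the Fibonacci hashing constant from [0].
    const FIB: u64 = 0x9E37_79B9_7F4A_7C15;

    // Map a key's hash to a bucket among 2^log2_buckets buckets (log2_buckets >= 1):
    // xor in a per-table salt, multiply, then keep the top bits via a shift.
    fn fib_bucket(hash: u64, salt: u64, log2_buckets: u32) -> usize {
        (((hash ^ salt).wrapping_mul(FIB)) >> (64 - log2_buckets)) as usize
    }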
[0] https://probablydance.com/2018/06/16/fibonacci-hashing-the-o...
[1] https://news.ycombinator.com/item?id=43677122
[2] https://accidentallyquadratic.tumblr.com/post/153545455987/r...
Fibonacci hashing is really just adding another multiplicative hashing step, then dropping the bottom bits with a shift instead of dropping the top bits with an AND. Since it still works by dropping bits, items that were near each other before the resize will still be near each other after the resize, so it won't really change anything.
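A quick worked check of that claim (arbitrary hash values, same constant as above): doubling the table only exposes one more top bit, so anything sharing a bucket before the resize lands within one bucket of it afterwards.

    fn main() {
        const FIB: u64 = 0x9E37_79B9_7F4A_7C15;
        for h in [0x1234_5678_9abc_def0u64, 0xdead_beef_cafe_f00d] {
            let g = h.wrapping_mul(FIB);
            let small = g >> (64 - 8); // bucket among 256
            let big = g >> (64 - 9);   // bucket among 512 after a resize
            assert_eq!(big >> 1, small); // still the same neighbourhood
        }
    }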
> Abseil
> Rust standard
> hashbrown
These hash tables are not based on plain linear probing; they use something that's essentially quadratic probing done in chunks. Not sure about the others, but they might be doing something similar.
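A rough sketch of that probe-sequence shape, heavily simplified (the real implementations probe fixed-size groups with SIMD; group size and names here are illustrative): the stride grows by one per step, so the sequence is triangular/quadratic yet still visits every group when the group count is a power of two.

    // Yields the sequence of group indices to probe, starting from the
    // hashed-to group and stepping by 1, 2, 3, ... (wrapping within the table).
    fn probe_groups(start_group: usize, num_groups: usize) -> Vec<usize> {
        let mask = num_groups - 1; // num_groups assumed to be a power of two
        let (mut pos, mut stride) = (start_group & mask, 0);
        (0..num_groups)
            .map(|_| {
                let current = pos;
                stride += 1;
                pos = (pos + stride) & mask;
                current
            })
            .collect()
    }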
Let h0 be the larger table, and h1 the smaller. N = len(h0), M = len(h1). Pretend the elements of the tables are sequentially indexed. Element[0] is h0[0], Element[N] is h1[0], etc.
    h0' = Resize(h0, (N+M)*capacity_factor)
    for x in 0...(range):
        y = permute(x, 0, (N+M)*capacity_factor)
        if y >= N: move_to(h0'[y], element[x])
One allocation, and you move the minimum number of elements needed to eliminate primary clustering. Elements in h0 that aren't moved would presumably remain correctly indexed. You have to move the remaining elements of h1 as well, but that cluttered things. Any randomish permutation works, from basic ones up to cryptographic. If your permutation only works on certain powers of two, iterate it until the result is in range.
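A sketch of the "iterate it until the result is in range" trick (cycle walking), assuming the underlying permutation is a bijection on a power-of-two domain of 2^bits values covering the target range and that the input index starts below range; the mixing steps are just an arbitrary invertible xorshift-multiply, not anything specific:

    // One round of an invertible mix on a `bits`-wide word (bits in 1..=63).
    fn permute_once(x: u64, bits: u32) -> u64 {
        let mask = (1u64 << bits) - 1;
        let s = (bits / 2).max(1);
        let mut x = x & mask;
        x ^= x >> s;                                      // invertible xorshift
        x = x.wrapping_mul(0x9E37_79B9_7F4A_7C15) & mask; // odd multiplier: invertible mod 2^bits
        x ^= x >> s;
        x
    }

    // Apply the permutation until the output lands inside 0..range.
    // Since the underlying map is a bijection and x starts below range,
    // this terminates and is itself a bijection on 0..range.
    fn permute(mut x: u64, bits: u32, range: u64) -> u64 {
        loop {
            x = permute_once(x, bits);
            if x < range {
                return x;
            }
        }
    }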
willvarfar•22h ago
If you merge linear-probed tables by iterating in sorted hash order, then you are matching the storage order, which can congest particular parts of the table and trigger linear probing's worst-case behaviour.
By changing the iteration order, or salting the hash, you can avoid this.
Of course chained hash tables don't suffer from this particular problem.
My quick thought is that hash tables ought to keep an internal salt hidden away. This seems good for avoiding 'attacks' as well as for speeding up merging etc. The only downside I can think of is that creating the table needs to fetch a random salt, which might not be quick, although that can be alleviated by allowing it to be set externally at table creation, so people who don't care can set it to 0 or whatever. What am I missing?
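Rust's std map already behaves roughly this way, for what it's worth: each RandomState instance carries its own keys, so two tables typically salt their hashes differently by default. A tiny sketch:

    use std::collections::hash_map::RandomState;
    use std::collections::HashMap;

    fn main() {
        // Each RandomState carries its own random keys, so these two maps
        // will typically hash the same key differently and end up with
        // different bucket layouts.
        let mut a: HashMap<u64, u64, RandomState> = HashMap::with_hasher(RandomState::new());
        let mut b: HashMap<u64, u64, RandomState> = HashMap::with_hasher(RandomState::new());
        a.insert(1, 1);
        b.insert(1, 1);
    }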
kzrdude•19h ago
The keys are hidden/secret with respect to anything external to the application.