Re: [patch 2/7] lib/hashmod: Add modulo based hash mechanism

From: Linus Torvalds
Date: Fri Apr 29 2016 - 19:51:38 EST


On Fri, Apr 29, 2016 at 2:10 PM, Linus Torvalds
<torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
> On Thu, Apr 28, 2016 at 11:32 AM, Linus Torvalds
> <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
>>
>> hash_long/ptr really shouldn't care about the low bits being zero at
>> all, because it should mix in all the bits (using a prime multiplier
>> and taking the high bits).
>
> Looking at this assertion, it doesn't actually pan out very well at all.
>
> The 32-bit hash looks to be ok. Certainly not perfect, but not horrible.
>
> The 64-bit hash seems to be quite horribly bad with lots of values.

Ok, I have tried to research this a bit more. There's a lot of
confusion here, and that confusion is the reason hash_64() sucks
donkey balls.

The basic multiplicative hashing method is fairly sane. It's
well-documented, and attributed to Knuth, as the comment above it
says.

However, there are two problems in there that degrade the hash,
particularly so in the 64-bit case.

The first is the confusion about multiplying with a prime number.
That actually makes no sense at all, and is in fact entirely wrong:
there's no reason to pick a prime number for the multiplication, and
I don't see Knuth ever having suggested one.

Knuth's suggestion is to pick a floating point value A somewhere
between 0 and 1, multiply the integer by it, take just the fractional
part of the result, multiply that up by 'm', and take the floor to
get a number in the range 0..m-1.

At no point are primes involved.
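
To make this concrete, here is a sketch of Knuth's scheme in its
original floating-point form. It's only an illustration (the function
name and the use of fmod() are mine, not anything in the kernel):

#include <math.h>

/*
 * Sketch: Knuth's multiplicative hashing, floating-point form.
 * A is any constant in (0,1); the result is floor(m * frac(k*A)),
 * i.e. a value in 0..m-1.  Casting a 64-bit k to double already
 * loses low bits, which is one reason to do this in fixed point
 * instead.
 */
static unsigned int knuth_hash(unsigned long k, unsigned int m)
{
        const double A = 0.6180339887498949;    /* frac(phi) */
        double frac = fmod((double)k * A, 1.0); /* fractional part */

        return (unsigned int)(frac * m);        /* floor(m * frac) */
}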

And our multiplication really does approximate that - except it's
being done in fixed-point arithmetic (the constant n you multiply
with really stands for A = n/2**64, the overflow mod 2**64 is what
gets rid of the integer part and leaves the fractional part scaled up
by 2**64 - and then taking the "high bits" of the result is really
just multiplying by a power of two and taking the floor of the
result).
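
In code, the fixed-point form is just a multiply and a shift. This is
a sketch using the full-width golden ratio constant that comes up
below, not the code from the patch:

#include <stdint.h>

#define GOLDEN_RATIO_64 0x9e3779b97f4a7c16ULL

/*
 * Sketch: multiplying by the 64-bit constant n computes k * (n/2^64)
 * scaled up by 2^64; the wraparound mod 2^64 throws away the integer
 * part, and the shift keeps the top 'bits' bits - i.e. it multiplies
 * by m = 2^bits and takes the floor.
 */
static inline uint64_t hash_64(uint64_t val, unsigned int bits)
{
        return (val * GOLDEN_RATIO_64) >> (64 - bits);
}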

So the first mistake is thinking that the thing you multiply with
should be prime. Primality matters when you use a division to get a
modulus, which is presumably where the confusion came from.
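
For contrast, here's a sketch of the division method, just to show
where the prime folklore comes from (the helper is hypothetical):

/*
 * Sketch: the division method, h(k) = k mod p.  Here you really do
 * want p to be prime - or at least far from a power of two - because
 * a power-of-two modulus just returns the low bits of k unchanged.
 */
static unsigned int div_hash(unsigned long k, unsigned int p)
{
        return k % p;           /* p prime, e.g. 769 */
}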

Now, what value 'A' you use doesn't seem to really matter much. Knuth
suggested the fractional part of the golden ratio, but I suspect that
is purely because it's an irrational number that is not near 0 or 1.
You could use almost anything, although since "random bit pattern" is
part of the issue, irrational numbers are a good starting point. I
suspect that with our input patterns there's seldom a good reason to
emphasize lots of high-order bits, so you might as well pick
something closer to 0, but whatever - it's only going to matter for
the overflow part that gets thrown away anyway.

The second mistake - and the one that actually causes the real problem
- is to believe that the "bit sparseness" is a good thing. It's not.
It's _very_ much not. If you don't mix the bits well in the
multiplication, you get exactly the problem we hit: certain bit
patterns just will not mix up into the higher-order bits.

So if you look at what the actual golden ratio representation
*should* have been:

#define GOLDEN_RATIO_32 0x9e3779b9
#define GOLDEN_RATIO_64 0x9e3779b97f4a7c16

and then compare it to the values we actually _use_ (bit-sparse
primes close-ish to those values):

#define GOLDEN_RATIO_PRIME_32 0x9e370001UL
#define GOLDEN_RATIO_PRIME_64 0x9e37fffffffc0001UL

you start to see the problem. The right set of values has roughly 50%
of the bits set, in a random-looking pattern. The wrong set does not.
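
You can see the difference with a trivial test harness (my own
throwaway program, not part of the patch): hash page-aligned values
with both constants, keep the top 12 bits, and count how many
distinct buckets get used. The sparse prime should collapse into a
handful of buckets, while the dense constant should use most of them:

#include <stdio.h>
#include <stdint.h>

#define SPARSE  0x9e37fffffffc0001ULL   /* GOLDEN_RATIO_PRIME_64 */
#define DENSE   0x9e3779b97f4a7c16ULL   /* full-width frac(phi) */

static unsigned int hash12(uint64_t val, uint64_t mult)
{
        return (unsigned int)((val * mult) >> 52); /* top 12 bits */
}

int main(void)
{
        static int hits_sparse[4096], hits_dense[4096];
        int used_sparse = 0, used_dense = 0;
        uint64_t i;

        for (i = 0; i < 4096; i++) {
                uint64_t val = i << 12; /* page-aligned "pointers" */

                if (!hits_sparse[hash12(val, SPARSE)]++)
                        used_sparse++;
                if (!hits_dense[hash12(val, DENSE)]++)
                        used_dense++;
        }
        printf("sparse prime: %4d of 4096 buckets used\n", used_sparse);
        printf("dense const:  %4d of 4096 buckets used\n", used_dense);
        return 0;
}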

But as far as I can tell, you might as well use the fractional part
of 'e' or 'pi', or just make random shit up that has a reasonable bit
distribution.

So we could use the fractional part of the golden ratio (phi):
0x9e3779b9
0x9e3779b97f4a7c16

or pi:
0x243f6a88
0x243f6a8885a308d3

or e:
0xb7e15162
0xb7e151628aed2a6b

or whatever. The constants don't have to be prime. They don't even
have to be odd, because the low bits of the product get thrown away
anyway. The whole "hash one integer value down to X bits" problem is
very different in this respect from things like string hashes, where
you really do tend to want primes (because you keep all the bits).
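
For comparison, a typical iterated string hash looks something like
the sketch below (the multiplier 31 is the one Java's String.hashCode
uses). Every byte's contribution stays in the running value, so
nothing gets truncated until the very end:

static unsigned long string_hash(const char *s)
{
        unsigned long h = 0;

        while (*s)
                h = h * 31 + (unsigned char)*s++;
        return h;
}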

But none of those golden-ratio-style constants are sparse. I think
*some* amount of sparseness might be ok if it allows people with bad
CPUs to do the multiply with shifts and adds; it just must not be as
sparse as the current numbers, the 64-bit one in particular.
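
For reference, this is what the sparseness buys you. The 32-bit prime
is 2^31 + 2^29 - 2^25 + 2^22 - 2^19 - 2^16 + 1, so the multiply
decomposes into a handful of shifts and adds on CPUs where a full
multiply is expensive (a sketch, easy to verify by summing the
terms):

#include <stdint.h>

static inline uint32_t mul_golden_prime_32(uint32_t val)
{
        /* val * 0x9e370001, as shifts and adds */
        return (val << 31) + (val << 29) - (val << 25) +
               (val << 22) - (val << 19) - (val << 16) + val;
}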

There are presumably a few optimal values from a "spread the bits out
evenly" standpoint, and they won't have anything to do with random
irrational constants; they'll have everything to do with having nice
bit patterns.

I'm adding Rik to the cc, because the original broken constants came
from him long ago (they go back to 2002, and were originally only
used for the waitqueue hashing). Maybe he had some other input that
made him believe that primality actually mattered.

Linus