Re: Common hash table implementation

From: Daniel Phillips (
Date: Tue Jul 24 2001 - 07:57:20 EST

On Monday 23 July 2001 01:34, Eyal Lebedinsky wrote:
> Daniel Phillips wrote:
> > Yes, I tested almost all of them to see how well they worked in my
> > directory index application. There are really only two criteria:
> >
> > 1) How random is the hash
> > 2) How efficient is it
> >
> > My testing was hardly what you would call rigorous. Basically,
> > what I do is hash a lot of very unrandom strings and see how
> > uniform the
> Actually, to measure the randomness you need to measure the
> randomness of the output in the face of non-random input.

This is exactly what I do.

> Most well constructed
> hash functions perform well when the strings are random; however, real
> world data (e.g. directory content) is not random at all.

I think you meant to say there, "even many poorly constructed hash
functions perform well when..."

> Efficiency should measure both space and time resources. If it should
> work in a multithreaded situation then another level of complexity is
> added.

Sure, I could have added "how big is it". For me, that's just
another kind of efficiency. Writing the code so it's reentrant is
just good practice. There is no excuse whatsoever for not doing
that for something simple like a hash function, even if you
yourself never expect to run two copies concurrently.
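For a hash function, reentrancy just means keeping all working
state in parameters and locals. A minimal illustration (again
using FNV-1a as a stand-in, not the hash from the thread):

```c
/*
 * Non-reentrant: the running state is a static, so two threads
 * or two interleaved callers corrupt each other's results.
 */
static unsigned h_state;
unsigned bad_hash_step(unsigned char c)
{
	h_state = (h_state ^ c) * 16777619u;
	return h_state;
}

/*
 * Reentrant: all state lives in locals and parameters, so any
 * number of concurrent callers are safe with no extra effort.
 */
unsigned fnv1a(const char *s)
{
	unsigned h = 2166136261u;	/* FNV-1a offset basis */
	while (*s)
		h = (h ^ (unsigned char)*s++) * 16777619u;
	return h;
}
```

The reentrant form costs nothing over the static one, which is
the point: there is no reason to write it the other way.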


This archive was generated by hypermail 2b29 : Tue Jul 31 2001 - 21:00:15 EST