Re: [PATCH] shrink core hashes on small systems

From: Matt Mackall
Date: Mon Apr 05 2004 - 18:01:47 EST


On Mon, Apr 05, 2004 at 02:38:24PM -0700, Andrew Morton wrote:
> Matt Mackall <mpm@xxxxxxxxxxx> wrote:
> >
> > Longer term, I think some serious thought needs to go into scaling
> > hash sizes across the board, but this serves my purposes on the
> > low-end without changing behaviour asymptotically.
>
> Can we make longer-term a bit shorter? init_per_zone_pages_min() only took
> a few minutes thinking..
>
> I suspect what we need is to replace num_physpages with nr_free_pages(),
> then account for PAGE_SIZE, then muck around with sqrt() for a while then
> apply upper and lower bounds to it.

Ok, I can take a look at that. I believe Arjan's been looking at the
upper end of the equation.
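
Something along these lines is roughly the shape of that calculation
(a throwaway userspace model, not kernel code: hash_order() and the
0..10 clamp are made up, and free_pages just stands in for
nr_free_pages()):

#include <math.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL

/* smallest page order whose table holds ~sqrt(pages) buckets */
static unsigned long hash_order(unsigned long free_pages,
				unsigned long min_order,
				unsigned long max_order)
{
	unsigned long buckets = (unsigned long)sqrt((double)free_pages);
	unsigned long order = 0;

	while ((PAGE_SIZE << order) / sizeof(void *) < buckets)
		order++;

	if (order < min_order)
		order = min_order;
	if (order > max_order)
		order = max_order;
	return order;
}

int main(void)
{
	unsigned long mb;

	for (mb = 16; mb <= 4096; mb *= 4)
		printf("%4lu MB -> order %lu\n", mb,
		       hash_order(mb * (1024 * 1024 / PAGE_SIZE), 0, 10));
	return 0;
}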

On the small end, I think we need to take into account the fact
that:

- a bunch of stuff gets effectively pinned post-init
- there are fewer pages in total, so page-sized granularity is
  relatively coarse
- the desired size/performance ratio is heavily skewed towards size

...so I think the head of this curve is squeezing most of these hashes
down to order 0 or 1 for the first 16MB or so of RAM.
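
(Rough numbers: with 4K pages and pointer-sized buckets, 16MB is 4096
pages, sqrt of that is 64 buckets, i.e. a few hundred bytes of table,
comfortably order 0. Even 256MB only wants 256 buckets, about 1K.)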

On the large end, we obviously have diminishing returns for larger
hashes and lots of dirty cachelines on hash misses. We almost
certainly want sublinear growth here, but even sqrt might be too
aggressive. Arjan?
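
(Again rough numbers: going from 1GB to 64GB is 64x the pages, so
sqrt scaling gives 8x the buckets, cbrt gives 4x, and something that
tracks log2 of the page count grows by only about a third.)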

For 2.7, I've been thinking about pulling out a generic lookup API,
which would allow replacing hashing with something like rbtrees, etc.,
depending on space and performance criteria.
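
Purely as an illustration of the shape (none of these names exist
anywhere, it's just a sketch):

struct lookup_ops {
	int   (*insert)(void *table, unsigned long key, void *item);
	void *(*find)(void *table, unsigned long key);
	void *(*remove)(void *table, unsigned long key);
};

struct lookup_table {
	const struct lookup_ops *ops;	/* hash, rbtree, radix, ... */
	void *priv;			/* backend-private state */
};

Callers then stop caring which data structure sits underneath.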

--
Matt Mackall : http://www.selenic.com : Linux development and consulting