Re: [slub p4 6/7] slub: per cpu cache for partial pages

From: Christoph Lameter
Date: Sat Aug 20 2011 - 12:00:42 EST


On Sat, 20 Aug 2011, Pekka Enberg wrote:

> > @@ -2919,7 +3071,34 @@ static int kmem_cache_open(struct kmem_c
> > * The larger the object size is, the more pages we want on the
> > partial
> > * list to avoid pounding the page allocator excessively.
> > */
> > - set_min_partial(s, ilog2(s->size));
> > + set_min_partial(s, ilog2(s->size) / 2);
>
> Why do we want to make minimum size smaller?

Because each processor now additionally caches partial pages, so fewer
need to be kept on the per-node partial list.

> > + */
> > + if (s->size >= PAGE_SIZE)
> > + s->cpu_partial = 2;
> > + else if (s->size >= 1024)
> > + s->cpu_partial = 6;
> > + else if (s->size >= 256)
> > + s->cpu_partial = 13;
> > + else
> > + s->cpu_partial = 30;
>
> How did you come up with these limits?

These are the per cpu queue limits of SLAB.

> > + struct { /* slub per cpu partial pages */
> > + struct page *next; /* Next partial slab */
> > +#ifdef CONFIG_64BIT
> > + int pages; /* Nr of partial slabs left */
> > + int pobjects; /* Approximate # of objects */
> > +#else
> > + short int pages;
> > + short int pobjects;
> > +#endif
> > + };
> > + };
>
> Why are the sizes different on 32-bit and 64-bit? Does this change 'struct
> page' size?

On 64 bit an int is half the size of a long, so the two int counters
together occupy one word. On 32 bit int and long are the same size, so
two ints would take two words; the short ints keep the pair within one
word. Without the ifdef, struct page could get bigger on 32 bit.
