Re: [PATCH 1/1] network memory allocator.

From: Christoph Hellwig
Date: Wed Aug 16 2006 - 04:45:50 EST


On Wed, Aug 16, 2006 at 09:35:46AM +0400, Evgeniy Polyakov wrote:
> On Tue, Aug 15, 2006 at 10:21:22PM +0200, Arnd Bergmann (arnd@xxxxxxxx) wrote:
> > Am Monday 14 August 2006 13:04 schrieb Evgeniy Polyakov:
> > >  * full per CPU allocation and freeing (objects are never freed on
> > >    different CPU)
> >
> > Many of your data structures are per cpu, but your underlying allocations
> > are all using regular kzalloc/__get_free_page/__get_free_pages functions.
> > Shouldn't these be converted to calls to kmalloc_node and alloc_pages_node
> > in order to get better locality on NUMA systems?
> >
> > OTOH, we have recently experimented with doing the dev_alloc_skb calls
> > with affinity to the NUMA node that holds the actual network adapter, and
> > got significant improvements on the Cell blade server. That of course
> > may be a conflicting goal since it would mean having per-cpu per-node
> > page pools if any CPU is supposed to be able to allocate pages for use
> > as DMA buffers on any node.
>
> Doesn't alloc_pages() automatically switch to alloc_pages_node() or
> alloc_pages_current()?

That's not what's wanted. If you have a slow interconnect you always want
to allocate memory on the node the network device is attached to.
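As a minimal sketch of what that looks like in practice: the driver can ask for pages pinned to the device's home node instead of the allocating CPU's node. `dev_to_node()` and `alloc_pages_node()` are real kernel helpers; the function and the assumption that `netdev->dev.parent` points at the PCI device are illustrative and hypothetical, not code from the patch under discussion.

```c
#include <linux/gfp.h>
#include <linux/netdevice.h>

/*
 * Hypothetical rx-buffer helper: allocate a page on the NUMA node
 * the network adapter is attached to, not on whichever node the
 * current CPU happens to belong to.
 */
static struct page *my_dev_alloc_rx_page(struct net_device *netdev)
{
	/* Node of the device's bus parent (e.g. the PCI slot). */
	int node = dev_to_node(netdev->dev.parent);

	/*
	 * alloc_pages_node() pins the allocation to @node; plain
	 * alloc_pages() would instead follow the current task's
	 * memory policy, typically preferring the local CPU's node.
	 */
	return alloc_pages_node(node, GFP_ATOMIC, 0);
}
```

With a slow interconnect this keeps the DMA buffers local to the NIC, at the cost of remote accesses when a CPU on another node later touches the data.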
