Re: 2.1.76, nfs client, and memory fragmentation

kwrohrer@enteract.com
Sun, 28 Dec 1997 17:49:29 -0600 (CST)


And lo, Alan Cox saith unto me:
> > However, permanently non-freeable blocks should be rare, temporarily locked
> > blocks should be uncommon, and relocatable (= swappable) blocks should
> > abound. And with a defragmenter available, especially one which can
> No - alas not. Also for 32K chunks you only need 1 of the 8 pages to be
> touched to be tied down.
You're right; the locked blocks themselves are somewhat rare, but each
one ties down a whole higher-order block, multiplying the damage by up
to the maximum order...
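
For concreteness, a little userspace C sketch (assuming 4K pages, so
order 3 = 8 pages = 32K) of how one pinned page poisons its whole
higher-order buddy block:

    #include <stdio.h>

    /* First page index of the order-'order' buddy block holding 'pfn'. */
    static unsigned long buddy_block_start(unsigned long pfn, unsigned int order)
    {
        return pfn & ~((1UL << order) - 1);
    }

    int main(void)
    {
        unsigned long pinned_pfn = 1234;  /* any single locked page */
        unsigned int order = 3;           /* 8 pages = 32K */
        unsigned long start = buddy_block_start(pinned_pfn, order);

        printf("pinning page %lu ties down pages %lu..%lu\n",
               pinned_pfn, start, start + (1UL << order) - 1);
        return 0;
    }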

> Marking immovable objects and trying to land them
> in the same buddy blocks might help.
Agreed!
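
Something like this, maybe (purely hypothetical sketch, none of it is
existing kernel code): route each allocation by lifetime class, so the
long-lived locked objects cluster in the same buddy blocks instead of
peppering the whole zone:

    #include <stdio.h>
    #include <stdlib.h>

    enum lifetime { MOVABLE, PINNED };

    struct arena {
        char  *pool;            /* backing store for this class */
        size_t used, size;
    };

    static struct arena arenas[2];

    /* Hand out objects from the arena matching their lifetime class;
     * real code would grab a fresh buddy block when an arena fills. */
    static void *arena_alloc(enum lifetime class, size_t len)
    {
        struct arena *a = &arenas[class];

        if (a->used + len > a->size)
            return NULL;
        void *p = a->pool + a->used;
        a->used += len;
        return p;
    }

    int main(void)
    {
        int i;

        for (i = 0; i < 2; i++) {
            arenas[i].size = 32 * 1024;     /* one 32K buddy block each */
            arenas[i].pool = malloc(arenas[i].size);
        }
        printf("pinned object at %p, away from movable pages\n",
               arena_alloc(PINNED, 256));
        return 0;
    }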

> The problem with buddy is you have to clean that specific block out. With
> first fit you just have to stick together a handy sized block.
But you don't have to clean out that specific block. An uncleanable
buddy is equivalent to a first-fit block that doesn't fit, just a
little slower to discover, and the buddy cleaner won't be running
until there *aren't* any chunks big enough to fit anyway... Of course,
that case is more likely with a buddy system than with an
arbitrary-boundary system. On the other hand, the strength of binary
buddies is that once you've got your nK contiguous block, you don't
have to scrabble for more space on one side or the other to align it
on an nK boundary... (Remember the contortions ftape goes/went through
to get aligned DMA buffers?)
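
(The contortion in question is the usual over-allocate-and-round trick;
roughly this, in userspace C, which is exactly the scrabbling the buddy
allocator's free alignment avoids:)

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Get 'size' bytes aligned on a 'size' boundary (size must be a
     * power of two) from an allocator with no alignment guarantee:
     * grab almost twice as much and round the start up.  Up to size-1
     * bytes are wasted, and the raw pointer must be kept separately
     * if you ever want to free it (omitted here). */
    static void *alloc_aligned(size_t size)
    {
        uintptr_t raw = (uintptr_t)malloc(2 * size - 1);

        if (!raw)
            return NULL;
        return (void *)((raw + size - 1) & ~((uintptr_t)size - 1));
    }

    int main(void)
    {
        printf("32K buffer on a 32K boundary: %p\n",
               alloc_aligned(32 * 1024));
        return 0;
    }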

> > defragment to order (individual order, that is), the long-term kernel
> > allocations can be made larger (and correspondingly less frequent),
> > thereby minimizing the fragmentation.
> A fair number of long time kernel allocations now are SLAB based so the
> SLAB allocator is helping a bit here, but not enough.
The ipv4 fragment handling mechanism already goes through the SLAB
allocator. I believe the queue_glue failures occur when the SLAB allocator
can't expand the cache of objects of the needed size...
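
That failure mode, as I understand it, sketched in userspace C (all
names are made up; only the shape of the failure matters): a slab cache
expands by grabbing a fresh *contiguous* block, so once fragmentation
denies that, object allocation fails no matter how much total memory is
free:

    #include <stdio.h>
    #include <stddef.h>

    struct slab  { struct slab *next; };
    struct cache { size_t obj_size; struct slab *partial; };

    /* Stand-in for the page allocator: fails when no contiguous block
     * of the needed order exists, however much memory is free. */
    static struct slab *grab_contiguous_slab(void) { return NULL; }

    static void *cache_alloc(struct cache *c)
    {
        if (!c->partial) {
            /* Cache must expand: needs a fresh contiguous slab. */
            c->partial = grab_contiguous_slab();
            if (!c->partial)
                return NULL;    /* -> the queue_glue failure */
        }
        return c->partial;      /* ...carve an object out of it... */
    }

    int main(void)
    {
        struct cache frag_cache = { 64, NULL };

        if (!cache_alloc(&frag_cache))
            printf("cache expansion failed: no contiguous block\n");
        return 0;
    }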

> > Problem is, it *will* catch up one day. I'd just as soon shoot it now,
> > so we can make larger allocations, then actually do something to make
> > them happen should they block.
> It will catch up with us inevitably. With the SGI ports to do the O2
> port we will regularly need to grab 64K contiguous memory blocks. For the
> Matrox Meteor you can need to pull several hundred K of contiguous ram.
Well, we're in definite agreement here, as well as on the need to place
user vm pages differently from locked kernel pages...
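
(For scale, assuming 4K pages, those requests land at uncomfortably
high buddy orders; a quick sketch of the arithmetic:)

    #include <stdio.h>

    #define PAGE_SIZE 4096UL

    /* Smallest buddy order whose block covers 'bytes'. */
    static unsigned int order_for(unsigned long bytes)
    {
        unsigned long pages = (bytes + PAGE_SIZE - 1) / PAGE_SIZE;
        unsigned int order = 0;

        while ((1UL << order) < pages)
            order++;
        return order;
    }

    int main(void)
    {
        printf("64K  -> order %u\n", order_for(64 * 1024));   /* 4 */
        printf("512K -> order %u\n", order_for(512 * 1024));  /* 7 */
        return 0;
    }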

Keith