a patch that might help on small-memory machines (was Re: 2.1 MM [was: Re: 2.2 will never come..])

Kevin Buhr (buhr@stat.wisc.edu)
06 Jul 1998 17:02:12 -0500

MOLNAR Ingo <mingo@valerie.inf.elte.hu> writes:
> Stephen (and me too) is pretty much focused on getting this right, to fix
> the SLAB stuff and to tune all the little knobs in the VM. The situation
was similar in the late 1.3 days, so I wouldn't worry too much :)

Here's some food for thought. Note that Bill Hawes will hate this, so
nobody let him see it, okay? ;)

You might want to pay particular attention to the allocation and
freeing of *inodes* (and, to a lesser extent, dentries), as they have
a real effect on performance on low-memory machines. Since not long
after Linus first rewrote "fs/inode.c" in the 2.1.xx tree (around the
2.1.50s), I've been using a patch which frees inodes much more
aggressively than the stock kernel does when memory gets tight, and
I've had much better luck with it on small-memory machines.

Under a stock kernel, the number of allocated inodes seems to grow to
its maximum of 4096 and then just sit there, no matter how tight
memory gets. Inodes are being freed, of course, but no faster than
they are being reallocated; at least, that's what I've observed in
practice.

Now, imagine the effect of around 275 pages of inodes (4096 inodes at
about 15 inodes a page) on an 8 meg machine (2048 pages, not
subtracting the several hundred pages needed for kernel code and other
semi-permanent data structures). With maybe as many as 1 in 5 pages
dedicated to inodes, we should not be surprised to encounter some poor
performance and serious fragmentation now and then.

I'm not claiming my patch implements the best strategy (or even a
particularly good one) for freeing these inodes, but it points at a
rather guilty-looking suspect in this whole MM mess. It also stuffs
inodes and dentries into dedicated SLAB caches, which seems like the
right thing to do. It *also* imposes no artificial upper limit on
allocated inodes: the inode cache can now grow huge; it's just pruned
much more aggressively.
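For readers who haven't used the slab allocator: putting inodes into a
dedicated SLAB cache might look roughly like the sketch below, using
the 2.1-era slab API. The actual patch isn't reproduced here, so the
cache name, flags, and helper functions are illustrative guesses, not
taken from it:

```c
/* Hypothetical sketch of a dedicated inode SLAB cache (kernel code,
 * 2.1-era API); not the actual patch. */
#include <linux/slab.h>
#include <linux/fs.h>

static kmem_cache_t *inode_cachep;

void inode_cache_init(void)
{
    /* One cache sized exactly for struct inode, so freed inodes can be
     * handed back to the page allocator a slab at a time instead of
     * fragmenting general-purpose allocations. */
    inode_cachep = kmem_cache_create("inode_cache",
                                     sizeof(struct inode),
                                     0,                   /* offset */
                                     SLAB_HWCACHE_ALIGN,  /* cache-line align */
                                     NULL, NULL);         /* no ctor/dtor */
}

static struct inode *alloc_inode_obj(void)
{
    /* May sleep; fine in process context */
    return kmem_cache_alloc(inode_cachep, SLAB_KERNEL);
}

static void free_inode_obj(struct inode *inode)
{
    kmem_cache_free(inode_cachep, inode);
}
```

The point of the dedicated cache is that objects of one size live
together, so aggressive pruning actually gives whole pages back rather
than leaving inodes scattered across mixed-use pages.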

Anyway, enough talk. For me, the proof was in the pudding. At one
time, after the nightly "updatedb" run or a few "cvs" commands, stock
kernels rarely stayed up more than a couple of days on a small-memory
machine. Eventually, "fork" would start failing with "out of memory"
errors until I rebooted. My patched kernels, on the other hand, stayed
up for weeks.

Nowadays, the stock kernel fares a little better, but I still think I
get much better mileage out of my patch.

The patch, against 2.1.106, is a bit long for the list:


Note that it's clearly a patch-in-progress, so it's not particularly
succinct; there's plenty of useless cruft that has nothing to do with
its main thrust. It will probably apply with only modest manual
intervention against 2.1.107 or 2.1.108 kernels, though of course I
haven't tested it with anything past 2.1.106.

I'd appreciate feedback if people find this helps (or fails to help)
on itty-bitty 8 meg or 16 meg machines.


Kevin <buhr@stat.wisc.edu>
