Re: vmscan.c heuristic adjustment for smaller systems

From: Andrew Morton
Date: Sat Apr 17 2004 - 18:53:08 EST


Marc Singer <elf@xxxxxxxxx> wrote:
>
> On Sat, Apr 17, 2004 at 04:21:25PM -0700, Andrew Morton wrote:
> > Marc Singer <elf@xxxxxxxxx> wrote:
> > >
> > > I'd say that there is no statistically significant difference between
> > > these sets of times. However, after I've run the test program, I run
> > > the command "ls -l /proc"
> > >
> > >                          swappiness
> > >                    60 (default)      0
> > >                    ------------   --------
> > > elapsed time(s)         18            1
> > >                         30            1
> > >                         33            1
> >
> > How on earth can it take half a minute to list /proc?
>
> I've watched the vmscan code at work. The memory pressure is so high
> that it reclaims mapped pages zealously. The program's code pages are
> being evicted frequently.

Which tends to imply that the VM is not reclaiming any of that NFS-backed
pagecache.

> I've been wondering if swappiness isn't a red herring.  Is it
> reasonable for the distress value (in refill_inactive_zone()) to be
> 50?

I'd assume that setting swappiness to zero simply means that you still have
all of your libc in pagecache when running ls.
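
For reference, the two knobs feed a single heuristic.  This is roughly
what refill_inactive_zone() does in 2.6 (a simplified sketch from memory,
not the literal source; names are approximate):

        /*
         * distress: how much trouble reclaim is in.  0 = no trouble,
         * 100 = lots.  prev_priority drops as reclaim has to scan harder.
         */
        distress = 100 >> zone->prev_priority;

        /* Portion of memory mapped into process address spaces. */
        mapped_ratio = (nr_mapped * 100) / total_memory;

        /*
         * The mapped ratio is halved: lots of mapped memory does not by
         * itself mean pagecache reclaim is failing.  Distress and the
         * /proc/sys/vm/swappiness sysctl push the tendency up.
         */
        swap_tendency = mapped_ratio / 2 + distress + vm_swappiness;

        /* Only when the tendency reaches 100 do we reclaim mapped pages. */
        if (swap_tendency >= 100)
                reclaim_mapped = 1;

With swappiness at zero, mapped_ratio/2 + distress has to reach 100 on its
own before mapped (text/libc) pages are touched, which fits the libc
observation above.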

What happens if you do the big file copy, then run `sync', then do the ls?

Have you experimented with the NFS mount options? v2? UDP?