Mike Fedyk wrote:
Andrew Morton wrote:
shrink_slab-for-all-zones.patch
vm: scan slab in response to highmem scanning
zone-balancing-fix.patch
vmscan: zone balancing fix
Running 2.6.3 + [1] + nfsd-lofft.patch on a 1GB RAM file server, I have noticed two related issues.
First, under 2.6.3 it averages about 90MB[2] of anon memory, versus 30MB with the -mm4 VM (the rest is in swap cache). This could balance out under the normal, non-idle week-day load, though...
Second, with the -mm4 VM there is a lot more swapping[3] going on during the daily updatedb and backup runs performed on this machine.
I'd have to call this second issue a regression, but I want to run it for a couple more days to see if it gets any better (unless you already agree, of course).
There are a few things backed out now in 2.6.4-rc1-mm1, and quite a
few other changes. Could we trouble you to test 2.6.4-rc1-mm1?
Tell me, do you have highmem enabled on this system? If so, swapping
might be explained by the batching patch. With it, a small highmem
zone could possibly place quite a lot more pressure on a large
ZONE_NORMAL.
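The effect described above can be sketched with a toy model (this is not the actual kernel code; the zone sizes and the proportionality rule are illustrative assumptions): if slab pressure is computed from the fraction of the *scanned* zone that was swept, a full sweep of a tiny highmem zone demands a full sweep of the slab, even though the slab objects all live in the much larger ZONE_NORMAL.

```python
# Toy model: reclaim pressure driven by scanning a small highmem zone
# lands on slab objects that live only in ZONE_NORMAL. Sizes are
# made-up illustrative numbers, not values from the batching patch.

HIGHMEM_PAGES = 32 * 256    # a small 32MB highmem zone (4KB pages)
NORMAL_PAGES = 896 * 256    # the ~896MB ZONE_NORMAL on a 1GB box

def slab_pressure(pages_scanned, zone_pages, slab_objects):
    """Shrink slab in proportion to the fraction of the zone scanned."""
    fraction = pages_scanned / zone_pages
    return int(slab_objects * min(fraction, 1.0))

slab = 100_000  # slab objects, all allocated from ZONE_NORMAL

# Sweeping the whole small highmem zone asks for a full slab sweep:
print(slab_pressure(HIGHMEM_PAGES, HIGHMEM_PAGES, slab))  # -> 100000
# The same number of pages scanned in the big zone is mild pressure:
print(slab_pressure(HIGHMEM_PAGES, NORMAL_PAGES, slab))   # -> 3571
```

So a small highmem zone that gets scanned end-to-end repeatedly can, through this proportionality, put far more reclaim pressure on ZONE_NORMAL than its own size would suggest.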
2.6.4-rc1-mm1 should do much better here.