Re: BUG: Slowdown on 3000 socket-machines tracked down

From: Christian Schmid
Date: Fri May 27 2005 - 22:22:04 EST


Hi.

Here is the latest report on the vm-lock problem. It seems to be getting less severe with every new release. I am currently running 2.6.12-rc5. As reported in earlier mails, the massive vm-lock still appears once 3500 sockets are reached. The problem disappears entirely when I set lowmem_reserve_ratio to "1 1" AND min_free_kbytes to 1024000; it only starts to reappear at around 7000 sockets. On -rc3, for comparison, it already slowed down again at 4500 sockets.
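
For reference, a minimal sketch of how that workaround can be applied from userspace (write_sysctl is just a local helper name; the paths and values are the ones quoted above, and this needs root):

/*
 * Sketch (untested): the equivalent of
 *	echo "1 1"    > /proc/sys/vm/lowmem_reserve_ratio
 *	echo 1024000  > /proc/sys/vm/min_free_kbytes
 * Error handling kept minimal for brevity.
 */
#include <stdio.h>
#include <stdlib.h>

static void write_sysctl(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		exit(1);
	}
	fprintf(f, "%s\n", val);
	fclose(f);
}

int main(void)
{
	write_sysctl("/proc/sys/vm/lowmem_reserve_ratio", "1 1");
	write_sysctl("/proc/sys/vm/min_free_kbytes", "1024000");
	return 0;
}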

I am very sure it's a vm-lock because, for example, reading /proc/sys/vm/lowmem_reserve_ratio takes no time at all with fewer than 3500 sockets, whereas when testing with 7000 sockets I had to wait 20-30 seconds until the "file" was opened.
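
That measurement is easy to reproduce; a rough sketch that just times a single open/read of the proc file (assumes CLOCK_MONOTONIC is available; link with -lrt on older glibc):

/* Sketch: report how long one open+read of the tunable takes. */
#include <stdio.h>
#include <string.h>
#include <time.h>

int main(void)
{
	char buf[64] = "";
	struct timespec t0, t1;
	FILE *f;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	f = fopen("/proc/sys/vm/lowmem_reserve_ratio", "r");
	if (!f) {
		perror("fopen");
		return 1;
	}
	fgets(buf, sizeof(buf), f);
	fclose(f);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	buf[strcspn(buf, "\n")] = '\0';	/* strip trailing newline */
	printf("read \"%s\" in %.3f seconds\n", buf,
	       (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
	return 0;
}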

Any suggestions? This is a dual Xeon 3.6 GHz with 8 GB RAM.

Nick Piggin wrote:
On Mon, 2005-03-14 at 05:53 +0100, Christian Schmid wrote:


The other thing that worries me is your need for lower_zone_protection.
I think this may be due to unbalanced highmem vs lowmem reclaim. It
would be interesting to know if those patches I sent you improve this.
They certainly improve reclaim balancing for me... but again I guess
you'll be reluctant to do much experimentation :\

I have tested your patch, and unfortunately on 2.6.11 it didn't change anything :( I reported this before, or do you mean something else? I am of course willing to test patches, as I do not want to stick with 2.6.10 forever.


Well I hope that scheduler developments in progress will put future
kernels at least on par with 2.6.10 again (and hopefully better).

Yes, you did report that my patch didn't help on 2.6.11, but could those
results have been influenced by the suboptimal HT scheduling? If so, I
would still be interested in the results with HT turned off.

Nick

