Re: A different metric for scheduler optimisation

From: Larry McVoy (lm@bitmover.com)
Date: Sun Jan 30 2000 - 14:16:48 EST


: Jamie, we're not speaking about RQ now ;)
: This thread was born from my observation that tasks which switch very
: frequently "probably" do not cause much cache thrashing, since they run
: for such a short time that they have a lower "probability" of touching
: many RAM locations.
:
: Steve said that the contrary may be true, and I tried to explain my
: point of view in a previous message:

SGI has a scheduler designed along the same lines as your reasoning: it
bound a process to a cache only if the process had run on that cache long
enough to have created a cache footprint.

That reasoning turned out to be wrong. For I/O-bound processes - a web
server, ftp server, NFS server, etc. - the behaviour is a short run,
followed by a sleep on I/O, repeated forever.

Given the reasoning that a short run doesn't create a big cache footprint,
these processes were all candidates to be moved to a different processor,
so they bounced around like crazy.
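Purely as an illustration (the names, fields, and threshold below are
invented for this sketch, not the actual SGI code), the kind of heuristic
I'm describing looks roughly like this; a short-running I/O-bound task
never crosses the threshold, so it is always considered free to migrate:

#define FOOTPRINT_THRESHOLD_NS  2000000ULL   /* assumed: ~2 ms of CPU time */

struct task {
        unsigned long long last_run_ns;      /* length of previous run      */
        int                last_cpu;         /* where that run happened     */
};

/* Prefer the old CPU only if the task plausibly left state in its cache. */
static int preferred_cpu(const struct task *t, int any_cpu)
{
        if (t->last_run_ns >= FOOTPRINT_THRESHOLD_NS)
                return t->last_cpu;          /* keep it near its cache      */
        return any_cpu;                      /* short run: free to migrate  */
}

int main(void)
{
        struct task t = { .last_run_ns = 500000, .last_cpu = 3 };
        /* A 0.5 ms run is below the threshold, so migration is allowed. */
        return preferred_cpu(&t, 1);
}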

Pinning them down to a particular processor typically gave about another
20% in throughput. I've done this personally, and it is the default
behaviour of the BDS server - each thread ties itself to a processor and
just stays there.
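Roughly, the idea looks like this. This is just a sketch using the
sched_setaffinity() interface that later Linux kernels grew - it is not
the actual BDS or IRIX code, which used different calls:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

static int pin_to_cpu(int cpu)
{
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        /* pid 0 means "the calling thread" */
        return sched_setaffinity(0, sizeof(set), &set);
}

int main(void)
{
        if (pin_to_cpu(0) != 0) {
                perror("sched_setaffinity");
                return 1;
        }
        /* ... serve requests; the scheduler will not migrate us ... */
        return 0;
}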

Moral of the story: even a small, almost invisible amount of cache state
matters. Think about it: if all the short-timeslice processes rotate one
processor to the right every time they get scheduled, they are all
essentially running uncached. Even if they only need 100 cache lines,
that's a big hit.
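Some back-of-the-envelope numbers, all assumed rather than measured, just
to show the order of magnitude of that hit:

#include <stdio.h>

int main(void)
{
        double lines   = 100.0;      /* cold cache lines per reschedule     */
        double miss_ns = 150.0;      /* assumed miss latency to memory, ns  */
        double run_ns  = 100000.0;   /* assumed run before sleeping: 100 us */

        double refill_ns = lines * miss_ns;
        printf("refill ~%.0f ns, ~%.0f%% of a %.0f us run\n",
               refill_ns, 100.0 * refill_ns / run_ns, run_ns / 1000.0);
        return 0;
}

With those assumptions the refill alone eats roughly 15% of each run,
which is in the same ballpark as the ~20% throughput gain from pinning.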



