Re: [Lse-tech] [patch] sched-domain cleanups, sched-2.6.5-rc2-mm2-A3

From: Nick Piggin
Date: Tue Mar 30 2004 - 04:10:23 EST


(please use piggin@xxxxxxxxxxxx)

Erich Focht wrote:

On Thursday 25 March 2004 23:28, Martin J. Bligh wrote:

Can we hold off on changing the fork/exec time balancing until we've
come to a plan as to what should actually be done with it? Unless we're
giving it some hint from userspace, it's frigging hard to be sure if
it's going to exec or not - and the vast majority of things do.


After more than a year (or two?) of discussion there's still no better
idea than giving a userspace hint. The default should be to balance at
exec(), and perhaps add a syscall that says: balance all children a
particular process is going to fork/clone at creation time. Everybody
has reached the insight that we can't foresee what's optimal, so there
is only one solution: let the user control the behavior and give them a
tool to improve performance. A small inheritable variable in the task
structure is enough. Whether you give the hint at run-time, before
run-time or even at compile-time is not really the point...

I don't think it's worth waiting and hoping that somebody shows up with
a magic algorithm which balances every kind of job optimally.
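
(As an illustration of the semantics described above: the kernel-side
hint and syscall are the proposal, not anything that exists today, so
the sketch below only emulates the intent in userspace with existing
interfaces. A parent sets a flag once, every fork()ed child inherits it
because fork() copies the address space, and sched_setaffinity() stands
in for the fork-time spreading the kernel would do. The worker count
and round-robin placement are made up for the example.)

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

static int spread_children = 1;		/* the "inheritable hint" */

int main(void)
{
	long ncpus = sysconf(_SC_NPROCESSORS_ONLN);

	if (ncpus < 1)
		ncpus = 1;

	for (long i = 0; i < 4; i++) {
		if (fork() == 0) {
			if (spread_children) {
				cpu_set_t mask;
				CPU_ZERO(&mask);
				CPU_SET(i % ncpus, &mask);
				sched_setaffinity(0, sizeof(mask), &mask);
			}
			/* long-running worker would start here */
			printf("child %ld pinned to cpu %ld\n", i, i % ncpus);
			_exit(0);
		}
	}
	while (wait(NULL) > 0)
		;
	return 0;
}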



I'm with Martin here; we are just about to merge all this
sched-domains stuff, so we should at least wait until after
that. And of course, *nothing* gets changed without at least
one benchmark that shows it improves something. So far
nobody has stepped up to the plate with that.

There was a really good reason why the code is currently set up that
way, it's not some random accident ;-)


The current code isn't the result of a big optimization effort, it's
the result of stripping things down to something that was at all
acceptable during the 2.6 feature freeze, so that we would get at least
_some_ NUMA scheduler infrastructure. It was clear right from the
beginning that it would have to be extended to really become useful.


Clone is a much more interesting case, though at the time, I consciously
decided NOT to do that, as we really mostly want threads on the same
node.


That is not true in the case of HPC applications, and anyone using
OpenMP is doing exactly that kind of thing. I consider STREAM a good
benchmark because it shows exactly the problem of HPC applications:
they need a lot of memory bandwidth, they don't fit in cache and the
tasks are very long-lived. Spreading those tasks across the nodes gives
me more bandwidth per task, and the benefit accumulates because the
tasks run for hours or days. It's a simple and clear case where the
scheduler should be improved.
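
(To make the workload concrete: what STREAM measures is essentially the
"triad" loop in the sketch below; the array size and iteration count
here are made up. The arrays are far bigger than any cache, so every
iteration costs memory bandwidth rather than CPU cycles, and several
such tasks on one node end up sharing that node's bandwidth.)

#include <stdlib.h>

#define N (1L << 25)		/* ~33M doubles, ~270 MB per array */

int main(void)
{
	double *a = malloc(N * sizeof(double));
	double *b = malloc(N * sizeof(double));
	double *c = malloc(N * sizeof(double));
	double scalar = 3.0;

	if (!a || !b || !c)
		return 1;

	for (long i = 0; i < N; i++) {	/* initialise */
		b[i] = 1.0;
		c[i] = 2.0;
	}

	/* the long-running, cache-cold part: pure memory bandwidth */
	for (int iter = 0; iter < 100; iter++)
		for (long i = 0; i < N; i++)
			a[i] = b[i] + scalar * c[i];

	free(a); free(b); free(c);
	return 0;
}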

Benchmarks simulating "user work", like SPECsdet, kernel compile or
AIM7, are not relevant for HPC. In a compute center it doesn't really
matter whether some shell command returns 10% faster; it just shouldn't
disturb my big simulation code, for which I bought an expensive NUMA
box.



There are other things, like Java, servers, etc. that use threads.
The point is that we have never had this before, and nobody
(until now) has been asking for it. And there are as yet no
convincing benchmarks that even show best-case improvements. And
it could very easily have some bad cases. And finally, HPC
applications are the very ones that should be using CPU
affinities, because they are usually tuned quite tightly to the
specific architecture.
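
(For completeness, the explicit-affinity route looks roughly like the
sketch below: the application pins each of its threads itself via
pthread_attr_setaffinity_np(). The one-thread-per-CPU mapping is just a
placeholder; a real code would map threads to the nodes its memory
lives on. Build with -pthread.)

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define NTHREADS 4

static void *worker(void *arg)
{
	/* bandwidth-hungry compute loop would run here */
	printf("thread %ld started\n", (long)arg);
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];

	for (long i = 0; i < NTHREADS; i++) {
		pthread_attr_t attr;
		cpu_set_t mask;

		CPU_ZERO(&mask);
		CPU_SET(i, &mask);	/* placeholder: thread i -> CPU i */
		pthread_attr_init(&attr);
		pthread_attr_setaffinity_np(&attr, sizeof(mask), &mask);
		pthread_create(&tid[i], &attr, worker, (void *)i);
		pthread_attr_destroy(&attr);
	}
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}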

Let's just make sure we don't change defaults without any
reason...
