Re: [patch] restore sched_exec load balance heuristics

From: Peter Zijlstra
Date: Mon Nov 10 2008 - 07:54:25 EST


On Mon, 2008-11-10 at 10:29 +0100, Ingo Molnar wrote:
> * Peter Zijlstra <a.p.zijlstra@xxxxxxxxx> wrote:
>
> > void sched_exec(void)
> > {
> > 	int new_cpu, this_cpu = get_cpu();
> > -	new_cpu = sched_balance_self(this_cpu, SD_BALANCE_EXEC);
> > +	struct task_group *tg;
> > +	long weight, eload;
> > +
> > +	tg = task_group(current);
> > +	weight = current->se.load.weight;
> > +	eload = -effective_load(tg, this_cpu, -weight, -weight);
> > +
> > +	new_cpu = sched_balance_self(this_cpu, SD_BALANCE_EXEC, eload);
>
> okay, i think this will work.
>
> it feels somewhat backwards though on a conceptual level.
>
> There's nothing particularly special about exec-balancing: the load
> picture is in equilibrium - it is in essence a rebalancing pass done
> not in the scheduler tick but in a special place in the middle of
> exec() where the old-task / new-task cross section is at a minimum
> level.
>
> _fork_ balancing is what is special: there we'll get a new context so
> we have to take the new load into account. It's a bit like wakeup
> balancing. (just done before the new task is truly woken up)
>
> OTOH, triggering the regular busy-balance at exec() time isn't totally
> straightforward either: the 'old' task is the current task so it
> cannot be balanced away. We have to trigger all the active-migration
> logic - which again makes exec() balancing special.
>
> So maybe this patch is the best solution after all.

Even worse: we want to balance current, but the generic load balancer might
pick two CPUs to balance, neither of which has current on it. And even if it
did pick the queue current is on as the busiest, there is no guarantee we
would actually end up moving current.

So this specialized form of moving current to a possibly more idle CPU is,
afaics, the best solution for balancing one particular task.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/