Re: [patch v7 7/8] sched: consider runnable load average in move_tasks

From: Alex Shi
Date: Fri May 31 2013 - 11:08:01 EST



>
> runnable_load_avg is u64, so you need to use div_u64() similar to how it
> is already done in task_h_load() further down in this patch. It doesn't
> build on ARM as is.
>
> Fix:
> - load /= tg->parent->cfs_rq[cpu]->runnable_load_avg + 1;
> + load = div_u64(load,
> +                tg->parent->cfs_rq[cpu]->runnable_load_avg + 1);
>
> Morten

Thanks a lot for the review!

div_u64() and do_div() force-cast the divisor to u32, so on a 64-bit machine
the divisor may be truncated and become incorrect.
Since cfs_rq->runnable_load_avg is always smaller than cfs_rq.load.weight, and
load.weight is 'unsigned long', we can cast runnable_load_avg to
'unsigned long' too. Then the division works on both 64-bit and 32-bit
machines with no data truncation.
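
To illustrate the idea (just a rough sketch of the tg_load_down() expression
quoted above, not the final diff, and assuming 'load' is an unsigned long
there):

	/* div_u64() takes a u32 divisor, so a large u64
	 * runnable_load_avg would be silently truncated on 64bit: */
	load = div_u64(load, tg->parent->cfs_rq[cpu]->runnable_load_avg + 1);

	/* casting the divisor to 'unsigned long' keeps its full value
	 * on 64bit and keeps the division native on 32bit, which should
	 * also avoid the ARM build failure: */
	load /= (unsigned long)(tg->parent->cfs_rq[cpu]->runnable_load_avg + 1);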

So the patch is changed as follows.

BTW, Paul & Peter:
in cfs_rq, runnable_load_avg, blocked_load_avg and tg_load_contrib are all
u64, but they play a similar role to the 'unsigned long' load.weight. So could
we change them to 'unsigned long' as well?

---