Re: [PATCH 1/2] sched: remove extraneous load manipulations

From: Gregory Haskins
Date: Fri Jul 18 2008 - 08:49:11 EST


>>> On Fri, Jul 18, 2008 at 8:39 AM, in message <1216384754.28405.31.camel@twins>, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> On Thu, 2008-07-03 at 15:37 -0600, Gregory Haskins wrote:
>> commit 62fb185130e4d420f71a30ff59d8b16b74ef5d2b reverted some patches
>> in the scheduler, but it looks like it may have left a few redundant
>> calls to inc_load/dec_load in set_user_nice(), since
>> dequeue_task/enqueue_task already take care of the load. This could
>> result in the load values being off, since the load may change while
>> the task is dequeued.
>
> I just checked out v2.6.25.10 but cannot see dequeue_task() do it.

Perhaps I was trying to hit a moving target, or did not have enough coffee that day ;)

I will look again to see if I made a mistake.

Thanks Peter,

-Greg

>
> deactivate_task() otoh does do it.
>
> static void dequeue_task(struct rq *rq, struct task_struct *p, int sleep)
> {
> 	p->sched_class->dequeue_task(rq, p, sleep);
> 	p->se.on_rq = 0;
> }
>
> vs
>
> static void deactivate_task(struct rq *rq, struct task_struct *p, int sleep)
> {
> 	if (task_contributes_to_load(p))
> 		rq->nr_uninterruptible++;
>
> 	dequeue_task(rq, p, sleep);
> 	dec_nr_running(p, rq);
> }
>
> where
>
> static void dec_nr_running(struct task_struct *p, struct rq *rq)
> {
> 	rq->nr_running--;
> 	dec_load(rq, p);
> }
>
> And since set_user_nice() actually changes the load, we'd better not
> forget to do this dec/inc load stuff.
>
> So I'm thinking this patch would actually break stuff.
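
While I re-check the tree, here is the throwaway toy model I am using to
reason about the accounting (not the real scheduler code -- the struct, the
helpers and the weights are made up for illustration, and dequeue()/enqueue()
deliberately skip the load bookkeeping, mirroring the dequeue_task() you
quoted above). It shows the skew you are describing if the dec_load/inc_load
pair around the weight change goes away:

/*
 * Toy model, NOT kernel code: rq_load stands in for rq->load.weight and
 * must track the summed weight of the queued tasks.  dequeue()/enqueue()
 * do not touch rq_load, so a weight change has to be bracketed by
 * dec_load()/inc_load() by hand, which is what set_user_nice() does today.
 */
#include <stdio.h>

struct toy_task {
	unsigned long weight;	/* stand-in for p->se.load.weight */
	int on_rq;
};

static unsigned long rq_load;	/* stand-in for rq->load.weight */

static void inc_load(struct toy_task *p) { rq_load += p->weight; }
static void dec_load(struct toy_task *p) { rq_load -= p->weight; }

static void dequeue(struct toy_task *p) { p->on_rq = 0; }	/* no load update */
static void enqueue(struct toy_task *p) { p->on_rq = 1; }	/* no load update */

/* Current set_user_nice() sequence: old weight out, new weight in. */
static void renice_with_load_fixup(struct toy_task *p, unsigned long w)
{
	dequeue(p);
	dec_load(p);		/* drop the old weight from rq_load */
	p->weight = w;
	enqueue(p);
	inc_load(p);		/* add the new weight to rq_load */
}

/* What my patch would leave behind: rq_load never hears about the change. */
static void renice_without_load_fixup(struct toy_task *p, unsigned long w)
{
	dequeue(p);
	p->weight = w;
	enqueue(p);
}

int main(void)
{
	struct toy_task p = { .weight = 335, .on_rq = 1 };

	inc_load(&p);				/* activate: rq_load = 335 */

	renice_with_load_fixup(&p, 1024);	/* rq_load follows: 1024 */
	printf("with fixup:    rq_load=%lu  weight=%lu\n", rq_load, p.weight);

	renice_without_load_fixup(&p, 335);	/* rq_load stays at 1024 */
	printf("without fixup: rq_load=%lu  weight=%lu\n", rq_load, p.weight);

	dec_load(&p);				/* later deactivate subtracts 335 */
	printf("after deactivate: rq_load=%lu  (should be 0)\n", rq_load);

	return 0;
}

With the fixup removed, rq_load stays at the old 1024 after the renice and is
left at 689 once the task is "deactivated" with its new weight -- a permanent
skew, which matches your point.
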
>
>> Signed-off-by: Gregory Haskins <ghaskins@xxxxxxxxxx>
>> CC: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
>> CC: Ingo Molnar <mingo@xxxxxxx>
>> ---
>>
>> kernel/sched.c | 6 ++----
>> 1 files changed, 2 insertions(+), 4 deletions(-)
>>
>> diff --git a/kernel/sched.c b/kernel/sched.c
>> index 31f91d9..b046754 100644
>> --- a/kernel/sched.c
>> +++ b/kernel/sched.c
>> @@ -4679,10 +4679,8 @@ void set_user_nice(struct task_struct *p, long nice)
>>  		goto out_unlock;
>>  	}
>>  	on_rq = p->se.on_rq;
>> -	if (on_rq) {
>> +	if (on_rq)
>>  		dequeue_task(rq, p, 0);
>> -		dec_load(rq, p);
>> -	}
>>
>>  	p->static_prio = NICE_TO_PRIO(nice);
>>  	set_load_weight(p);
>> @@ -4692,7 +4690,7 @@ void set_user_nice(struct task_struct *p, long nice)
>>
>>  	if (on_rq) {
>>  		enqueue_task(rq, p, 0);
>> -		inc_load(rq, p);
>> +
>>  		/*
>>  		 * If the task increased its priority or is running and
>>  		 * lowered its priority, then reschedule its CPU:
>>
>

