Re: [patch v8 4/9] sched: fix slept time double counting in enqueue entity

From: Lei Wen
Date: Fri Jun 21 2013 - 09:26:13 EST


On Fri, Jun 21, 2013 at 7:09 PM, Alex Shi <seakeel@xxxxxxxxx> wrote:
>
>>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>>> index c61a614..9640c66 100644
>>>> --- a/kernel/sched/fair.c
>>>> +++ b/kernel/sched/fair.c
>>>> @@ -5856,7 +5856,8 @@ static void switched_from_fair(struct rq *rq, struct task_struct *p)
>>>>  	 */
>>>>  	if (p->se.avg.decay_count) {
>>>>  		struct cfs_rq *cfs_rq = cfs_rq_of(&p->se);
>>>> -		__synchronize_entity_decay(&p->se);
>>>> +		p->se.avg.last_runnable_update +=
>>>> +			__synchronize_entity_decay(&p->se);
>>>
>>> It is not needed, since last_runnable_update will be reset when the
>>> task is switched back to the fair class.
>>
>> When it switches back to the fair class, the only thing the scheduler
>> does is check whether it can preempt the currently running task, so
>> where is last_runnable_update reset?
>
> Here, when the task is enqueued to the cfs_rq:
> avg.decay_count == 0, so the last line below resets last_runnable_update.
>
> Anyway, you may consider adding a comment in the function, since it is a
> bit hard to understand.
>
> ---
> /* Add the load generated by se into cfs_rq's child load-average */
> static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
> 					   struct sched_entity *se,
> 					   int wakeup)
> {
> 	/*
> 	 * We track migrations using entity decay_count <= 0, on a wake-up
> 	 * migration we use a negative decay count to track the remote decays
> 	 * accumulated while sleeping.
> 	 */
> 	if (unlikely(se->avg.decay_count <= 0)) {
> 		se->avg.last_runnable_update = rq_of(cfs_rq)->clock_task;


I see...
I didn't notice that before switched_from_fair, dequeue and enqueue
have already been called.
Since it works as expected, I have no further questions.

Thanks for your detailed explanation! :)

Thanks,
Lei

>
>>
>> Thanks,
>> Lei
>>
>>>
>>> --
>>> Thanks
>>> Alex
>
>
> --
> Thanks
> Alex