Re: [RFC][PATCH 05/18] sched: Provide p->on_rq

From: Yong Zhang
Date: Wed Jan 05 2011 - 03:14:14 EST


On Tue, Jan 4, 2011 at 10:59 PM, Peter Zijlstra <a.p.zijlstra@xxxxxxxxx> wrote:
> Provide a generic p->on_rq because the p->se.on_rq semantics are
> unfavourable for lockless wakeups but needed for sched_fair.
>
> In particular, p->on_rq is only cleared when we actually dequeue the
> task in schedule() and not on any random dequeue as done by things
> like __migrate_task() and __sched_setscheduler().
>
> This also allows us to remove p->se usage from !sched_fair code.
>
> Signed-off-by: Peter Zijlstra <a.p.zijlstra@xxxxxxxxx>
> ---
>  include/linux/sched.h   |    1 +
>  kernel/sched.c          |   36 ++++++++++++++++++------------------
>  kernel/sched_debug.c    |    2 +-
>  kernel/sched_rt.c       |   10 +++++-----
>  kernel/sched_stoptask.c |    2 +-
>  5 files changed, 26 insertions(+), 25 deletions(-)
>
> Index: linux-2.6/kernel/sched_rt.c
> ===================================================================
> --- linux-2.6.orig/kernel/sched_rt.c
> +++ linux-2.6/kernel/sched_rt.c
> @@ -1132,7 +1132,7 @@ static void put_prev_task_rt(struct rq *
> 	 * The previous task needs to be made eligible for pushing
> 	 * if it is still active
> 	 */
> -	if (p->se.on_rq && p->rt.nr_cpus_allowed > 1)
> +	if (p->on_rq && p->rt.nr_cpus_allowed > 1)

How about on_rt_rq(&p->rt) here?

Quoted from my previous reply:
[It seems we need on_rt_rq(&p->rt) here; otherwise we enqueue the
task on the pushable list when called from rt_mutex_setprio()/
__sched_setscheduler() etc., adding a little overhead, even though
we call dequeue_pushable_task() in set_curr_task_rt()
unconditionally.]

Thanks,
Yong

> 		enqueue_pushable_task(rq, p);
> }
>
> @@ -1283,7 +1283,7 @@ static struct rq *find_lock_lowest_rq(st
> 				     !cpumask_test_cpu(lowest_rq->cpu,
> 						       &task->cpus_allowed) ||
> 				     task_running(rq, task) ||
> -				     !task->se.on_rq)) {
> +				     !task->on_rq)) {
>
> 				raw_spin_unlock(&lowest_rq->lock);
> 				lowest_rq = NULL;
> @@ -1317,7 +1317,7 @@ static struct task_struct *pick_next_pus
> 	BUG_ON(task_current(rq, p));
> 	BUG_ON(p->rt.nr_cpus_allowed <= 1);
>
> -	BUG_ON(!p->se.on_rq);
> +	BUG_ON(!p->on_rq);
> 	BUG_ON(!rt_task(p));
>
> 	return p;
> @@ -1463,7 +1463,7 @@ static int pull_rt_task(struct rq *this_
> 		 */
> 		if (p && (p->prio < this_rq->rt.highest_prio.curr)) {
> 			WARN_ON(p == src_rq->curr);
> -			WARN_ON(!p->se.on_rq);
> +			WARN_ON(!p->on_rq);
>
> 			/*
> 			 * There's a chance that p is higher in priority
> @@ -1534,7 +1534,7 @@ static void set_cpus_allowed_rt(struct t
> 	 * Update the migration status of the RQ if we have an RT task
> 	 * which is running AND changing its weight value.
> 	 */
> -	if (p->se.on_rq && (weight != p->rt.nr_cpus_allowed)) {
> +	if (p->on_rq && (weight != p->rt.nr_cpus_allowed)) {
> 		struct rq *rq = task_rq(p);
>
> 		if (!task_current(rq, p)) {
> Index: linux-2.6/kernel/sched_stoptask.c
> ===================================================================
> --- linux-2.6.orig/kernel/sched_stoptask.c
> +++ linux-2.6/kernel/sched_stoptask.c
> @@ -26,7 +26,7 @@ static struct task_struct *pick_next_tas
> {
> 	struct task_struct *stop = rq->stop;
>
> -	if (stop && stop->se.on_rq)
> +	if (stop && stop->on_rq)
> 		return stop;
>
> 	return NULL;
>
>
>



--
Only stand for myself