Re: [RFC 0/12][PATCH] SCHED_DEADLINE: fork and terminate task logic

From: Fabio Checconi
Date: Wed Jan 13 2010 - 16:20:34 EST


> From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Date: Wed, Jan 13, 2010 05:15:11PM +0100
>
> On Wed, 2010-01-13 at 12:11 +0100, Raistlin wrote:
>
> > > > + } else if (rt_prio(p->prio))
> > > > + p->sched_class = &rt_sched_class;
> > > > + else
> > > > p->sched_class = &fair_sched_class;
> > > >
> > > > #ifdef CONFIG_SMP
> > > > @@ -2744,6 +2756,10 @@ static void finish_task_switch(struct rq *rq, struct task_struct *prev)
> > > > if (mm)
> > > > mmdrop(mm);
> > > > if (unlikely(prev_state == TASK_DEAD)) {
> > > > + /* a deadline task is dying: stop the bandwidth timer */
> > > > + if (deadline_task(prev))
> > > > + hrtimer_cancel(&prev->dl.dl_timer);
> > > > +
> > > > /*
> > > > * Remove function-return probe instances associated with this
> > > > * task and put them back on the free list.
> > >
> > > Shouldn't this be done in the ->dequeue_task() callback?
> > >
> > Not sure of this snippet... Actually, it is one of the most disturbing
> > pieces of code in this whole scheduler. :-(
> >
> > The reason it is here is that I think hrtimer_cancel() needs to be
> > called _without_ holding the rq->lock; is that correct?
>
> I think we can nest the hrtimer base lock inside the rq->lock these
> days, so it should be safe to call while holding it; anyway, lockdep
> will quickly tell you if you try ;-)
>

I may be wrong, but the race here should be between hrtimer_cancel()
and the handler itself (which takes rq->lock): if the timer handler is
already running on a different cpu, but has not yet entered its critical
section, we end up here, with rq->lock held, waiting for the handler to
terminate, while the handler spins on rq->lock; that wait will never end.
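
To make the window concrete, here is a sketch of the two paths involved
(the handler name dl_task_timer() and its body are my guesses for
illustration, not taken from the patch):

/* Path A: finish_task_switch(), called with rq->lock held */
if (unlikely(prev_state == TASK_DEAD)) {
        /*
         * hrtimer_cancel() spins until a concurrently running
         * handler has completed...
         */
        if (deadline_task(prev))
                hrtimer_cancel(&prev->dl.dl_timer);
}

/* Path B: the bandwidth timer handler, possibly on another cpu */
static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
{
        struct sched_dl_entity *dl_se =
                container_of(timer, struct sched_dl_entity, dl_timer);
        struct task_struct *p = dl_task_of(dl_se);
        struct rq *rq = task_rq(p);

        /*
         * ...but the handler's first step is to take rq->lock, which
         * path A already holds: A waits for the handler to finish,
         * the handler waits for the lock, and neither makes progress.
         */
        spin_lock(&rq->lock);
        /* replenish the budget and requeue the task */
        spin_unlock(&rq->lock);

        return HRTIMER_NORESTART;
}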

If we can enforce that hrtimer_cancel() and the timer handler always
execute on the same cpu, then we should be safe, because this code could
never run concurrently with the handler.
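
For what it is worth, a non-blocking alternative (again just a sketch,
not something this patch does) would be hrtimer_try_to_cancel(), which
returns -1 instead of waiting when the handler is running, at the price
of making the handler itself cope with a dying task:

/*
 * Safe under rq->lock because it never waits for a running handler:
 * hrtimer_try_to_cancel() returns 0 if the timer was not active,
 * 1 if it was active and got removed, -1 if the handler is running.
 */
if (deadline_task(prev) &&
    hrtimer_try_to_cancel(&prev->dl.dl_timer) == -1) {
        /*
         * The handler is executing on another cpu; it has to notice
         * by itself (e.g. by checking the task state under rq->lock)
         * that the task is dead and bail out.
         */
}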