Re: [patch] current scheduler bits #2, 2.5.21

From: Robert Love
Date: Tue Jun 11 2002 - 13:25:13 EST

On Tue, 2002-06-11 at 10:35, Ingo Molnar wrote:

> anyway, my current tree has a new type of optimization which again changes
> the way task_rq_lock() and task_rq_unlock() looks like: in this specific
> case we can rely on __cli() disabling preemption, we do not need to
> elevate the preemption count in this case. (Some special care has to be
> taken to not cause a preemption with the raw rq spinlock held, which the
> patch does.)

Ow, I thought about doing this but decided against it. If we merge
this, we have to be _very_ careful. Any code that sets need_resched
explicitly will cause a preemption on the next preempt_enable()
(implicitly, off any unlock, etc.). Is this worth it just to save an
inc/dec?


        static inline void task_rq_unlock(runqueue_t *rq, unsigned long *flags)
        {
                _raw_spin_unlock_irqrestore(&rq->lock, *flags);
        }

Don't we want to explicitly check for a preemption here? Using your new
preempt_check_resched() would make sense...

> and i found and fixed a preemption latency source in wait_task_inactive().
> That function can create *very* bad latencies if the target task happens
> to not go away from its CPU for many milliseconds (for whatever reason).
> So we should and can check the resched bit.

Hm, this may be the cause of the latency issues some have raised.
Dieter Nutzel, for example, has been saying that sometime early in
O(1)'s life it hurt latency under the preemptible kernel... this may
be it.

I'll check it out. Good job.

        Robert Love


This archive was generated by hypermail 2b29 : Sat Jun 15 2002 - 22:00:23 EST