Re: [PATCH] sched: Simplify mutex_spin_on_owner()

From: Paul E. McKenney
Date: Sat Jun 11 2011 - 11:57:39 EST


On Fri, Jun 10, 2011 at 06:04:24PM -0700, Paul E. McKenney wrote:
> On Fri, Jun 10, 2011 at 03:08:55PM +0200, Thomas Gleixner wrote:
> > It does not make sense to rcu_read_lock/unlock() in every loop
> > iteration while spinning on the mutex.
> >
> > Move the rcu protection outside the loop so it is taken only once.
> > Also simplify the return path to always check for lock->owner == NULL,
> > which covers both the owner-changed and the need_resched()-caused
> > loop exits.
>
> Interesting. If the spinning task were preempted under the new form,
> then RCU priority boosting would boost the priority of the task
> spinning on the mutex. My guess is that this would happen rarely
> enough not to be a problem, but other thoughts?

And if it does turn out to be a problem, one way to handle it would
be for me to provide an rcu_boosted_me() or some such that checks
the bit in the task structure, and then add something like the
following to your patch, which would momentarily exit the RCU
read-side critical section in order to deboost.
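
Just to make the idea concrete, here is a rough sketch of what such a
helper might look like. The flag it tests is internal to the RCU
implementation (and the exact name is only an assumption here), so the
real version would have to be provided by RCU rather than sched.c:

/*
 * Illustrative sketch only: return true if RCU priority boosting has
 * boosted the current task within its current RCU read-side critical
 * section.  RCU_READ_UNLOCK_BOOSTED is assumed to be the relevant bit
 * in ->rcu_read_unlock_special; it is private to the RCU core today,
 * so treat this as a sketch of the check, not something sched.c could
 * compile as-is.
 */
static inline bool rcu_boosted_me(void)
{
#ifdef CONFIG_RCU_BOOST
	return !!(current->rcu_read_unlock_special &
		  RCU_READ_UNLOCK_BOOSTED);
#else
	return false;
#endif
}

With that in place, the rcu_read_unlock()/rcu_read_lock() pair in the
patch below would deboost and then immediately re-enter the read-side
critical section before the next dereference of *owner.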

Thoughts?

Thanx, Paul

> > Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> > ---
> > kernel/sched.c | 25 +++++++++----------------
> > 1 file changed, 9 insertions(+), 16 deletions(-)
> >
> > Index: linux-2.6/kernel/sched.c
> > ===================================================================
> > --- linux-2.6.orig/kernel/sched.c
> > +++ linux-2.6/kernel/sched.c
> > @@ -4306,11 +4306,8 @@ EXPORT_SYMBOL(schedule);
> >
> >  static inline bool owner_running(struct mutex *lock, struct task_struct *owner)
> >  {
> > -	bool ret = false;
> > -
> > -	rcu_read_lock();
> >  	if (lock->owner != owner)
> > -		goto fail;
> > +		return false;
> >
> >  	/*
> >  	 * Ensure we emit the owner->on_cpu, dereference _after_ checking
> > @@ -4320,11 +4317,7 @@ static inline bool owner_running(struct
> >  	 */
> >  	barrier();
> >
> > -	ret = owner->on_cpu;
> > -fail:
> > -	rcu_read_unlock();
> > -
> > -	return ret;
> > +	return owner->on_cpu;
> >  }
> >
> >  /*
> > @@ -4336,21 +4329,21 @@ int mutex_spin_on_owner(struct mutex *lo
> >  	if (!sched_feat(OWNER_SPIN))
> >  		return 0;
> >
> > +	rcu_read_lock();
> >  	while (owner_running(lock, owner)) {
> >  		if (need_resched())
> > -			return 0;
> > +			break;

		if (rcu_boosted_me()) {
			rcu_read_unlock();
			rcu_read_lock();
		}

> >
> >  		arch_mutex_cpu_relax();
> >  	}
> > +	rcu_read_unlock();
> >
> >  	/*
> > -	 * If the owner changed to another task there is likely
> > -	 * heavy contention, stop spinning.
> > +	 * We break out the loop above on need_resched() and when the
> > +	 * owner changed, which is a sign for heavy contention. Return
> > +	 * success only when lock->owner is NULL.
> >  	 */
> > -	if (lock->owner)
> > -		return 0;
> > -
> > -	return 1;
> > +	return lock->owner == NULL;
> >  }
> >  #endif
> >
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/