Re: [TIP][RFC 6/7] futex: add requeue_pi calls

From: Darren Hart
Date: Mon Mar 09 2009 - 15:55:42 EST


Thomas Gleixner wrote:
> On Thu, 5 Mar 2009, Darren Hart wrote:
> > int rt_mutex_start_proxy_lock(struct rt_mutex *lock,
> > 			      struct rt_mutex_waiter *waiter,
> > 			      struct task_struct *task, int detect_deadlock)
> > {
> > 	int ret;
> >
> > 	spin_lock(&lock->wait_lock);
> > 	ret = task_blocks_on_rt_mutex(lock, waiter, task, detect_deadlock);
> >
> > I added the following line to fix the bug. The question is: should I
> > use the atomic optimization here (under lock->wait_lock), or should I
> > just do "lock->owner |= RT_MUTEX_HAS_WAITERS"?
> >
> > =====> mark_rt_mutex_waiters(lock);
>
> This is still not enough, as I explained in the review of the original
> patch. What you need to do is:
>
> 	if (try_to_take_rt_mutex(lock, task)) {
> 		spin_unlock(&lock->wait_lock);
> 		/* The caller needs to wake up task, as it is now the owner */
> 		return WAKEIT;
> 	}
>
> 	ret = task_blocks_on_rt_mutex(lock, waiter, task, detect_deadlock);


Right, so I'm testing this out:

	mark_rt_mutex_waiters(lock);

	if (!rt_mutex_owner(lock) || try_to_steal_lock(lock, task)) {
		/* We got the lock for task. */
		debug_rt_mutex_lock(lock);
		rt_mutex_set_owner(lock, task, 0);
		rt_mutex_deadlock_account_lock(lock, task);
		return 1;
	}
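
On the caller side I'd then handle the new return value roughly like
this (just a sketch; requeue_pi_wake_futex() is a placeholder name for
whatever helper ends up waking the requeued task and fixing up its
futex_q state):

	ret = rt_mutex_start_proxy_lock(&pi_state->pi_mutex,
					&this->rt_waiter, this->task, 1);
	if (ret == 1) {
		/*
		 * We acquired the lock on behalf of this->task, so it
		 * must be woken rather than left blocked on the rt_mutex.
		 */
		requeue_pi_wake_futex(this, &key2);
	} else if (ret) {
		/* -EDEADLK or another error: abort the requeue. */
		goto out_unlock;
	}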

Steven, is this the proper use of the debug* routines? I copied them
from try_to_take_rt_mutex(), but they are empty routines without
comments, so I wasn't sure exactly how they were intended to be used.
Does the usage of debug_rt_mutex_lock() assume task == current? (The
other routines have the task_struct passed in.)
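
For reference, my reading of kernel/rtmutex-debug.c (approximate, from
memory) is that debug_rt_mutex_lock() is an empty stub, the unlock-side
check is what assumes current is the owner, and there is a separate
proxy variant that takes the owner explicitly:

	/* kernel/rtmutex-debug.c, roughly: */
	void debug_rt_mutex_lock(struct rt_mutex *lock)
	{
	}

	void debug_rt_mutex_unlock(struct rt_mutex *lock)
	{
		/* The unlock path is where current is assumed to be the owner. */
		WARN_ON(rt_mutex_owner(lock) != current);
	}

	void debug_rt_mutex_proxy_lock(struct rt_mutex *lock,
				       struct task_struct *powner)
	{
	}

If that reading is right, the proxy variant may be the better fit here,
but I'd like confirmation.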

Thanks,

Darren


> > 	if (ret && !waiter->task) {
> > 		/*
> > 		 * Reset the return value. We might have
> > 		 * returned with -EDEADLK and the owner
> > 		 * released the lock while we were walking the
> > 		 * pi chain. Let the waiter sort it out.
> > 		 */
> > 		ret = 0;
> > 	}
> > 	spin_unlock(&lock->wait_lock);
> >
> > 	debug_rt_mutex_print_deadlock(waiter);
> >
> > 	return ret;
> > }
>
> Thanks,
>
> 	tglx


--
Darren Hart
IBM Linux Technology Center
Real-Time Linux Team