Re: [PATCH -v8][RFC] mutex: implement adaptive spinning

From: Avi Kivity
Date: Mon Jan 12 2009 - 12:24:37 EST


Peter Zijlstra wrote:
> On Mon, 2009-01-12 at 18:13 +0200, Avi Kivity wrote:
>> One thing that worries me here is that the spinners will spin on a memory location in struct mutex, which means that the cacheline holding the mutex (which is likely to be under write activity from the owner) will be continuously shared by the spinners, slowing the owner down when it needs to unshare it. One way out of this is to spin on a location in struct mutex_waiter, and have the mutex owner touch it when it schedules out.

> Yeah, that is what pure MCS locks do -- however I don't think it's a
> feasible strategy for this spin/sleep hybrid.

Bummer.

>> So:
>> - each task_struct has an array of currently owned mutexes, appended to by mutex_lock()

> That's not going to fly, I think. Lockdep does this but it's very
> expensive and has some issues. We're currently at 48 max owners, and
> still some code paths manage to exceed that.

Might make it per-cpu instead, and set a bit in the mutex when scheduling out so we know not to remove it from the list on unlock.

>> - mutex waiters spin on mutex_waiter.wait, which they initialize to zero
>> - when switching out of a task, walk the mutex list, and for each mutex, bump each waiter's wait variable, and clear the owner array

> Which is O(n).

It may be better than O(n) CPUs banging on the mutex cacheline for the whole lock hold time. Of course we should avoid walking the part of the list where non-spinning waiters sleep (or maybe have a separate list for spinners).
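
Something like this is what I have in mind (just a sketch: every name that isn't already in the kernel is invented, and memory ordering and list locking are glossed over):

/* struct mutex_waiter grows a per-waiter spin word */
struct mutex_waiter {
	struct list_head list;
	struct task_struct *task;
	int wait;		/* spinners poll this, not the mutex itself */
};

/* spinner side: each waiter spins on its own cacheline */
static void mutex_spin_on_wait(struct mutex_waiter *waiter)
{
	while (!waiter->wait)
		cpu_relax();
	/* wait was bumped: either the lock is ours or the owner went to sleep */
}

/*
 * Owner side, called when the owner schedules out: walk the list of
 * mutexes it holds (a per-task array here; it could just as well be the
 * per-cpu array mentioned above) and kick every spinner so it stops
 * spinning and sleeps like a normal waiter.
 */
static void mutex_owner_scheduled_out(struct task_struct *owner)
{
	int i;

	for (i = 0; i < owner->owned_mutexes_nr; i++) {
		struct mutex *lock = owner->owned_mutexes[i];
		struct mutex_waiter *waiter;

		list_for_each_entry(waiter, &lock->wait_list, list)
			waiter->wait = 1;
	}
	owner->owned_mutexes_nr = 0;	/* clear the owner array */
}

/* unlock side: bump only the waiter at the head of the queue */
static void mutex_kick_first_waiter(struct mutex *lock)
{
	if (!list_empty(&lock->wait_list)) {
		struct mutex_waiter *first =
			list_first_entry(&lock->wait_list,
					 struct mutex_waiter, list);
		first->wait = 1;
	}
}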

>> - when unlocking a mutex, bump the nearest waiter's wait variable, and remove from the owner array

>> Something similar might be done to spinlocks to reduce cacheline contention from spinners and the owner.

> Spinlocks can use 'pure' MCS locks.

I'll read up on those, thanks.
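
If I understand the idea correctly, the shape is roughly the following (sketch only, invented names, memory barriers omitted): each waiter spins on its own queue node, and the previous holder writes to that node alone when handing the lock over.

struct mcs_node {
	struct mcs_node *next;
	int locked;
};

struct mcs_spinlock {
	struct mcs_node *tail;		/* last waiter in line, or NULL */
};

static void mcs_spin_lock(struct mcs_spinlock *lock, struct mcs_node *node)
{
	struct mcs_node *prev;

	node->next = NULL;
	node->locked = 0;

	prev = xchg(&lock->tail, node);	/* join the queue atomically */
	if (!prev)
		return;			/* queue was empty, the lock is ours */

	prev->next = node;		/* let our predecessor find us */
	while (!node->locked)		/* spin on our own cacheline only */
		cpu_relax();
}

static void mcs_spin_unlock(struct mcs_spinlock *lock, struct mcs_node *node)
{
	struct mcs_node *next = node->next;

	if (!next) {
		/* nobody visible behind us: try to leave the queue empty */
		if (cmpxchg(&lock->tail, node, NULL) == node)
			return;
		/* a new waiter is between the xchg and prev->next; wait */
		while (!(next = node->next))
			cpu_relax();
	}
	next->locked = 1;	/* handoff: touch only the successor's node */
}

The attraction is that the unlock path touches a spinning waiter's cacheline only for that single handoff store, which is exactly the property missing when everyone spins on the mutex itself.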

--
error compiling committee.c: too many arguments to function
