Re: [PATCH v6 6/6] locking/pvqspinlock: Queue node adaptive spinning

From: Waiman Long
Date: Mon Sep 14 2015 - 15:37:50 EST


On 09/14/2015 10:10 AM, Peter Zijlstra wrote:
> On Fri, Sep 11, 2015 at 02:37:38PM -0400, Waiman Long wrote:
> > In an overcommitted guest where some vCPUs have to be halted to make
> > forward progress in other areas, it is highly likely that a vCPU later
> > in the spinlock queue will be spinning while the ones earlier in the
> > queue would have been halted. The spinning in the later vCPUs is then
> > just a waste of precious CPU cycles because they are not going to
> > get the lock soon, as the earlier ones have to be woken up and take
> > their turn to get the lock.
> >
> > This patch implements an adaptive spinning mechanism where the vCPU
> > will call pv_wait() if the following conditions are true:
> >
> > 1) the vCPU has not been halted before;
> > 2) the previous vCPU is not running.
>
> Why 1? For the mutex adaptive stuff we only care about the lock holder
> running, right?

The check that the vCPU has not been halted before (condition 1) was there because of the kick-ahead patch: I didn't want a recently kicked vCPU near the head of the queue to go back to sleep too early. However, without kick-ahead, a woken-up vCPU should now be at the queue head, so we can indeed remove that check and simplify the logic.
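
With that check gone, the remaining wait-early test reduces to looking at the predecessor's state. Just to illustrate the shape of it, here is a minimal sketch, assuming the pv_node state tracking (vcpu_running/vcpu_halted) from the earlier patches in the series; the field and helper names are illustrative rather than the exact code in the posted patch:

/*
 * Illustrative sketch only -- field and helper names are assumptions,
 * not the exact code from the posted patch.
 */
enum vcpu_state {
	vcpu_running = 0,
	vcpu_halted,
};

struct pv_node {
	/* ... MCS queue node fields ... */
	int	cpu;
	u8	state;		/* vcpu_running or vcpu_halted */
};

/*
 * With the "has not been halted before" test dropped, the only reason
 * left for a queued vCPU to stop spinning early is that its predecessor
 * is not running: the lock cannot reach us soon if the vCPUs ahead of
 * us are halted.
 */
static inline bool pv_wait_early(struct pv_node *prev)
{
	return READ_ONCE(prev->state) != vcpu_running;
}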

BTW, the queue head vCPU at pv_wait_head_and_lock() doesn't wait early; it will spin for the full threshold, as there is no way for it to figure out whether the lock holder is running or not.
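
To make the contrast concrete, the head-side loop roughly has the following shape; the helper names (trylock_clear_pending, pv_wait, SPIN_THRESHOLD, _Q_SLOW_VAL) stand in for the real pvqspinlock helpers here, so treat this as a sketch rather than the exact patch code:

	/*
	 * Illustrative sketch only, not the exact patch code.
	 */
	for (;;) {
		/*
		 * The head has no predecessor queue node whose state it
		 * could consult, so it spins on the lock word for the
		 * full threshold...
		 */
		for (loop = SPIN_THRESHOLD; loop; loop--) {
			if (trylock_clear_pending(lock))
				goto gotlock;	/* got the lock, done */
			cpu_relax();
		}
		/* ...and only then halts until the lock holder kicks it. */
		pv_wait(&lock->locked, _Q_SLOW_VAL);
	}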

Cheers,
Longman