Re: [PATCH 3/4] KVM: X86: Use smp_rmb() to pair with smp_wmb() in mmu_try_to_unsync_pages()

From: Paolo Bonzini
Date: Thu Oct 21 2021 - 13:45:46 EST


On 19/10/21 13:01, Lai Jiangshan wrote:
From: Lai Jiangshan <laijs@xxxxxxxxxxxxxxxxx>

Commit 578e1c4db2213 ("kvm: x86: Avoid taking MMU lock in
kvm_mmu_sync_roots if no sync is needed") added smp_wmb() in
mmu_try_to_unsync_pages(), but no corresponding smp_load_acquire() is
used on the load of SPTE.W, nor can one be added, because that load is
performed by the CPU itself during pagetable walking.

This patch changes the code to use smp_rmb() instead. Functionally it
changes nothing beyond the comments: smp_rmb() is a NOP on x86, and
even its compiler barrier() is not strictly required, because the load
of SPTE.W happens before the VMEXIT.

I think that even the implicit loads done during pagetable walking obey read-read ordering on x86, but this is clearer, and it is necessary for patch 4.

Paolo