Re: [PATCH v1 1/5] KVM: arm64: Enable ring-based dirty memory tracking

From: Paolo Bonzini
Date: Thu Sep 01 2022 - 20:20:00 EST


On 8/30/22 16:42, Peter Xu wrote:
> Marc,
> 
> I thought we wouldn't hit this as long as we properly take care of other
> orderings of (a) gfn push, and (b) gfn collect, but after a second thought
> I think it's indeed logically possible that with a reversed ordering here
> we can be reading some garbage gfn before (a) happens but also read the
> valid flag after (b).
> 
> It seems we must have all the barriers correctly applied always. If that's
> correct, do you perhaps mean something like this to just add the last piece
> of barrier?

Okay, so I thought about it some more and it's quite tricky.

Strictly speaking, the synchronization is just between userspace and kernel. The fact that the actual producer of dirty pages is in another CPU is a red herring, because reset only cares about harvested pages.

In other words, the dirty page ring is essentially two ring buffers in one and we only care about the "harvested ring", not the "produced ring".
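
(For reference, the entry layout and flags from include/uapi/linux/kvm.h,
quoted from memory so double-check against the tree; an entry with only
DIRTY set is still in the "produced ring", while one with RESET set has
been harvested by userspace and is waiting to be reset:)

struct kvm_dirty_gfn {
	__u32 flags;
	__u32 slot;
	__u64 offset;
};

#define KVM_DIRTY_GFN_F_DIRTY           BIT(0)
#define KVM_DIRTY_GFN_F_RESET           BIT(1)
#define KVM_DIRTY_GFN_F_MASK            0x3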

On the other hand, it may happen that userspace has set more RESET flags while the ioctl is ongoing:


    CPU0 (userspace)        CPU1 (kernel ioctl)     CPU2 (vCPU producer)
                                                    fill gfn0
                                                    store-rel flags for gfn0
                                                    fill gfn1
                                                    store-rel flags for gfn1
    load-acq flags for gfn0
    set RESET for gfn0
    load-acq flags for gfn1
    set RESET for gfn1
    do ioctl! ----------->
                            ioctl(RESET_RINGS)
                                                    fill gfn2
                                                    store-rel flags for gfn2
    load-acq flags for gfn2
    set RESET for gfn2
                            process gfn0
                            process gfn1
                            process gfn2
    do ioctl!
    etc.

The three load-acquires in CPU0 synchronize with the three store-releases in CPU2, but CPU0 and CPU1 are only synchronized up to gfn1 (the point at which the first ioctl was issued), so CPU1 may miss gfn2's fields other than flags.

The kernel must be able to cope with invalid values of the fields, and userspace will invoke the ioctl once more. However, once the RESET flag is cleared on gfn2 it is lost forever; therefore, in the above scenario, CPU1 must read the correct value of gfn2's fields before clearing the flag.

Therefore RESET must be set with a store-release, which will synchronize with a load-acquire in CPU1 as you suggested.
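
To make the whole release/acquire chain concrete, here is a standalone
userspace litmus test of the three-CPU scenario above, using my own names
and plain C11 atomics rather than the actual KVM/QEMU code (the producer
plays CPU2, the harvester CPU0, the resetter CPU1):

#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define F_DIRTY 1u
#define F_RESET 2u

struct entry {
	_Atomic uint32_t flags;
	uint32_t slot;          /* plain fields, published via flags */
	uint64_t offset;
};

static struct entry e;

static void *producer(void *arg)        /* CPU2: fill gfn, store-rel flags */
{
	e.slot = 1;
	e.offset = 0xabcd;
	atomic_store_explicit(&e.flags, F_DIRTY, memory_order_release);
	return NULL;
}

static void *harvester(void *arg)       /* CPU0: load-acq flags, set RESET */
{
	while (!(atomic_load_explicit(&e.flags, memory_order_acquire) & F_DIRTY))
		;
	/* The release here is the point of the whole discussion: it hands
	 * the producer's slot/offset over to whoever acquires RESET. */
	atomic_store_explicit(&e.flags, F_DIRTY | F_RESET, memory_order_release);
	return NULL;
}

static void *resetter(void *arg)        /* CPU1: the RESET_RINGS side */
{
	/* Without the acquire, RESET could be observed while slot/offset
	 * are still garbage, which is exactly the gfn2 case above. */
	while (!(atomic_load_explicit(&e.flags, memory_order_acquire) & F_RESET))
		;
	printf("slot=%u offset=%#llx\n", e.slot, (unsigned long long)e.offset);
	return NULL;
}

int main(void)
{
	pthread_t t[3];

	pthread_create(&t[0], NULL, producer, NULL);
	pthread_create(&t[1], NULL, harvester, NULL);
	pthread_create(&t[2], NULL, resetter, NULL);
	for (int i = 0; i < 3; i++)
		pthread_join(t[i], NULL);
	return 0;
}

Dropping either the release in harvester() or the acquire in resetter()
reintroduces the race, since the compiler and weakly ordered architectures
such as arm64 are then free to reorder the field accesses around the flag.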

Paolo

> ===8<===
> diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
> index f4c2a6eb1666..ea620bfb012d 100644
> --- a/virt/kvm/dirty_ring.c
> +++ b/virt/kvm/dirty_ring.c
> @@ -84,7 +84,7 @@ static inline void kvm_dirty_gfn_set_dirtied(struct kvm_dirty_gfn *gfn)
>  
>  static inline bool kvm_dirty_gfn_harvested(struct kvm_dirty_gfn *gfn)
>  {
> -	return gfn->flags & KVM_DIRTY_GFN_F_RESET;
> +	return smp_load_acquire(&gfn->flags) & KVM_DIRTY_GFN_F_RESET;
>  }
>  
>  int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring)
> ===8<===

> Thanks,
> 
> -- 
> Peter Xu