Re: [PATCH v2 11/12] KVM: selftests / xen: don't explicitly set the vcpu_info address

From: Paul Durrant
Date: Mon Sep 18 2023 - 11:34:50 EST


On 18/09/2023 14:36, David Woodhouse wrote:
> On Mon, 2023-09-18 at 14:26 +0100, Paul Durrant wrote:
>> On 18/09/2023 14:21, David Woodhouse wrote:
>>> On Mon, 2023-09-18 at 11:21 +0000, Paul Durrant wrote:
>>>> From: Paul Durrant <pdurrant@xxxxxxxxxx>
>>>>
>>>> If the vCPU id is set and the shared_info is mapped using HVA, then
>>>> we can infer that KVM has the ability to use a default vcpu_info
>>>> mapping. Hence we can stop setting the address of the vcpu_info
>>>> structure.
>>>
>>> Again that means we're not *testing* it any more when the test is run
>>> on newer kernels. Can we perhaps set it explicitly, after *half* the
>>> tests are done? Maybe to a *different* address than the default which
>>> is derived from the Xen vcpu_id? And check that the memcpy works right
>>> when we do?
>>>
>>
>> Ok. The VMM is currently responsible for that memcpy. Are you
>> suggesting we push that into KVM too?
>
> Ah OK.
>
> Hm, maybe we should?
>
> What happened before in the case where interrupts were being delivered
> and the vcpu_info address was changed?
>
> In Xen, I guess it's effectively atomic? Some locking will mean that
> the event channel is delivered to the vcpu_info either *before* the
> memcpy or *after* it, but never to the old address after the copy has
> been done, such that the event (well, the index of it) gets lost?
>
> In KVM before we did the automatic placement, it was the VMM's problem.
>
> If there are any interrupts set up for direct delivery, I suppose the
> VMM should have *removed* the vcpu_info mapping before doing the
> memcpy, then restored it at the new address? I may have to check qemu
> gets that right.
>
> Then again, it's a very hard race to trigger, given that a guest can
> only set the vcpu_info once. So it can move it from the shinfo to a
> separate address and attempt to trigger this race just that one time.
>
> But in the case where auto-placement has happened, and then the guest
> sets an explicit vcpu_info location... are we saying that the VMM must
> explicitly *unmap* the vcpu_info first, then memcpy, then set it to the
> new location? Or will we handle the memcpy in-kernel?


Well, if the VMM is using the default then it can't unmap it. But setting a vcpu_info *after* enabling any event channels would be a very odd thing for a guest to do, and IMO it gets to keep the pieces if it does so.

Paul