Re: [RFC 09/33] KVM: x86: hyper-v: Introduce per-VTL vcpu helpers

From: Maxim Levitsky
Date: Tue Nov 28 2023 - 02:25:41 EST


On Wed, 2023-11-08 at 11:17 +0000, Nicolas Saenz Julienne wrote:
> Introduce two helper functions. The first one queries a vCPU's VTL
> level, the second one, given a struct kvm_vcpu and VTL pair, returns the
> corresponding 'sibling' struct kvm_vcpu at the right VTL.
>
> We keep track of each VTL's state by having a distinct struct kvm_vcpu
> for each level. VTL-vCPUs that belong to the same guest CPU share the
> same physical APIC ID, but belong to different APIC groups, where the
> APIC group represents the vCPU's VTL.
>
> Signed-off-by: Nicolas Saenz Julienne <nsaenz@xxxxxxxxxx>
> ---
> arch/x86/kvm/hyperv.h | 18 ++++++++++++++++++
> 1 file changed, 18 insertions(+)
>
> diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
> index 2bfed69ba0db..5433107e7cc8 100644
> --- a/arch/x86/kvm/hyperv.h
> +++ b/arch/x86/kvm/hyperv.h
> @@ -23,6 +23,7 @@
>
> #include <linux/kvm_host.h>
> #include "x86.h"
> +#include "lapic.h"
>
> /* "Hv#1" signature */
> #define HYPERV_CPUID_SIGNATURE_EAX 0x31237648
> @@ -83,6 +84,23 @@ static inline struct kvm_hv_syndbg *to_hv_syndbg(struct kvm_vcpu *vcpu)
> return &vcpu->kvm->arch.hyperv.hv_syndbg;
> }
>
> +static inline struct kvm_vcpu *kvm_hv_get_vtl_vcpu(struct kvm_vcpu *vcpu, int vtl)
> +{
> + struct kvm *kvm = vcpu->kvm;
> + u32 target_id = kvm_apic_id(vcpu);
> +
> + kvm_apic_id_set_group(kvm, vtl, &target_id);
> + if (vcpu->vcpu_id == target_id)
> + return vcpu;
> +
> + return kvm_get_vcpu_by_id(kvm, target_id);
> +}
> +
> +static inline u8 kvm_hv_get_active_vtl(struct kvm_vcpu *vcpu)
> +{
> + return kvm_apic_group(vcpu);
> +}
> +
> static inline u32 kvm_hv_get_vpindex(struct kvm_vcpu *vcpu)
> {
> struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);


Ideally I'd prefer the kernel to not know the VTL mapping at all, but rather
that each vCPU be assigned to an APIC group / namespace and have its VTL assigned that way.

The kernel would then work like this:

* Regular APIC IPI -> send it to the APIC namespace to which the sender belongs; or, if we go with the idea of using
  multiple VMs, this works unmodified.

* Hardware interrupt -> send it to the vCPU/VM that userspace configured as the target via the GSI mappings.

* Hyper-V IPI -> if the target VTL is the same as the sending vCPU's assigned VTL -> handle it the same as a regular IPI
  -> otherwise exit to userspace (a rough sketch of this split follows below).

* Page fault -> if caused by a violation of the current VTL's memory protections,
  exit to userspace; userspace can then queue the SynIC message and wake up the higher VTL.
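
For the Hyper-V IPI case, a very rough sketch of what I have in mind, built on top
of kvm_hv_get_active_vtl() from this patch (kvm_hv_handle_same_vtl_ipi() and the
exit wiring are made-up names; the real code would have to follow the existing
hypercall conventions):

/*
 * Sketch only, not against any real tree: decide whether a Hyper-V IPI
 * stays in the kernel or goes to userspace, without the kernel knowing
 * how VTLs map to vCPUs.
 */
static int kvm_hv_vtl_ipi(struct kvm_vcpu *vcpu, u64 ipi_arg, u8 target_vtl)
{
	/* Same VTL as the sender: nothing VTL-specific, handle in kernel. */
	if (target_vtl == kvm_hv_get_active_vtl(vcpu))
		return kvm_hv_handle_same_vtl_ipi(vcpu, ipi_arg); /* made-up name */

	/*
	 * Cross-VTL IPI: let userspace queue the SynIC message and wake up
	 * the target VTL's sibling vCPU.
	 */
	vcpu->run->exit_reason = KVM_EXIT_HYPERV; /* or a dedicated VTL exit reason */
	return 0; /* 0 == return to userspace in KVM's run loop convention */
}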


Best regards,
Maxim Levitsky