Re: [PATCH] KVM: VMX: replace move_msr_up with swap macro

From: Gustavo A. R. Silva
Date: Mon Nov 06 2017 - 08:14:37 EST


Hi Paolo,

Quoting Paolo Bonzini <pbonzini@xxxxxxxxxx>:

----- Original Message -----
From: "Gustavo A. R. Silva" <garsilva@xxxxxxxxxxxxxx>
To: "Paolo Bonzini" <pbonzini@xxxxxxxxxx>, "Radim Krčmář" <rkrcmar@xxxxxxxxxx>, "Thomas Gleixner"
<tglx@xxxxxxxxxxxxx>, "Ingo Molnar" <mingo@xxxxxxxxxx>, "H. Peter Anvin" <hpa@xxxxxxxxx>, x86@xxxxxxxxxx
Cc: kvm@xxxxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, "Gustavo A. R. Silva" <garsilva@xxxxxxxxxxxxxx>
Sent: Friday, November 3, 2017 11:58:19 PM
Subject: [PATCH] KVM: VMX: replace move_msr_up with swap macro

Function move_msr_up is used to _manually_ swap MSR entries in the MSR array.
This function can be removed and its call sites replaced with the swap macro instead.

This code was detected with the help of Coccinelle.

I think move_msr_up should instead be changed into a function like

void mark_msr_for_save(struct vcpu_vmx *vmx, int index)
{
	swap(vmx->guest_msrs[index], vmx->guest_msrs[vmx->save_nmsrs]);
	vmx->save_nmsrs++;
}

Using swap is useful, but it also hides exactly what's going on
(in addition, using ++ inside a macro argument might be calling for
trouble).


Thanks for your comments.

I'll work on v2 based on your feedback.

--
Gustavo A. R. Silva



Signed-off-by: Gustavo A. R. Silva <garsilva@xxxxxxxxxxxxxx>
---
The new lines are over 80 characters, but I think in this case that is
preferable to splitting them.

arch/x86/kvm/vmx.c | 24 ++++++------------------
1 file changed, 6 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index e6c8ffa..210e491 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -2544,18 +2544,6 @@ static bool vmx_invpcid_supported(void)
 	return cpu_has_vmx_invpcid() && enable_ept;
 }
 
-/*
- * Swap MSR entry in host/guest MSR entry array.
- */
-static void move_msr_up(struct vcpu_vmx *vmx, int from, int to)
-{
-	struct shared_msr_entry tmp;
-
-	tmp = vmx->guest_msrs[to];
-	vmx->guest_msrs[to] = vmx->guest_msrs[from];
-	vmx->guest_msrs[from] = tmp;
-}
-
 static void vmx_set_msr_bitmap(struct kvm_vcpu *vcpu)
 {
 	unsigned long *msr_bitmap;
@@ -2600,28 +2588,28 @@ static void setup_msrs(struct vcpu_vmx *vmx)
 	if (is_long_mode(&vmx->vcpu)) {
 		index = __find_msr_index(vmx, MSR_SYSCALL_MASK);
 		if (index >= 0)
-			move_msr_up(vmx, index, save_nmsrs++);
+			swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
 		index = __find_msr_index(vmx, MSR_LSTAR);
 		if (index >= 0)
-			move_msr_up(vmx, index, save_nmsrs++);
+			swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
 		index = __find_msr_index(vmx, MSR_CSTAR);
 		if (index >= 0)
-			move_msr_up(vmx, index, save_nmsrs++);
+			swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
 		index = __find_msr_index(vmx, MSR_TSC_AUX);
 		if (index >= 0 && guest_cpuid_has(&vmx->vcpu, X86_FEATURE_RDTSCP))
-			move_msr_up(vmx, index, save_nmsrs++);
+			swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
 		/*
 		 * MSR_STAR is only needed on long mode guests, and only
 		 * if efer.sce is enabled.
 		 */
 		index = __find_msr_index(vmx, MSR_STAR);
 		if ((index >= 0) && (vmx->vcpu.arch.efer & EFER_SCE))
-			move_msr_up(vmx, index, save_nmsrs++);
+			swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
 	}
 #endif
 	index = __find_msr_index(vmx, MSR_EFER);
 	if (index >= 0 && update_transition_efer(vmx, index))
-		move_msr_up(vmx, index, save_nmsrs++);
+		swap(vmx->guest_msrs[index], vmx->guest_msrs[save_nmsrs++]);
 
 	vmx->save_nmsrs = save_nmsrs;



--
2.7.4