Re: [PATCH 2/3] KVM: x86: Introduce definitions to support static calls for kvm_pmu_ops

From: Like Xu
Date: Mon Nov 08 2021 - 04:31:30 EST


On 5/11/2021 11:48 pm, Sean Christopherson wrote:
> On Wed, Nov 03, 2021, Like Xu wrote:
>> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
>> index 0db1887137d9..b6f08c719125 100644
>> --- a/arch/x86/kvm/pmu.c
>> +++ b/arch/x86/kvm/pmu.c
>> @@ -50,6 +50,13 @@
>>  struct kvm_pmu_ops kvm_pmu_ops __read_mostly;
>>  EXPORT_SYMBOL_GPL(kvm_pmu_ops);
>> +#define KVM_X86_PMU_OP(func)					\
>> +	DEFINE_STATIC_CALL_NULL(kvm_x86_pmu_##func,		\
>> +				*(((struct kvm_pmu_ops *)0)->func));
>> +#define KVM_X86_PMU_OP_NULL KVM_X86_PMU_OP

> More of a question for the existing code, what's the point of KVM_X86_OP_NULL?

The comment says:

	* KVM_X86_OP_NULL() can leave a NULL definition for the
	* case where there is no definition or a function name that
	* doesn't match the typical naming convention is supplied.

Does it help?
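
To make the existing pattern concrete, here is a rough sketch (abbreviated and only illustrative, not a literal copy of the headers): the op list is written once, and each includer defines the two macros before pulling it in. Since every includer today also does "#define KVM_X86_OP_NULL KVM_X86_OP", the _NULL variant expands to exactly the same code and at best serves as documentation:

	/* asm/kvm-x86-ops.h: one entry per op, no semicolons; the header
	 * #undefs the macros at its end so each includer starts clean. */
	KVM_X86_OP(vcpu_load)			/* follows the vmx_/svm_ naming convention */
	KVM_X86_OP_NULL(get_cs_db_l_bits)	/* "may be left NULL", but nothing enforces it */

	/* arch/x86/kvm/x86.c: one of the includers, generating the call sites. */
	#define KVM_X86_OP(func) \
		DEFINE_STATIC_CALL_NULL(kvm_x86_##func, \
					*(((struct kvm_x86_ops *)0)->func));
	#define KVM_X86_OP_NULL KVM_X86_OP	/* pure alias, both expand identically */
	#include <asm/kvm-x86-ops.h>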

> AFAICT, it always resolves to KVM_X86_OP. Unless there's some magic I'm missing,
> I vote we remove KVM_X86_OP_NULL and then not introduce KVM_X86_PMU_OP_NULL.

This transition will not be included in the next version; I'll leave it up to you.

> And I'm pretty sure it's useless, e.g. get_cs_db_l_bits is defined with the NULL
> variant, but it's never NULL and its calls aren't guarded with anything. And if
> KVM_X86_OP_NULL is intended to aid in documenting behavior, it's doing a pretty
> miserable job of that :-)
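
Agreed that the annotation would only mean something if the call sites were guarded. For anyone following along, the two forms look roughly like this (fragments only; the second op name is a made-up optional op, just for illustration):

	/* Unguarded: the handler is assumed to never be NULL. This is how
	 * get_cs_db_l_bits is invoked today, so its _NULL tag buys nothing. */
	static_call(kvm_x86_get_cs_db_l_bits)(vcpu, &cs_db, &cs_l);

	/* Guarded: the call site tolerates a NULL handler, and the call is
	 * patched out when the op was never filled in. Only ops used like
	 * this would justify keeping a _NULL variant around. */
	static_call_cond(kvm_x86_hypothetical_optional_op)(vcpu);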

>> +#include <asm/kvm-x86-pmu-ops.h>
>> +EXPORT_STATIC_CALL_GPL(kvm_x86_pmu_is_valid_msr);

> I'll double down on my nVMX suggestion so that this export can be avoided.

Fine by me.


>>  static void kvm_pmi_trigger_fn(struct irq_work *irq_work)
>>  {
>>  	struct kvm_pmu *pmu = container_of(irq_work, struct kvm_pmu, irq_work);
>> diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
>> index b2fe135d395a..e5550d4acf14 100644
>> --- a/arch/x86/kvm/pmu.h
>> +++ b/arch/x86/kvm/pmu.h
>> @@ -3,6 +3,8 @@
>>  #define __KVM_X86_PMU_H
>>  #include <linux/nospec.h>
>> +#include <linux/static_call_types.h>
>> +#include <linux/static_call.h>
>>  #define vcpu_to_pmu(vcpu) (&(vcpu)->arch.pmu)
>>  #define pmu_to_vcpu(pmu) (container_of((pmu), struct kvm_vcpu, arch.pmu))
>> @@ -45,6 +47,19 @@ struct kvm_pmu_ops {
>>  	void (*cleanup)(struct kvm_vcpu *vcpu);
>>  };
>> +#define KVM_X86_PMU_OP(func) \
>> +	DECLARE_STATIC_CALL(kvm_x86_pmu_##func, *(((struct kvm_pmu_ops *)0)->func))
>> +#define KVM_X86_PMU_OP_NULL KVM_X86_PMU_OP
>> +#include <asm/kvm-x86-pmu-ops.h>
>> +
>> +static inline void kvm_pmu_ops_static_call_update(void)
>> +{
>> +#define KVM_X86_PMU_OP(func) \
>> +	static_call_update(kvm_x86_pmu_##func, kvm_pmu_ops.func);
>> +#define KVM_X86_PMU_OP_NULL KVM_X86_PMU_OP
>> +#include <asm/kvm-x86-pmu-ops.h>
>> +}

> As alluded to in patch 01, I'd prefer these go in kvm_ops_static_call_update()
> to keep the static call magic somewhat contained.

Thanks, applied.
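
For the record, folding it in would look roughly like the following sketch (the exact shape and macro spellings may still change in the next version):

	/* arch/x86/kvm/x86.c */
	static inline void kvm_ops_static_call_update(void)
	{
	#define KVM_X86_OP(func) \
		static_call_update(kvm_x86_##func, kvm_x86_ops.func);
	#define KVM_X86_OP_NULL KVM_X86_OP
	#include <asm/kvm-x86-ops.h>

	/* The PMU static calls now get patched in the same place; as with
	 * kvm-x86-ops.h, the PMU ops header is expected to #undef its macros. */
	#define KVM_X86_PMU_OP(func) \
		static_call_update(kvm_x86_pmu_##func, kvm_pmu_ops.func);
	#define KVM_X86_PMU_OP_NULL KVM_X86_PMU_OP
	#include <asm/kvm-x86-pmu-ops.h>
	}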


>> +
>>  static inline u64 pmc_bitmask(struct kvm_pmc *pmc)
>>  {
>>  	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
>> --
>> 2.33.0