Re: [PATCH 0/1] KVM: x86/vPMU: Speed up vmexit for AMD Zen 4 CPUs

From: Jim Mattson
Date: Thu Nov 09 2023 - 19:18:00 EST


On Thu, Nov 9, 2023 at 3:42 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> On Thu, Nov 09, 2023, Jim Mattson wrote:
> > On Thu, Nov 9, 2023 at 10:24 AM Konstantin Khorenko
> > <khorenko@xxxxxxxxxxxxx> wrote:
> > >
> > > We have detected a significant performance drop in our atomic test, which
> > > checks the rate of CPUID instructions inside an L1 VM on an AMD node.
> > >
> > > Investigation led to two mainstream patches which introduced extra event
> > > accounting:
> > >
> > > 018d70ffcfec ("KVM: x86: Update vPMCs when retiring branch instructions")
> > > 9cd803d496e7 ("KVM: x86: Update vPMCs when retiring instructions")
> > >
> > > On an AMD Zen 3 CPU that resulted in an immediate 43% drop in the CPUID
> > > rate.
> > >
> > > With the latest mainstream kernel the performance difference is much
> > > smaller, but still quite noticeable: 13.4%, and it shows up on AMD CPUs only.
> > >
> > > Looks like iteration over all PMCs in kvm_pmu_trigger_event() is cheap
> > > on Intel and expensive on AMD CPUs.
> > >
> > > So the idea behind this patch is to skip iterating over PMCs entirely when
> > > the PMU is disabled for the VM, or when the PMU is enabled but there are no
> > > active PMCs at all.
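(As an aside, for readers following along: the patch amounts to an early out
along these lines; the check below is purely illustrative, not the actual
patch.)

void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel)
{
        struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);

        /* Nothing to do if the vPMU is disabled or has no valid counters. */
        if (!pmu->version ||
            bitmap_empty(pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX))
                return;

        /* ... existing iteration over all PMCs ... */
}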
> >
> > A better solution may be to maintain two bitmaps of general purpose
> > counters that need to be incremented, one for instructions retired and
> > one for branch instructions retired. Set or clear these bits whenever
> > the PerfEvtSelN MSRs are written. I think I would keep the PGC bits
> > separate, on those microarchitectures that support PGC. Then,
> > kvm_pmu_trigger_event() need only consult the appropriate bitmap (or
> > the logical and of that bitmap with PGC). In most cases, the value
> > will be zero, and the function can simply return.
> >
> > This would work even for AMD microarchitectures that don't support PGC.
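To make that concrete, roughly what I have in mind (a sketch only; the
instr_retired_pmcs/branches_retired_pmcs bitmaps and the eventsel_match_*()
helpers below don't exist, they're placeholders for whatever identifies a
counter as counting retired instructions or retired branches):

static void pmu_update_trigger_bitmaps(struct kvm_pmu *pmu, struct kvm_pmc *pmc)
{
        bool enabled = pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE;

        /* Recompute on every write to the counter's PerfEvtSelN MSR. */
        __assign_bit(pmc->idx, pmu->instr_retired_pmcs,
                     enabled && eventsel_match_instructions_retired(pmc));
        __assign_bit(pmc->idx, pmu->branches_retired_pmcs,
                     enabled && eventsel_match_branches_retired(pmc));
}

kvm_pmu_trigger_event() would then AND the relevant bitmap with global_ctrl
on parts that have PGC, and return immediately when the result is zero, which
it will be in the common case.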
>
> Yeah. There's plenty of lower-hanging fruit to be picked though, most of which
> won't conflict with using dedicated per-event bitmaps, or at worst is trivial
> to resolve.
>
> 1. Don't call into perf to get the eventsel (which generates an indirect call)
>    on every invocation, let alone every iteration.
>
> 2. Avoid getting the CPL when it's irrelevant.
>
> 3. Check the eventsel before querying the event filter.
>
> 4. Mask out PMCs that aren't globally enabled from the get-go (masking out
>    PMCs based on eventsel would essentially be the same as per-event bitmaps).

The code below only looks at PGC. Even on CPUs that support PGC, some
PMU clients still use the enable bits in the PerfEvtSelN. Linux perf,
for instance, can't seem to make up its mind whether to use PGC or
PerfEvtSelN.EN.
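That is, something along these lines would be needed on top of the
global_ctrl masking (the helper name is just a strawman):

/* Strawman: honor PerfEvtSelN.EN even on parts that support PGC. */
static bool pmc_is_locally_enabled(struct kvm_pmc *pmc)
{
        /* Fixed counters are gated by fixed_ctr_ctrl, handled elsewhere. */
        return !pmc_is_gp(pmc) ||
               (pmc->eventsel & ARCH_PERFMON_EVENTSEL_ENABLE);
}

with the loop below checking it alongside pmc_event_is_allowed().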

> I'm definitely not opposed to per-event bitmaps, but it'd be nice to avoid them,
> e.g. if we can eke out 99% of the performance just by doing a few obvious
> optimizations.
>
> This is the end result of what I'm testing and will (hopefully) post shortly:
>
> static inline bool pmc_is_eventsel_match(struct kvm_pmc *pmc, u64 eventsel)
> {
>         return !((pmc->eventsel ^ eventsel) & AMD64_RAW_EVENT_MASK_NB);
> }

The top nybble of AMD's 3-nybble event select collides with Intel's
IN_TX and IN_TXCP bits. I think we can assert that the vCPU can't be
in a transaction if KVM is emulating an instruction, but this probably
merits a comment. The function name should also be more descriptive,
so that it doesn't get misused out of context. :)
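Perhaps something along these lines (the name is just a suggestion):

/*
 * The mask also covers bits 35:32, which on Intel are the IN_TX (bit 32)
 * and IN_TXCP (bit 33) flags rather than event select bits.  That's OK
 * because the vCPU cannot be in a transaction while KVM is emulating an
 * instruction, so a counter that filters on IN_TX shouldn't count the
 * emulated instruction anyway.
 */
static inline bool pmc_eventsel_match_for_emulated_insn(struct kvm_pmc *pmc,
                                                        u64 eventsel)
{
        return !((pmc->eventsel ^ eventsel) & AMD64_RAW_EVENT_MASK_NB);
}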

> static inline bool cpl_is_matched(struct kvm_pmc *pmc)
> {
>         bool select_os, select_user;
>         u64 config;
>
>         if (pmc_is_gp(pmc)) {
>                 config = pmc->eventsel;
>                 select_os = config & ARCH_PERFMON_EVENTSEL_OS;
>                 select_user = config & ARCH_PERFMON_EVENTSEL_USR;
>         } else {
>                 config = fixed_ctrl_field(pmc_to_pmu(pmc)->fixed_ctr_ctrl,
>                                           pmc->idx - KVM_FIXED_PMC_BASE_IDX);
>                 select_os = config & 0x1;
>                 select_user = config & 0x2;
>         }
>
>         /*
>          * Skip the CPL lookup, which isn't free on Intel, if the result will
>          * be the same regardless of the CPL.
>          */
>         if (select_os == select_user)
>                 return select_os;
>
>         return (static_call(kvm_x86_get_cpl)(pmc->vcpu) == 0) ? select_os : select_user;
> }
>
> void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel)
> {
>         DECLARE_BITMAP(bitmap, X86_PMC_IDX_MAX);
>         struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
>         struct kvm_pmc *pmc;
>         int i;
>
>         BUILD_BUG_ON(sizeof(pmu->global_ctrl) * BITS_PER_BYTE != X86_PMC_IDX_MAX);
>
>         if (!kvm_pmu_has_perf_global_ctrl(pmu))
>                 bitmap_copy(bitmap, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX);
>         else if (!bitmap_and(bitmap, pmu->all_valid_pmc_idx,
>                              (unsigned long *)&pmu->global_ctrl, X86_PMC_IDX_MAX))
>                 return;
>
>         kvm_for_each_pmc(pmu, pmc, i, bitmap) {
>                 /* Ignore checks for edge detect, pin control, invert and CMASK bits */
>                 if (!pmc_is_eventsel_match(pmc, eventsel) ||
>                     !pmc_event_is_allowed(pmc) || !cpl_is_matched(pmc))
>                         continue;
>
>                 kvm_pmu_incr_counter(pmc);
>         }
> }