Re: [PATCH v8 07/15] KVM: pfncache: include page offset in uhva and use it consistently

From: Xu Yilun
Date: Wed Nov 22 2023 - 03:56:27 EST


On Tue, Nov 21, 2023 at 06:02:15PM +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@xxxxxxxxxx>
>
> Currently the pfncache page offset is sometimes determined using the gpa
> and sometimes the khva, whilst the uhva is always page-aligned. After a
> subsequent patch is applied the gpa will not always be valid so adjust
> the code to include the page offset in the uhva and use it consistently
> as the source of truth.
>
> Also, where a page-aligned address is required, use PAGE_ALIGN_DOWN()
> for clarity.
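
Just to check my own reading of the new scheme before commenting below: the
page offset now lives in the uhva, and the aligned addresses are derived from
it with PAGE_ALIGN_DOWN(). A standalone userspace sketch of that bookkeeping
(the macros are re-defined here purely for illustration, assuming 4K pages;
this is not the kernel code itself):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE		4096UL
#define offset_in_page(p)	((uintptr_t)(p) & (PAGE_SIZE - 1))
#define PAGE_ALIGN_DOWN(addr)	((addr) & ~(PAGE_SIZE - 1))

int main(void)
{
	/* Hypothetical cached userspace HVA that now carries the page offset. */
	uintptr_t uhva = 0x7f0000001234UL;

	/* Page-aligned base, the way old_khva is now derived. */
	uintptr_t base = PAGE_ALIGN_DOWN(uhva);

	/* The offset is taken from the uhva itself, no longer from the gpa. */
	uintptr_t off = offset_in_page(uhva);

	printf("base=%#lx off=%#lx rebuilt=%#lx\n",
	       (unsigned long)base, (unsigned long)off,
	       (unsigned long)(base + off));	/* rebuilt == uhva */
	return 0;
}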
>
> Signed-off-by: Paul Durrant <pdurrant@xxxxxxxxxx>
> ---
> Cc: Sean Christopherson <seanjc@xxxxxxxxxx>
> Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> Cc: David Woodhouse <dwmw2@xxxxxxxxxxxxx>
>
> v8:
> - New in this version.
> ---
> virt/kvm/pfncache.c | 27 +++++++++++++++++++--------
> 1 file changed, 19 insertions(+), 8 deletions(-)
>
> diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
> index 0eeb034d0674..c545f6246501 100644
> --- a/virt/kvm/pfncache.c
> +++ b/virt/kvm/pfncache.c
> @@ -48,10 +48,10 @@ bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc, unsigned long len)
> if (!gpc->active)
> return false;
>
> - if (offset_in_page(gpc->gpa) + len > PAGE_SIZE)
> + if (gpc->generation != slots->generation || kvm_is_error_hva(gpc->uhva))
> return false;
>
> - if (gpc->generation != slots->generation || kvm_is_error_hva(gpc->uhva))
> + if (offset_in_page(gpc->uhva) + len > PAGE_SIZE)
> return false;
>
> if (!gpc->valid)
> @@ -119,7 +119,7 @@ static inline bool mmu_notifier_retry_cache(struct kvm *kvm, unsigned long mmu_s
> static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
> {
> /* Note, the new page offset may be different than the old! */
> - void *old_khva = gpc->khva - offset_in_page(gpc->khva);
> + void *old_khva = (void *)PAGE_ALIGN_DOWN((uintptr_t)gpc->khva);
> kvm_pfn_t new_pfn = KVM_PFN_ERR_FAULT;
> void *new_khva = NULL;
> unsigned long mmu_seq;
> @@ -192,7 +192,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
>
> gpc->valid = true;
> gpc->pfn = new_pfn;
> - gpc->khva = new_khva + offset_in_page(gpc->gpa);
> + gpc->khva = new_khva + offset_in_page(gpc->uhva);
>
> /*
> * Put the reference to the _new_ pfn. The pfn is now tracked by the
> @@ -215,8 +215,8 @@ static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa,
> struct kvm_memslots *slots = kvm_memslots(gpc->kvm);
> unsigned long page_offset = offset_in_page(gpa);
> bool unmap_old = false;
> - unsigned long old_uhva;
> kvm_pfn_t old_pfn;
> + bool hva_change = false;
> void *old_khva;
> int ret;
>
> @@ -242,8 +242,7 @@ static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa,
> }
>
> old_pfn = gpc->pfn;
> - old_khva = gpc->khva - offset_in_page(gpc->khva);
> - old_uhva = gpc->uhva;
> + old_khva = (void *)PAGE_ALIGN_DOWN((uintptr_t)gpc->khva);
>
> /* If the userspace HVA is invalid, refresh that first */
> if (gpc->gpa != gpa || gpc->generation != slots->generation ||
> @@ -259,13 +258,25 @@ static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa,
> ret = -EFAULT;
> goto out;
> }
> +
> + hva_change = true;
> + } else {
> + /*
> + * No need to do any re-mapping if the only thing that has
> + * changed is the page offset. Just page align it to allow the
> + * new offset to be added in.

I don't understand how the uhva (or its page offset) could change when
neither the gpa nor the slot has changed. Maybe I'm just missing some Xen
background, but in a later patch you said the uhva would never change...
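
To make my question concrete, here is how I read that else branch, as a
standalone sketch with made-up values (PAGE_SIZE and the helper macros are
re-defined for illustration; the real struct and flow are simplified away):

#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE		4096UL
#define offset_in_page(x)	((uintptr_t)(x) & (PAGE_SIZE - 1))
#define PAGE_ALIGN_DOWN(x)	((x) & ~(PAGE_SIZE - 1))

int main(void)
{
	uint64_t  gpa  = 0x40001234UL;		/* unchanged gpa */
	uintptr_t uhva = 0x7f0000001234UL;	/* cached uhva, offset included */

	uintptr_t page_offset = offset_in_page(gpa);

	/* The else path: align the cached uhva down, then add the offset back. */
	uintptr_t new_uhva = PAGE_ALIGN_DOWN(uhva) + page_offset;

	/* With the gpa unchanged this looks like a no-op, hence my question. */
	assert(new_uhva == uhva);
	return 0;
}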

Thanks,
Yilun

> + */
> + gpc->uhva = PAGE_ALIGN_DOWN(gpc->uhva);
> }
>
> + /* Note: the offset must be correct before calling hva_to_pfn_retry() */
> + gpc->uhva += page_offset;
> +
> /*
> * If the userspace HVA changed or the PFN was already invalid,
> * drop the lock and do the HVA to PFN lookup again.
> */
> - if (!gpc->valid || old_uhva != gpc->uhva) {
> + if (!gpc->valid || hva_change) {
> ret = hva_to_pfn_retry(gpc);
> } else {
> /*
> --
> 2.39.2
>
>