Re: [PATCH v1 1/2] KVM: arm64: determine memory type from VMA

From: Jason Gunthorpe
Date: Thu Oct 05 2023 - 13:10:33 EST


On Thu, Oct 05, 2023 at 05:15:37PM +0100, Catalin Marinas wrote:
> On Thu, Sep 07, 2023 at 11:14:58AM -0700, ankita@xxxxxxxxxx wrote:
> > From: Ankit Agrawal <ankita@xxxxxxxxxx>
> >
> > Currently KVM determines if a VMA is pointing at IO memory by checking
> > pfn_is_map_memory(). However, the MM already gives us a way to tell what
> > kind of memory it is by inspecting the VMA.
>
> Well, it doesn't. It tells us what attributes the user mapped that
> memory with, not whether it's I/O memory or standard RAM.

There is VM_IO, which is intended to be used for address space with
side effects.

And there is VM_PFNMAP, which is intended to be used for address space
without struct page (IO or not).

And finally we have the pgprot bits, which define the cacheability.

Do you have a definition of IO memory that those three things don't
cover?

I would propose that, for KVM's purposes, IO memory is anything marked
with VM_IO or a non-cacheable pgprot.

And "standard RAM" is defined by a cacheable pgprot. Linux never makes
something that is VM_IO cacheable.

?
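
Roughly, as a sketch of what I mean (the function name is made up,
mapping_type() is the helper from this patch rather than an existing
upstream API, and the exact MT_* list is open to discussion):

	static bool kvm_vma_is_io_memory(struct vm_area_struct *vma)
	{
		/* Address space with side effects is always IO memory */
		if (vma->vm_flags & VM_IO)
			return true;

		/* Otherwise go by the cacheability the user mapped with */
		switch (mapping_type(vma->vm_page_prot)) {
		case MT_DEVICE_nGnRnE:
		case MT_DEVICE_nGnRE:
		case MT_NORMAL_NC:
			return true;
		default:
			return false;
		}
	}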

> I would move the second patch to be the first, we could even merge that
> independently as it is about relaxing the stage 2 mapping to Normal NC.
> It would make it simpler I think to reason about the second patch which
> further relaxes the stage 2 mapping to Normal Cacheable under certain
> conditions.

Makes sense.

> > Unfortunately when FWB is not enabled, the kernel expects to naively do
> > cache management by flushing the memory using an address in the
> > kernel's map. This does not work in several of the newly allowed
> > cases such as dcache_clean_inval_poc(). Check whether the targeted pfn
> > and its mapping KVA is valid in case the FWB is absent before continuing.
>
> I would only allow cacheable stage 2 mappings if FWB is enabled.
> Otherwise we end up with a mismatch between the VMM mapping and whatever
> the guest may do.

Does it need to be stronger? If FWB is disabled and the cache flush
works, then what is the issue?
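
For reference, the flush that has to keep working in the non-FWB case
is roughly this (sketching the existing __clean_dcache_guest_page()
path from memory):

	/*
	 * Without FWB, KVM cleans the new mapping to PoC through the
	 * kernel alias of the page, so the pfn must have a valid KVA -
	 * which is what the check in this patch is guarding.
	 */
	void *va = page_address(pfn_to_page(pfn));

	dcache_clean_inval_poc((unsigned long)va, (unsigned long)va + size);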

> > +	/*
> > +	 * Figure out the memory type based on the user va mapping properties
> > +	 * Only MT_DEVICE_nGnRE and MT_DEVICE_nGnRnE will be set using
> > +	 * pgprot_device() and pgprot_noncached() respectively.
> > +	 */
> > +	if ((mapping_type(vma->vm_page_prot) == MT_DEVICE_nGnRE) ||
> > +	    (mapping_type(vma->vm_page_prot) == MT_DEVICE_nGnRnE) ||
> > +	    (mapping_type(vma->vm_page_prot) == MT_NORMAL_NC))
> > +		prot |= KVM_PGTABLE_PROT_DEVICE;
> > +	else if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC))
> > +		prot |= KVM_PGTABLE_PROT_X;
>
> Does this mean that we can end up with some I/O memory also mapped as
> executable?

Yes. We don't have cacheable memory with side effects in Linux, do we?

> Is there a use-case (e.g. using CXL memory as standard guest
> RAM, executable)?

Certainly.

> > +
> > 	/* Don't use the VMA after the unlock -- it may have vanished */
> > 	vma = NULL;
> >
> > @@ -1576,10 +1597,21 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> > 	if (exec_fault)
> > 		prot |= KVM_PGTABLE_PROT_X;
> >
> > -	if (device)
> > -		prot |= KVM_PGTABLE_PROT_DEVICE;
> > -	else if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC))
> > -		prot |= KVM_PGTABLE_PROT_X;
> > +	/*
> > +	 * When FWB is unsupported KVM needs to do cache flushes
> > +	 * (via dcache_clean_inval_poc()) of the underlying memory. This is
> > +	 * only possible if the memory is already mapped into the kernel map
> > +	 * at the usual spot.
> > +	 *
> > +	 * Validate that there is a struct page for the PFN which maps
> > +	 * to the KVA that the flushing code expects.
> > +	 */
> > +	if (!stage2_has_fwb(pgt) &&
> > +	    !(pfn_valid(pfn) &&
> > +	      page_to_virt(pfn_to_page(pfn)) == kvm_host_va(PFN_PHYS(pfn)))) {
> > +		ret = -EINVAL;
> > +		goto out_unlock;
> > +	}
>
> My preference would be to keep most of the current logic (including
> pfn_is_map_memory())

Why? I think pfn_is_map_memory() is actually not desired here; it
doesn't deal with modern memory hotplug or pgmap memory. Isn't
pfn_valid() more appropriate?
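
To spell out the difference as I understand it (a sketch of the arm64
semantics):

	if (pfn_is_map_memory(pfn)) {
		/*
		 * Only memblock RAM in the kernel linear map passes this;
		 * hotplugged ZONE_DEVICE/pgmap memory does not.
		 */
	}

	if (pfn_valid(pfn)) {
		/*
		 * A struct page exists at all - this also covers pgmap
		 * memory, and is what the cache flushing path needs.
		 */
	}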

> as an ABI change but I don't think it matters, it mostly brings
> cacheable I/O mem mappings in line with standard RAM (bar the exec
> permission unless there is a use-case for it).

I would discourage the concept of "cacheable I/O mem mappings".

Cacheable memory located on a NUMA node close to the CPU should have
exactly the same treatment as cacheable memory located on a NUMA node
distant from the CPU.

I think when you say "cacheable I/O memory" it really just means normal
memory that lives on a NUMA node that is located on an IO device.

At the KVM level we don't care about the NUMA locality; we only care
whether it is normal cacheable system memory.

I think there are two issues here.

1) KVM uses pfn_is_map_memory(), which does not cover all our modern
NUMA and memory hotplug cases for normal struct page backed cacheable
memory.

2) KVM doesn't work with normal cacheable memory that does not have
struct pages.

For 1 the test should be 'does the pfn have a struct page, and does the
struct page refer to cacheable memory?'

For 2 the test should be 'does the VMA have pgprot = cacheable,
VM_PFNMAP and not VM_IO (both implied)?'
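
Something like this, as a rough combined sketch (the function name, the
PageReserved() check and the reuse of the patch's mapping_type() helper
are all my assumptions):

	static bool kvm_is_cacheable_memory(kvm_pfn_t pfn,
					    struct vm_area_struct *vma)
	{
		/*
		 * Issue 1: struct page backed memory - trust the struct
		 * page, which also covers memory hotplug and
		 * ZONE_DEVICE/pgmap.
		 */
		if (pfn_valid(pfn))
			return !PageReserved(pfn_to_page(pfn));

		/*
		 * Issue 2: no struct page - trust the VMA: VM_PFNMAP
		 * without VM_IO and with a cacheable pgprot means
		 * normal memory.
		 */
		return (vma->vm_flags & VM_PFNMAP) &&
		       !(vma->vm_flags & VM_IO) &&
		       mapping_type(vma->vm_page_prot) == MT_NORMAL;
	}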

Jason