Re: [HMM v14 05/16] mm/ZONE_DEVICE/unaddressable: add support for un-addressable device memory

From: Jerome Glisse
Date: Thu Dec 08 2016 - 11:40:15 EST


> On 12/08/2016 08:39 AM, Jérôme Glisse wrote:
> > Architectures that wish to support un-addressable device memory should make
> > sure to never populate the kernel linear mapping for the physical range.
>
> Does the platform somehow provide a range of physical addresses for this
> unaddressable area? How do we know no memory will be hot-added in a
> range we're using for unaddressable device memory, for instance?

That's one of the big issues. No, the platform does not reserve any range, so
there is a possibility that some memory gets hotplugged and assigned to this
range.

I pushed the range decision to a higher level (i.e. it is the device driver
that picks one), so right now the device driver using HMM (the NVidia
closed-source driver, as we don't have nouveau ready for that yet) starts
from the highest physical address and scans down until it finds an empty
range big enough.
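
Roughly, the scan looks like this (just a sketch to illustrate; the helper
name is made up and the region_intersects() check is my assumption, not the
actual driver code):

static resource_size_t pick_device_mem_range(resource_size_t size)
{
        resource_size_t addr;

        /* Walk down from the top of the physical address space until
         * the iomem resource tree reports a hole big enough. */
        for (addr = (1ULL << MAX_PHYSMEM_BITS) - size; addr >= size;
             addr -= size) {
                if (region_intersects(addr, size, IORESOURCE_MEM,
                                      IORES_DESC_NONE) == REGION_DISJOINT)
                        return addr;    /* empty range big enough */
        }
        return 0;       /* nothing found */
}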

I don't think I can control or enforce, at the platform level, how specific
physical addresses are chosen for hotplug.

So right now with my patchset what happens is that the hotplug will fail,
because I have already registered a resource for the physical range. What I
could add is a way to migrate the device memory to a different physical
range, but I am a bit afraid of how complex that could be.
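
The reservation itself is just a resource in the iomem tree, so hotplug hits
a conflict when it tries to register a resource for the same range. Something
like this (a sketch; the resource name is illustrative):

        struct resource *res;

        /* Claim the picked range up front; a later add_memory() on
         * the same range then fails with a conflict. */
        res = request_mem_region(addr, size, "hmm unaddressable");
        if (!res)
                return -EBUSY;  /* range already taken, e.g. by RAM */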

The ideal solution would be to increase MAX_PHYSMEM_BITS by one and use
physical addresses that can never be valid. We would not need to increase the
direct mapping size of memory (this memory is not mappable by the CPU), but I
am afraid of the complications this might cause.

I think with the sparse memory model it should be easy enough, and I already
rely on SPARSEMEM for HMM.
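
To make that concrete (assuming x86-64, where SPARSEMEM has
MAX_PHYSMEM_BITS = 46, i.e. a 64TB physical address space):

        /* Hypothetical: with one extra bit, everything at or above
         * this address can never be real RAM, so it can be handed out
         * as un-addressable device memory with no hotplug collision. */
        #define DEVICE_PHYS_BASE        (1ULL << MAX_PHYSMEM_BITS)

i.e. the extra bit gives a second 64TB window, [64TB, 128TB), that no DIMM
can ever occupy.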

In any case I think this is something that can be solved later, if it becomes
a real issue. Maybe I should add a debug printk for when hotplug fails
because of an existing un-addressable ZONE_DEVICE resource.
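
Something along these lines in the hotplug failure path (placement and
wording are just a first guess):

        pr_debug("memory hotplug of [%#llx-%#llx] failed: range reserved for un-addressable device memory\n",
                 (unsigned long long)start,
                 (unsigned long long)(start + size - 1));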

Cheers,
Jérôme