Re: [PATCH v3 19/20] PCI/P2PDMA: introduce pci_mmap_p2pmem()

From: Logan Gunthorpe
Date: Fri Oct 01 2021 - 18:22:54 EST

On 2021-10-01 4:14 p.m., Jason Gunthorpe wrote:
> On Fri, Oct 01, 2021 at 02:13:14PM -0600, Logan Gunthorpe wrote:
>>
>>
>> On 2021-10-01 11:45 a.m., Jason Gunthorpe wrote:
>>>> Before the invalidation, an active flag is cleared to ensure no new
>>>> mappings can be created while the unmap is proceeding.
>>>> unmap_mapping_range() should sequence itself with the TLB flush and
>>>
>>> AFAIK unmap_mapping_range() kicks off the TLB flush and then
>>> returns. It doesn't always wait for the flush to fully finish. I.e.,
>>> some cases use RCU to lock the page table against GUP fast, so the
>>> put_page() doesn't happen until the call_rcu() completes - after a
>>> grace period. unmap_mapping_range() does not wait for grace periods.
>>
>> Admittedly, the TLB flush code isn't the easiest code to understand.
>> But, yes, it seems at least on some arches the pages are freed by
>> call_rcu(). But can't this be fixed easily by adding a synchronize_rcu()
>> call after calling unmap_mapping_range()? Certainly after a
>> synchronize_rcu(), the TLB has been flushed and it is safe to free those
>> pages.
>
> It would close this issue; however, synchronize_rcu() is very slow
> (think >1 second) in some cases and thus cannot be inserted here.

It shouldn't be *that* slow, at least not the vast majority of the
time... it seems a bit unreasonable that a CPU wouldn't schedule for
more than a second. But these aren't fast paths, and synchronize_rcu()
already gets called a couple of times in the unbind path for p2pdma. I'm
sure it would be fine to slow down the vma_close() path as well.
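For concreteness, here's a minimal sketch of the ordering I have in
mind. The helper and the active flag are made up for illustration; the
unmap-then-synchronize ordering is the only point:

#include <linux/mm.h>       /* unmap_mapping_range() */
#include <linux/rcupdate.h> /* synchronize_rcu() */

/* Hypothetical teardown helper -- names are illustrative, not real code. */
static void p2pmem_zap_mappings(struct address_space *mapping, bool *active)
{
	/* Stop any new mappings from being created during teardown. */
	WRITE_ONCE(*active, false);

	/* Zap all existing userspace mappings; this kicks off TLB flushes. */
	unmap_mapping_range(mapping, 0, 0, 1);

	/*
	 * Wait out a grace period so that RCU-deferred page-table frees
	 * (and any concurrent gup-fast walkers) have finished before the
	 * pages are handed back to the pool.
	 */
	synchronize_rcu();
}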

> I'm also not completely sure that RCU is the only case; I don't know
> how every arch handles its gather structure... I have a feeling the
> general intention was for this to be asynchronous.

Yeah, this is not clear to me either.

> My preference is to either remove devmap from gup_fast or fix it to
> not use special pages - the latter being obviously better.

Yeah, I rather expect DAX users want the optimization provided by
gup_fast. I don't think P2PDMA users would be happy about being stuck
with slow gup either.
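For anyone following along, the cost being discussed is the per-page
pagemap lookup in the gup-fast PTE walk, roughly this (simplified from
gup_pte_range() in mm/gup.c, not verbatim):

	if (pte_devmap(pte)) {
		if (unlikely(flags & FOLL_LONGTERM))
			goto pte_unmap;

		/* The per-page pagemap lookup/refcount is the slow part. */
		pgmap = get_dev_pagemap(pte_pfn(pte), pgmap);
		if (unlikely(!pgmap)) {
			undo_dev_pagemap(nr, nr_start, flags, pages);
			goto pte_unmap;
		}
	}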

Logan