Re: [PATCH v4 3/3] mm: Batch-zap large anonymous folio PTE mappings

From: Yu Zhao
Date: Thu Jul 27 2023 - 13:22:52 EST


On Thu, Jul 27, 2023 at 8:18 AM Ryan Roberts <ryan.roberts@xxxxxxx> wrote:
>
> This allows batching the rmap removal with folio_remove_rmap_range(),
> which means we avoid spuriously adding a partially unmapped folio to
> the deferred split queue in the common case, reducing split queue
> lock contention.
>
> Previously each page was removed from the rmap individually with
> page_remove_rmap(). If the first page belonged to a large folio, this
> would cause page_remove_rmap() to conclude that the folio was now
> partially mapped and add the folio to the deferred split queue. But
> subsequent calls would cause the folio to become fully unmapped,
> meaning there is no value in adding it to the split queue.
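>
> For illustration, a minimal sketch of the two paths. The
> folio_remove_rmap_range() signature below is the one proposed
> earlier in this series, so treat it as an assumption rather than a
> settled API:
>
>     /* Before: one rmap update per PTE. The first call on a large
>      * folio sees it partially mapped and queues the folio for a
>      * deferred split, taking the split queue lock. */
>     for (i = 0; i < nr; i++)
>         page_remove_rmap(page + i, vma, false);
>
>     /* After: one batched update covering all nr pages, so the folio
>      * goes straight from fully mapped to fully unmapped and never
>      * looks partially mapped. */
>     folio_remove_rmap_range(folio, page, nr, vma);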
>
> A complicating factor is that for platforms where MMU_GATHER_NO_GATHER
> is enabled (e.g. s390), __tlb_remove_page() drops a reference to the
> page. This means that the folio reference count could drop to zero while
> still in use (i.e. before folio_remove_rmap_range() is called). This
> does not happen on other platforms because the actual page freeing is
> deferred.
>
> Solve this by appropriately getting/putting the folio to guarantee it
> does not get freed early. Given the need to get/put the folio in the
> batch path, we stick to the non-batched path if the folio is not large.
> While the batched path is functionally correct for a folio with 1 page,
> it is unlikely to be as efficient as the existing non-batched path in
> this case.
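>
> A simplified sketch of the guarded batch (the dirty/accessed-bit
> bookkeeping of the real patch is elided, and the batch-full return
> value of __tlb_remove_page() is ignored):
>
>     folio_get(folio);  /* hold the folio across the batch */
>     for (i = 0; i < nr; i++, pte++, addr += PAGE_SIZE) {
>         ptent = ptep_get_and_clear_full(mm, addr, pte, tlb->fullmm);
>         tlb_remove_tlb_entry(tlb, pte, addr);
>         /* on MMU_GATHER_NO_GATHER this may drop a page reference */
>         __tlb_remove_page(tlb, page + i, 0);
>     }
>     folio_remove_rmap_range(folio, page, nr, vma);
>     folio_put(folio);  /* safe to free now; rmap fully removed */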
>
> Signed-off-by: Ryan Roberts <ryan.roberts@xxxxxxx>

This ad hoc patch looks unacceptable to me: we can't afford to keep
adding special cases.

I vote for completely converting zap_pte_range() to use
folio_remove_rmap_range(), covering the tlb_flush_rmap_batch() path
as well and all types of large folios, not just anon. Otherwise I'll
leave it to Matthew and David.
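
A rough sketch of that direction, with count_folio_ptes() as a
hypothetical helper named only for illustration (and the same
proposed folio_remove_rmap_range() as above): find how many
consecutive PTEs map consecutive pages of one folio, zap them as a
unit regardless of folio type, and do the rmap removal once:

    /* Hypothetical: count consecutive present PTEs that map
     * consecutive pages of the same folio, up to max_nr. */
    static inline int count_folio_ptes(pte_t *pte, struct page *page,
                                       int max_nr)
    {
        int nr = 1;

        while (nr < max_nr) {
            pte_t ptent = ptep_get(pte + nr);

            if (!pte_present(ptent) || pte_page(ptent) != page + nr)
                break;
            nr++;
        }
        return nr;
    }

    /* In zap_pte_range(), for any folio, anon or file-backed: */
    nr = count_folio_ptes(pte, page, max_nr);
    /* ... zap nr PTEs and feed the pages to the mmu_gather ... */
    folio_remove_rmap_range(folio, page, nr, vma);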