[PATCH v2 0/4] mm/gup: page unpinning improvements

From: Joao Martins
Date: Thu Feb 04 2021 - 15:26:35 EST


Hey,

This series improves page unpinning, with an eye on improving MR
deregistration for big swaths of memory (which is bound by the page
unpinning), particularly:

1) Decrement the head page refcount by @ntails at once, greatly reducing the
number of atomic operations per compound page. This is done by comparing the
heads of individual tail pages, counting the number of consecutive tails
whose heads match, and updating the head page refcount by that count. This
should visibly improve all page (un)pinners which use compound pages.

2) Introduce a new API for unpinning page ranges (avoiding the comparison
trick from the previous item in favour of plain arithmetic), and use it in
RDMA ib_mem_release (used for MR deregistration).

Performance improvements: unpin_user_pages() for hugetlbfs and THP improves
by ~3x (measured with gup_test), and RDMA MR deregistration improves by
~4.5x with the new API. See patches 2 and 4 for the numbers.

These patches used to be in this RFC:

https://lore.kernel.org/linux-mm/20201208172901.17384-1-joao.m.martins@xxxxxxxxxx/,
"[PATCH RFC 0/9] mm, sparse-vmemmap: Introduce compound pagemaps"

But they were split out at Jason's suggestion, given they are applicable to
page unpinning in general. Thanks for all the comments on the RFC above.

These patches apply on top of linux-next tag next-20210202.

Suggestions, comments, welcomed as usual.

Joao

Changelog:

v1 -> v2:
* Prefix macro arguments with __ to avoid collisions with other defines (John)
* Remove count_tails() and have the logic for the two iterators split into
range_next() and compound_next() (John)
* Remove the @range boolean from the iterator helpers (John)
* Add docs on unpin_user_page_range_dirty_lock() on patch 3 (John)
* Use unsigned for @i on patch 4 (John)
* Fix subject line of patch 4 (John)
* Add John's Reviewed-by on the second patch
* Fix incorrect use of @nmap and use @sg_nents instead (Jason)

RFC -> v1:
* Introduce a head/ntails iterator and change unpin_*_pages() to use that,
inspired by folio iterators (Jason)
* Introduce an alternative unpin_user_page_range_dirty_lock() to unpin based
on a consecutive page range without having to walk page arrays (Jason)
* Use unsigned for number of tails (Jason)

Joao Martins (4):
mm/gup: add compound page list iterator
mm/gup: decrement head page once for group of subpages
mm/gup: add a range variant of unpin_user_pages_dirty_lock()
RDMA/umem: batch page unpin in __ib_umem_release()

drivers/infiniband/core/umem.c | 12 ++--
include/linux/mm.h | 2 +
mm/gup.c | 122 ++++++++++++++++++++++++++++-----
3 files changed, 112 insertions(+), 24 deletions(-)

--
2.17.1