Re: [PATCH 00/75] MM folio patches for 5.18

From: John Hubbard
Date: Sun Feb 13 2022 - 17:32:03 EST


On 2/4/22 11:57, Matthew Wilcox (Oracle) wrote:
> Whole series available through git, and shortly in linux-next:
> https://git.infradead.org/users/willy/pagecache.git/shortlog/refs/heads/for-next
> or git://git.infradead.org/users/willy/pagecache.git for-next

Hi Matthew,

I'm having trouble finding this series in linux-next, or in mmotm
either. Has the plan changed, or maybe I'm just Doing It Wrong? :)

Background as to why (you can skip this part unless you're wondering):

Locally, I've based a small but critical patch on top of this series. It
introduces a new routine:

void pin_user_page(struct page *page);

...which is a prerequisite for converting Direct IO over to use
FOLL_PIN.
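
To make the intended usage concrete, here's roughly how I expect the
Direct IO paths to use this (a sketch only; the bio and iterator
details here are illustrative, not from any posted patch):

	/*
	 * Submission: a page that arrives already-referenced (for
	 * example, from a bvec iterator) takes one extra FOLL_PIN
	 * reference, so that all pages in the bio look the same:
	 */
	pin_user_page(page);

	/* Completion: every page is then released the same way: */
	unpin_user_page(page);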

Given that, I'm on the fence about whether to request that the first
part of my conversion patchset go into 5.18, or 5.19. Ideally, I'd like
to keep it based on your series, because otherwise pin_user_page() ends
up with a couple of warts that have to be fixed up later. On the other
hand, it would be nice to get the prerequisites in place, because many
filesystems need small changes.

Here's the diff for "mm/gup: introduce pin_user_page()", for reference:

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 73b7e4bd250b..c2bb8099a56b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1963,6 +1963,7 @@ long get_user_pages(unsigned long start, unsigned long nr_pages,
 long pin_user_pages(unsigned long start, unsigned long nr_pages,
		    unsigned int gup_flags, struct page **pages,
		    struct vm_area_struct **vmas);
+void pin_user_page(struct page *page);
 long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
			     struct page **pages, unsigned int gup_flags);
 long pin_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
diff --git a/mm/gup.c b/mm/gup.c
index 7150ea002002..7d57c3452192 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -3014,6 +3014,39 @@ long pin_user_pages(unsigned long start, unsigned long nr_pages,
 }
 EXPORT_SYMBOL(pin_user_pages);

+/**
+ * pin_user_page() - apply a FOLL_PIN reference to a page
+ *
+ * @page: the page to be pinned.
+ *
+ * Similar to pin_user_pages(), in that the page's refcount is elevated using
+ * FOLL_PIN rules.
+ *
+ * IMPORTANT: That means that the caller must release the page via
+ * unpin_user_page().
+ */
+void pin_user_page(struct page *page)
+{
+	struct folio *folio = page_folio(page);
+
+	WARN_ON_ONCE(folio_ref_count(folio) <= 0);
+
+	/*
+	 * Similar to try_grab_page(): be sure to *also*
+	 * increment the normal page refcount field at least once,
+	 * so that the page really is pinned.
+	 */
+	if (folio_test_large(folio)) {
+		folio_ref_add(folio, 1);
+		atomic_add(1, folio_pincount_ptr(folio));
+	} else {
+		folio_ref_add(folio, GUP_PIN_COUNTING_BIAS);
+	}
+
+	node_stat_mod_folio(folio, NR_FOLL_PIN_ACQUIRED, 1);
+}
+EXPORT_SYMBOL(pin_user_page);
+
 /*
  * pin_user_pages_unlocked() is the FOLL_PIN variant of
  * get_user_pages_unlocked(). Behavior is the same, except that this one sets

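For completeness, the release side mirrors the two cases above. This is
just my reading of how the existing unpin path (gup_put_folio() in your
series) already handles it; a sketch only, not part of this patch:

	struct folio *folio = page_folio(page);

	/* Inverse of pin_user_page(), mirroring its two cases: */
	node_stat_mod_folio(folio, NR_FOLL_PIN_RELEASED, 1);
	if (folio_test_large(folio)) {
		atomic_sub(1, folio_pincount_ptr(folio));
		folio_put_refs(folio, 1);
	} else {
		folio_put_refs(folio, GUP_PIN_COUNTING_BIAS);
	}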

thanks,
--
John Hubbard
NVIDIA