[PATCH 16/46] hugetlbfs: do a full walk to check if vma maps a page

From: James Houghton
Date: Thu Jan 05 2023 - 05:21:04 EST


Because it is safe to do so, do a full high-granularity page table
walk to check whether the page is mapped.

If it were not safe to do so, we could instead bail out early upon
finding a high-granularity mapped PTE and report that the page could
be mapped.
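
As an aside, a minimal sketch of that early-bail-out variant might look
like the following (illustrative only, not part of this patch; the
function name is made up, and it reuses the hugetlb_full_walk(),
struct hugetlb_pte and hugetlb_pte_present_leaf() interfaces from
earlier in this series):

static bool hugetlb_vma_may_map_page(struct vm_area_struct *vma,
				     unsigned long addr, struct page *page)
{
	pte_t pte;
	struct hugetlb_pte hpte;

	/* The walk found no PTE for this address: not mapped here. */
	if (hugetlb_full_walk(&hpte, vma, addr))
		return false;

	pte = huge_ptep_get(hpte.ptep);
	if (huge_pte_none(pte) || !pte_present(pte))
		return false;

	/*
	 * Hypothetical conservative behavior: the walk ended on a PTE
	 * that is not a present leaf (e.g. it was split under us), so
	 * report that the page could be mapped.
	 */
	if (!hugetlb_pte_present_leaf(&hpte, pte))
		return true;

	return compound_head(pte_page(pte)) == page;
}

The change below instead returns false in that case, per the reasoning
in the comment it adds.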

Signed-off-by: James Houghton <jthoughton@xxxxxxxxxx>
---
fs/hugetlbfs/inode.c | 17 ++++++++++++-----
1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 48f1a8ad2243..d34ce79da595 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -386,17 +386,24 @@ static void hugetlb_delete_from_page_cache(struct folio *folio)
 static bool hugetlb_vma_maps_page(struct vm_area_struct *vma,
 				unsigned long addr, struct page *page)
 {
-	pte_t *ptep, pte;
+	pte_t pte;
+	struct hugetlb_pte hpte;
 
-	ptep = hugetlb_walk(vma, addr, huge_page_size(hstate_vma(vma)));
-	if (!ptep)
+	if (hugetlb_full_walk(&hpte, vma, addr))
 		return false;
 
-	pte = huge_ptep_get(ptep);
+	pte = huge_ptep_get(hpte.ptep);
 	if (huge_pte_none(pte) || !pte_present(pte))
 		return false;
 
-	if (pte_page(pte) == page)
+	if (unlikely(!hugetlb_pte_present_leaf(&hpte, pte)))
+		/*
+		 * We raced with someone splitting us, and the only case
+		 * where this is impossible is when the pte was none.
+		 */
+		return false;
+
+	if (compound_head(pte_page(pte)) == page)
 		return true;
 
 	return false;
--
2.39.0.314.g84b9a713c41-goog