Re: [PATCH 6.3.y] mm/hugetlb: revert use of page_cache_next_miss()

From: Sidhartha Kumar
Date: Tue Jun 06 2023 - 14:13:29 EST


On 6/6/23 10:38 AM, Greg KH wrote:
On Tue, Jun 06, 2023 at 10:20:22AM -0700, Sidhartha Kumar wrote:
As reported by Ackerley[1], the use of page_cache_next_miss() in
hugetlbfs_fallocate() introduces a bug where a second fallocate() call to
the same offset fails with -EEXIST. Revert this change and go back to the
previous method of getting the folio from the page cache and dropping the
reference on success.
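
For reference, the reverted check in hugetlbfs_fallocate()'s per-index loop
looks roughly like the sketch below (not the verbatim 6.3 hunk; it assumes
the 6.3 filemap_get_folio() that returns NULL on a cache miss):

	/* See if the folio is already in the page cache to avoid alloc/free */
	folio = filemap_get_folio(mapping, index);
	if (folio) {
		/* already present: drop the lookup reference and skip this index */
		folio_put(folio);
		continue;
	}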

hugetlbfs_pagecache_present() was also refactored to use
page_cache_next_miss(); revert the usage there as well.
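
After the revert, hugetlbfs_pagecache_present() is essentially a
lookup-and-put helper, roughly as sketched below (again only a sketch,
assuming the 6.3 filemap_get_folio() that returns NULL on a miss):

	static bool hugetlbfs_pagecache_present(struct hstate *h,
			struct vm_area_struct *vma, unsigned long address)
	{
		struct address_space *mapping = vma->vm_file->f_mapping;
		pgoff_t idx = vma_hugecache_offset(h, vma, address);
		struct folio *folio;

		folio = filemap_get_folio(mapping, idx);
		if (!folio)
			return false;
		folio_put(folio);	/* drop the reference taken by the lookup */
		return true;
	}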

User-visible impacts include hugetlb fallocate incorrectly returning
EEXIST if pages are already present in the file. In addition, hugetlb
pages will not be included in core dumps if they need to be brought in via
GUP. userfaultfd UFFDIO_COPY also uses this code and will not notice pages
already present in the cache. It may try to allocate a new page and
potentially return ENOMEM instead of EEXIST.
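
The first impact can be demonstrated from userspace with something like the
sketch below (hypothetical hugetlbfs path and an assumed 2 MB huge page size;
on an affected 6.3 kernel the second call fails with EEXIST, while on a fixed
kernel both calls succeed):

	#define _GNU_SOURCE
	#include <errno.h>
	#include <fcntl.h>
	#include <stdio.h>

	#define LEN	(2UL * 1024 * 1024)	/* assumed 2 MB huge page size */

	int main(void)
	{
		/* hypothetical hugetlbfs mount point */
		int fd = open("/dev/hugepages/repro", O_CREAT | O_RDWR, 0600);

		if (fd < 0) {
			perror("open");
			return 1;
		}

		if (fallocate(fd, 0, 0, LEN))
			perror("first fallocate");

		/* a second call to the same offset should be a no-op and succeed */
		if (fallocate(fd, 0, 0, LEN))
			perror("second fallocate");	/* EEXIST on affected kernels */

		return 0;
	}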

Fixes: d0ce0e47b323 ("mm/hugetlb: convert hugetlb fault paths to use alloc_hugetlb_folio()")
Cc: <stable@xxxxxxxxxxxxxxx> #v6.3
Reported-by: Ackerley Tng <ackerleytng@xxxxxxxxxx>
Signed-off-by: Sidhartha Kumar <sidhartha.kumar@xxxxxxxxxx>
Reviewed-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>

[1] https://lore.kernel.org/linux-mm/cover.1683069252.git.ackerleytng@xxxxxxxxxx/
---

This revert is the safest way to fix 6.3. The upstream fix will either
fix page_cache_next_miss() itself or take Ackerley's patch introducing a
new function to check whether a page is present in the page cache. Both
directions are currently under review, so we can use this safe and simple
fix for 6.3.

Is there any specific reason why we don't just wait for the fix for
Linus's tree before applying this one, or apply the real fix instead?

I missed Andrew's message stating he would prefer the real fix[1].

Sorry for the noise,
Sidhartha Kumar

[1] https://lore.kernel.org/lkml/20230603022209.GA114055@monkey/T/#mea6c8a015dbea5f9c2be88b9791996f4be6c2de8

thanks,

greg k-h