Re: [PATCH linux-next] Fix shmem huge page failed to set F_SEAL_WRITE attribute problem

From: Matthew Wilcox
Date: Thu Feb 17 2022 - 08:44:50 EST


On Wed, Feb 16, 2022 at 05:25:17PM -0800, Hugh Dickins wrote:
> On Wed, 16 Feb 2022, Mike Kravetz wrote:
> > On 2/14/22 23:37, cgel.zte@xxxxxxxxx wrote:
> > > From: wangyong <wang.yong12@xxxxxxxxxx>
> > >
> > > After enabling transparent hugepage support for tmpfs with the
> > > following command:
> > > echo always > /sys/kernel/mm/transparent_hugepage/shmem_enabled
> > > the docker program gets EBUSY when it adds F_SEAL_WRITE with the
> > > following call:
> > > fcntl(5, F_ADD_SEALS, F_SEAL_WRITE) = -1.
> > >
> > > It is found that in memfd_wait_for_pins(), the huge page has a
> > > page_count of 512 and a page_mapcount of 0, so it trips the check:
> > > page_count(page) - page_mapcount(page) != 1.
> > > But the page is not actually busy at this time; therefore the
> > > order of the huge page should be taken into account in the
> > > calculation.
> > >
> > > Reported-by: Zeal Robot <zealci@xxxxxxxxxx>
> > > Signed-off-by: wangyong <wang.yong12@xxxxxxxxxx>
> > > ---
> > > mm/memfd.c | 16 +++++++++++++---
> > > 1 file changed, 13 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/mm/memfd.c b/mm/memfd.c
> > > index 9f80f162791a..26d1d390a22a 100644
> > > --- a/mm/memfd.c
> > > +++ b/mm/memfd.c
> > > @@ -31,6 +31,7 @@
> > >  static void memfd_tag_pins(struct xa_state *xas)
> > >  {
> > >  	struct page *page;
> > > +	int count = 0;
> > >  	unsigned int tagged = 0;
> > > 
> > >  	lru_add_drain();
> > > @@ -39,8 +40,12 @@ static void memfd_tag_pins(struct xa_state *xas)
> > >  	xas_for_each(xas, page, ULONG_MAX) {
> > >  		if (xa_is_value(page))
> > >  			continue;
> > > +
> > >  		page = find_subpage(page, xas->xa_index);
> > > -		if (page_count(page) - page_mapcount(page) > 1)
> > > +		count = page_count(page);
> > > +		if (PageTransCompound(page))
> >
> > PageTransCompound() is true for hugetlb pages as well as THP. And, hugetlb
> > pages will not have a ref per subpage as THP does. So, I believe this will
> > break hugetlb seal usage.
>
> Yes, I think so too; and that is not the only issue with the patch
> (I don't think page_mapcount is enough, I had to use total_mapcount).
>
> It's a good find, and thank you WangYong for the report.
> I found the same issue when testing my MFD_HUGEPAGE patch last year,
> and devised a patch to fix it (and keep MFD_HUGETLB working) then; but
> never sent that in because there wasn't time to re-present MFD_HUGEPAGE.
>
> I'm currently retesting my patch: just found something failing which
> I thought should pass; but maybe I'm confused, or maybe the xarray is
> working differently now. I'm rushing to reply now because I don't want
> others to waste their own time on it.
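
For anyone following along, the accounting Mike and Hugh describe
amounts to something like the sketch below; this is illustrative only,
not Hugh's patch, and the helper name is made up.  A shmem THP holds
one page cache reference per subpage, a hugetlb page does not, and the
mappings of all subpages have to be counted with total_mapcount():

static bool memfd_page_looks_pinned(struct page *page)
{
	struct page *head = compound_head(page);
	/* A THP holds HPAGE_PMD_NR page cache refs; hugetlb holds one. */
	int cache_refs = (PageTransHuge(head) && !PageHuge(head)) ?
				HPAGE_PMD_NR : 1;

	/* Any references beyond the cache's and the mappings' are pins. */
	return page_count(head) - total_mapcount(head) != cache_refs;
}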

I did change how the XArray works for THP recently.

Kirill's original patch stored:

512: p
513: p+1
514: p+2
...
1023: p+511

A couple of years ago, I changed it to store:

512: p
513: p
514: p
...
1023: p

And in January, Linus merged the commit which changes it to:

512-575: p
576-639: (sibling of 512)
640-703: (sibling of 512)
...
960-1023: (sibling of 512)

That is, I removed a level of the tree and now store sibling entries
rather than duplicate entries. That wasn't for fun; I needed to do
that in order to make msync() work with large folios. Commit
6b24ca4a1a8d has more detail and hopefully can inspire whatever
changes you need to make to your patch.
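
To make that concrete, here is a rough sketch in the style of the
xarray test suite (nothing from memfd.c; the xarray and function names
are made up) of what a caller sees with the new layout:

#include <linux/xarray.h>

static DEFINE_XARRAY(demo_xa);

static void demo_multi_index(void *p)
{
	void *entry;
	unsigned long index;

	/* Store one entry covering indices 512-1023 (order 9). */
	xa_store_range(&demo_xa, 512, 1023, p, GFP_KERNEL);

	WARN_ON(xa_load(&demo_xa, 512) != p);
	WARN_ON(xa_load(&demo_xa, 700) != p);	/* same entry, not p + 188 */
	WARN_ON(xa_get_order(&demo_xa, 900) != 9);

	/* The loop body runs once, with index == 512. */
	xa_for_each(&demo_xa, index, entry)
		pr_info("entry %p at index %lu\n", entry, index);
}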