Re: [PATCH] mm/folio: Avoid special handling for order value 0 in folio_set_order

From: Tarun Sahu
Date: Tue Apr 18 2023 - 05:25:31 EST


Tarun Sahu <tsahu@xxxxxxxxxxxxx> writes:

> Hi Sidhartha,
>
> Thanks for your input; please find my comments inline.
>
>> On 4/14/23 12:48 PM, Tarun Sahu wrote:
>>> folio_set_order(folio, 0) is an abuse of folio_set_order, as a 0-order
>>> folio does not have any tail page on which to set the order.
>>> folio->_folio_nr_pages is set to 0 for order 0 in folio_set_order. This
>>> was required because
>>
>> In the previous discussion of this function, Mike suggested using
>> folio_set_order() only for non-zero orders and adding a
>> folio_clear_order() to set the order to 0. This could be done to
>> reduce confusion.
>>
> Yes, I agree. I replied to Matthew's reply in this thread; let me know
> your thoughts on that. In this patch, I proposed that there would be no
> need for folio_clear_order if folio_set_order(folio, 0) is avoided,
> which requires only minor changes in the code path.
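
For reference, a minimal sketch of that alternative (folio_clear_order()
is hypothetical and shown only to make the comparison concrete; it
mirrors the existing helper, with the order-0 branch split out):

static inline void folio_clear_order(struct folio *folio)
{
	if (WARN_ON_ONCE(!folio_test_large(folio)))
		return;

	folio->_folio_order = 0;
#ifdef CONFIG_64BIT
	/* clear the count that overlaps page->mapping in the tail page */
	folio->_folio_nr_pages = 0;
#endif
}
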
>
>>> _folio_nr_pages overlapped with page->mapping, and leaving it non-zero
>>> caused a "bad page" error while freeing gigantic hugepages. This was
>>> fixed in commit ba9c1201beaa ("mm/hugetlb: clear compound_nr before
>>> freeing gigantic pages"). Also, commit a01f43901cfb ("hugetlb: be sure
>>> to free demoted CMA pages to CMA") now explicitly clears page->mapping,
>>> so we won't see the bad page error even if _folio_nr_pages remains
>>> unset. Moreover, order-0 folios are not supposed to call
>>> folio_set_order, so we can now remove folio_set_order(folio, 0) from
>>> the hugetlb code path to clear up the confusion.
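
(An aside to make the overlap concrete: the layout below is a simplified
sketch, not the real struct page definition. In a tail page, the slot
holding the compound page count aliases fields such as page->mapping, so
a stale non-zero count looks like a set mapping when the page is later
freed individually, which is what triggered the "bad page" report.)

	union tail_page_slot {
		struct address_space *mapping;	/* must be NULL at free time */
		unsigned int nr_pages;		/* compound metadata alias */
	};
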
>>>
>>> The patch also moves the __folio_set_head and folio_set_order calls in
>>> __prep_compound_gigantic_folio() so that we avoid having to clear them
>>> in the error path.
>>>
>>> Testing: I have run the LTP tests, which all pass. I have also written
>>> an LTP test that exercises the bug caused by the overlap of compound_nr
>>> and page->mapping.
>>>
>>> https://lore.kernel.org/all/20230413090753.883953-1-tsahu@xxxxxxxxxxxxx/
>>>
>>> On older kernels (< 5.10-rc7) that still have the above bug, this test
>>> fails; on newer kernels, and also with this patch applied, it passes.
>>>
>>> Signed-off-by: Tarun Sahu <tsahu@xxxxxxxxxxxxx>
>>> ---
>>> mm/hugetlb.c  | 9 +++------
>>> mm/internal.h | 8 ++------
>>> 2 files changed, 5 insertions(+), 12 deletions(-)
>>>
>>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>>> index f16b25b1a6b9..e2540269c1dc 100644
>>> --- a/mm/hugetlb.c
>>> +++ b/mm/hugetlb.c
>>> @@ -1489,7 +1489,6 @@ static void __destroy_compound_gigantic_folio(struct folio *folio,
>>>  			set_page_refcounted(p);
>>>  	}
>>>
>>> -	folio_set_order(folio, 0);
>>>  	__folio_clear_head(folio);
>>>  }
>>>
>>> @@ -1951,9 +1950,6 @@ static bool __prep_compound_gigantic_folio(struct folio *folio,
>>>  	struct page *p;
>>>
>>>  	__folio_clear_reserved(folio);
>>> -	__folio_set_head(folio);
>>> -	/* we rely on prep_new_hugetlb_folio to set the destructor */
>>> -	folio_set_order(folio, order);
>>>  	for (i = 0; i < nr_pages; i++) {
>>>  		p = folio_page(folio, i);
>>>
>>> @@ -1999,6 +1995,9 @@ static bool __prep_compound_gigantic_folio(struct folio *folio,
>>>  		if (i != 0)
>>>  			set_compound_head(p, &folio->page);
>>>  	}
>>
>> Calling set_compound_head() for the tail pages before the folio has the
>> head flag set could seem misleading. At this point the order is not set
>> either, so it is not clear that the folio is a compound-page folio.
>>
> Yeah, I agree, but they are part of the same call. I can avoid moving
> __folio_set_head. And I think it won't mislead if I avoid moving
> __folio_set_head. The function below has a similar path.

Apologies, I mixed up the sentences here. What I meant to say is: it
won't mislead if we avoid moving __folio_set_head and move only
folio_set_order. The function below does the same; see also the sketch
after the quoted code.
>
> void prep_compound_page(struct page *page, unsigned int order)
> {
> 	int i;
> 	int nr_pages = 1 << order;
>
> 	__SetPageHead(page);
> 	for (i = 1; i < nr_pages; i++)
> 		prep_compound_tail(page, i);
>
> 	prep_compound_head(page, order);
> }
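
And here is a sketch of the ordering I am proposing for
__prep_compound_gigantic_folio() (abridged; the per-page checks and the
out_error path are elided):

	__folio_clear_reserved(folio);
	__folio_set_head(folio);	/* kept before the tail loop */
	for (i = 0; i < nr_pages; i++) {
		p = folio_page(folio, i);
		/* ... per-page checks may bail out to out_error ... */
		if (i != 0)
			set_compound_head(p, &folio->page);
	}
	/* we rely on prep_new_hugetlb_folio to set the destructor */
	folio_set_order(folio, order);	/* moved after the loop */
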
>
> Let me know your thoughts.
>
>
> ~Tarun
>
>>> +	__folio_set_head(folio);
>>> +	/* we rely on prep_new_hugetlb_folio to set the destructor */
>>> +	folio_set_order(folio, order);
>>>  	atomic_set(&folio->_entire_mapcount, -1);
>>>  	atomic_set(&folio->_nr_pages_mapped, 0);
>>>  	atomic_set(&folio->_pincount, 0);
>>> @@ -2017,8 +2016,6 @@ static bool __prep_compound_gigantic_folio(struct folio *folio,
>>>  		p = folio_page(folio, j);
>>>  		__ClearPageReserved(p);
>>>  	}
>>> -	folio_set_order(folio, 0);
>>> -	__folio_clear_head(folio);
>>>  	return false;
>>>  }
>>>
>>> diff --git a/mm/internal.h b/mm/internal.h
>>> index 18cda26b8a92..0d96a3bc1d58 100644
>>> --- a/mm/internal.h
>>> +++ b/mm/internal.h
>>> @@ -425,16 +425,12 @@ int split_free_page(struct page *free_page,
>>>   */
>>>  static inline void folio_set_order(struct folio *folio, unsigned int order)
>>>  {
>>> -	if (WARN_ON_ONCE(!folio_test_large(folio)))
>>> +	if (WARN_ON_ONCE(!order || !folio_test_large(folio)))
>>>  		return;
>>>
>>>  	folio->_folio_order = order;
>>>  #ifdef CONFIG_64BIT
>>> -	/*
>>> -	 * When hugetlb dissolves a folio, we need to clear the tail
>>> -	 * page, rather than setting nr_pages to 1.
>>> -	 */
>>> -	folio->_folio_nr_pages = order ? 1U << order : 0;
>>> +	folio->_folio_nr_pages = 1U << order;
>>>  #endif
>>>  }
>>>
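
To summarize the resulting contract (the two fragments below are
illustrative, not lines from the patch): folio_set_order() is now valid
only on a large folio with a non-zero order, and teardown paths simply
clear the head flag instead of calling folio_set_order(folio, 0).

	/* prep side: order > 0, large folio */
	__folio_set_head(folio);
	folio_set_order(folio, order);

	/* teardown side: no folio_set_order(folio, 0) */
	__folio_clear_head(folio);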