Re: [PATCH 4/7] mm/khugepaged: minor cleanup for collapse_file

From: Miaohe Lin
Date: Thu Jun 16 2022 - 02:11:01 EST


On 2022/6/16 2:18, Yang Shi wrote:
> On Wed, Jun 15, 2022 at 8:55 AM Zach O'Keefe <zokeefe@xxxxxxxxxx> wrote:
>>
>> On 11 Jun 16:47, Miaohe Lin wrote:
>>> nr_none is always 0 in the non-shmem case because missing pages can be
>>> read in from the backing store, so there are no holes to fill. Thus,
>>> when nr_none != 0, it must be the is_shmem case. Also, only adjust
>>> nrpages and uncharge shmem when nr_none != 0 to save CPU cycles.
>>>
>>> Signed-off-by: Miaohe Lin <linmiaohe@xxxxxxxxxx>
>>> ---
>>> mm/khugepaged.c | 9 ++++-----
>>> 1 file changed, 4 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>>> index 1b5dd3820eac..8e6fad7c7bd9 100644
>>> --- a/mm/khugepaged.c
>>> +++ b/mm/khugepaged.c
>>> @@ -1885,8 +1885,7 @@ static void collapse_file(struct mm_struct *mm,
>>>
>>>  	if (nr_none) {
>>>  		__mod_lruvec_page_state(new_page, NR_FILE_PAGES, nr_none);
>>> -		if (is_shmem)
>>> -			__mod_lruvec_page_state(new_page, NR_SHMEM, nr_none);
>>> +		__mod_lruvec_page_state(new_page, NR_SHMEM, nr_none);
>>>  	}
>>
>>
>> Might be worth a small comment here - even though folks can see in the
>> above code that this is only incremented in the shmem path, it might be
>> nice to say why it's always 0 for non-shmem (or conversely, why it can
>> only be non-zero for shmem).
>
> Agreed, better to have some comments in the code.

Will try to add a comment in the next version. Thanks both!
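
Something along these lines, perhaps (just a rough sketch based on the
reasoning in the commit log; the exact wording is to be settled in the
next version):

	if (nr_none) {
		/*
		 * nr_none can only be non-zero for shmem: for regular
		 * files, missing pages are read in from the backing
		 * store beforehand, so no holes are filled here.
		 */
		__mod_lruvec_page_state(new_page, NR_FILE_PAGES, nr_none);
		__mod_lruvec_page_state(new_page, NR_SHMEM, nr_none);
	}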

>
>>
>>>
>>>  	/* Join all the small entries into a single multi-index entry */
>>> @@ -1950,10 +1949,10 @@ static void collapse_file(struct mm_struct *mm,
>>>
>>>  	/* Something went wrong: roll back page cache changes */
>>>  	xas_lock_irq(&xas);
>>> -	mapping->nrpages -= nr_none;
>>> -
>>> -	if (is_shmem)
>>> +	if (nr_none) {
>>> +		mapping->nrpages -= nr_none;
>>>  		shmem_uncharge(mapping->host, nr_none);
>>> +	}
>>>
>>>  	xas_set(&xas, start);
>>>  	xas_for_each(&xas, page, end - 1) {
>>> --
>>> 2.23.0
>>>
>>>
>>
>> Otherwise,
>>
>> Reviewed-by: Zach O'Keefe <zokeefe@xxxxxxxxxx>
>>