Re: [Patch v2] mm/rmap.c: split huge pmd when it really is

From: Wei Yang
Date: Fri Jan 03 2020 - 08:05:59 EST


On Fri, Jan 03, 2020 at 03:18:46PM +0800, Wei Yang wrote:
>On Tue, Dec 24, 2019 at 06:28:56AM +0800, Wei Yang wrote:
>>When page is not NULL, the function is called by try_to_unmap_one() with
>>TTU_SPLIT_HUGE_PMD set. There are two callers of try_to_unmap_one() with
>>TTU_SPLIT_HUGE_PMD set:
>>
>> * unmap_page()
>> * shrink_page_list()
>>
>>In both cases, the page passed to try_to_unmap_one() is the PageHead() of
>>the THP. If the address at which this page is mapped in the process is not
>>HPAGE_PMD_SIZE aligned, the THP is not mapped with a PMD in this process.
>>This could happen when we mremap() a PMD-sized range to an unaligned
>>address.
>>
>>Currently, this case happens to be handled by the following check in
>>__split_huge_pmd():
>>
>> page != pmd_page(*pmd)
>>
>>This patch checks the address up front so that this work can be skipped.
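
To illustrate the scenario above, here is a rough userspace sketch (not part
of the patch; error handling is omitted, and the 2MB PMD size, the madvise()
step and the fixed target address are only for illustration, assuming
anonymous THP is enabled):

#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>

#define PMD_SIZE	(2UL << 20)	/* assumes 2MB huge pages */

int main(void)
{
	/*
	 * Reserve enough space to carve out a PMD_SIZE aligned chunk
	 * that can be backed by a THP and mapped with a huge pmd.
	 */
	char *buf = mmap(NULL, 2 * PMD_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	char *huge = (char *)(((unsigned long)buf + PMD_SIZE - 1) &
			      ~(PMD_SIZE - 1));

	madvise(huge, PMD_SIZE, MADV_HUGEPAGE);
	memset(huge, 1, PMD_SIZE);	/* fault in, hopefully as a THP */

	/*
	 * Move the range to an address that is page aligned but not
	 * PMD_SIZE aligned: the THP stays a compound page but is now
	 * mapped with ptes in this process.
	 */
	mremap(huge, PMD_SIZE, PMD_SIZE, MREMAP_MAYMOVE | MREMAP_FIXED,
	       (void *)(0x600000000000UL + 0x1000));

	return 0;
}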
>
>I am sorry I forgot to address Kirill's comments on the first version.
>
>The first one is the performance difference this change makes for a
>PTE-mapped THP.
>
>Here is the result (in cycles):
>
>            Before    Patched
>
>               963        195
>               988         40
>               895         78
>
>Average        948        104
>
>So the change reduces the time spent in split_huge_pmd_address() by
>about 90%.
>
>For the 2nd comment, the vma check, let me take a further look and analyze
>it.
>
>Thanks for Kirill's suggestion.
>

For the 2nd comment, checking whether the vma could hold a huge page:

You mean doing this check?

vma->vm_start <= address && vma->vm_end >= address + HPAGE_PMD_SIZE

This happens after munmap() of part of the THP range? If so, we can skip
the pmd split for this case as well.
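
On top of the patch below, I guess the combined check would look something
like this (just a rough sketch, not tested):

	if (page && (!IS_ALIGNED(address, HPAGE_PMD_SIZE) ||
		     vma->vm_start > address ||
		     vma->vm_end < address + HPAGE_PMD_SIZE))
		return;

That is, bail out as well when the vma cannot cover a full PMD-sized range
starting at this address.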

>>
>>Signed-off-by: Wei Yang <richardw.yang@xxxxxxxxxxxxxxx>
>>
>>---
>>v2: move the check into split_huge_pmd_address().
>>---
>> mm/huge_memory.c | 16 ++++++++++++++++
>> 1 file changed, 16 insertions(+)
>>
>>diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>index 893fecd5daa4..2b9c2f412b32 100644
>>--- a/mm/huge_memory.c
>>+++ b/mm/huge_memory.c
>>@@ -2342,6 +2342,22 @@ void split_huge_pmd_address(struct vm_area_struct *vma, unsigned long address,
>> pud_t *pud;
>> pmd_t *pmd;
>>
>>+ /*
>>+ * When page is not NULL, the function is called by try_to_unmap_one()
>>+ * with TTU_SPLIT_HUGE_PMD set. There are two places that set
>>+ * TTU_SPLIT_HUGE_PMD:
>>+ *
>>+ * unmap_page()
>>+ * shrink_page_list()
>>+ *
>>+ * In both cases, the "page" here is the PageHead() of a THP.
>>+ *
>>+ * If the page is not a PMD-mapped huge page here, e.g. after mremap()
>>+ * to an unaligned address, it is not necessary to split it.
>>+ */
>>+ if (page && !IS_ALIGNED(address, HPAGE_PMD_SIZE))
>>+ return;
>>+
>> pgd = pgd_offset(vma->vm_mm, address);
>> if (!pgd_present(*pgd))
>> return;
>>--
>>2.17.1

--
Wei Yang
Help you, Help me