Re: [PATCH 5/7] khugepaged: Allow to collapse PTE-mapped compound pages

From: Kirill A. Shutemov
Date: Fri Mar 27 2020 - 20:40:34 EST


On Fri, Mar 27, 2020 at 01:45:55PM -0700, Yang Shi wrote:
> On Fri, Mar 27, 2020 at 10:06 AM Kirill A. Shutemov
> <kirill@xxxxxxxxxxxxx> wrote:
> >
> > We can collapse PTE-mapped compound pages. We only need to avoid
> > handling them more than once: lock/unlock the page only once if it's
> > present in the PMD range multiple times, since it is handled at the
> > compound level. The same goes for LRU isolation and putback.
> >
> > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
> > ---
> > mm/khugepaged.c | 41 +++++++++++++++++++++++++++++++----------
> > 1 file changed, 31 insertions(+), 10 deletions(-)
> >
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index b47edfe57f7b..c8c2c463095c 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -515,6 +515,17 @@ void __khugepaged_exit(struct mm_struct *mm)
> >
> >  static void release_pte_page(struct page *page)
> >  {
> > +        /*
> > +         * We need to unlock and put compound page on LRU only once.
> > +         * The rest of the pages have to be locked and not on LRU here.
> > +         */
> > +        VM_BUG_ON_PAGE(!PageCompound(page) &&
> > +                        (!PageLocked(page) && PageLRU(page)), page);
> > +
> > +        if (!PageLocked(page))
> > +                return;
> > +
> > +        page = compound_head(page);
> >          dec_node_page_state(page, NR_ISOLATED_ANON + page_is_file_cache(page));
> >          unlock_page(page);
> >          putback_lru_page(page);
>
> BTW, wouldn't this unlock the whole THP and put it back to LRU?

It is the intention.
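
To make the "only once" behaviour concrete: the compound head is unlocked
and put back on the LRU the first time release_pte_page() sees it, and any
later call for a subpage of the same compound page returns early on the
!PageLocked() check. Below is a toy user-space model of that flow (my own
illustration with made-up names such as toy_page and toy_release_pte_page,
not the kernel code):

/* Toy model: one compound page mapped by every PTE slot in the range. */
#include <stdbool.h>
#include <stdio.h>

#define NR_PTES 8                       /* stand-in for HPAGE_PMD_NR */

struct toy_page {
        struct toy_page *head;          /* compound head; self if head */
        bool locked;                    /* stand-in for PageLocked() */
        bool on_lru;                    /* stand-in for PageLRU() */
};

static void toy_release_pte_page(struct toy_page *page)
{
        page = page->head;              /* page = compound_head(page) */
        if (!page->locked)              /* already released once */
                return;
        page->locked = false;           /* unlock_page() */
        page->on_lru = true;            /* putback_lru_page() */
        printf("released head once\n");
}

int main(void)
{
        struct toy_page head = { .head = &head, .locked = true };
        struct toy_page *ptes[NR_PTES];

        for (int i = 0; i < NR_PTES; i++)
                ptes[i] = &head;        /* every PTE maps the same THP */

        for (int i = 0; i < NR_PTES; i++)
                toy_release_pte_page(ptes[i]);  /* message printed once */

        return 0;
}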

> Then we may copy the following PTE-mapped pages with the page unlocked
> and on the LRU. I don't see a critical problem; the pages might just be
> taken on and off the LRU by others, e.g. vmscan, compaction, migration,
> etc. But no one could take the page away since try_to_unmap() would
> fail; it's just not very productive.
>
>
> > @@ -537,6 +548,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> >          pte_t *_pte;
> >          int none_or_zero = 0, result = 0, referenced = 0;
> >          bool writable = false;
> > +        LIST_HEAD(compound_pagelist);
> >
> >          for (_pte = pte; _pte < pte+HPAGE_PMD_NR;
> >               _pte++, address += PAGE_SIZE) {
> > @@ -561,13 +573,23 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> >                          goto out;
> >                  }
> >
> > -                /* TODO: teach khugepaged to collapse THP mapped with pte */
> > +                VM_BUG_ON_PAGE(!PageAnon(page), page);
> > +
> >                  if (PageCompound(page)) {
> > -                        result = SCAN_PAGE_COMPOUND;
> > -                        goto out;
> > -                }
> > +                        struct page *p;
> > +                        page = compound_head(page);
> >
> > -                VM_BUG_ON_PAGE(!PageAnon(page), page);
> > +                        /*
> > +                         * Check if we have dealt with the compound page
> > +                         * already
> > +                         */
> > +                        list_for_each_entry(p, &compound_pagelist, lru) {
> > +                                if (page == p)
> > +                                        break;
> > +                        }
> > +                        if (page == p)
> > +                                continue;
> > +                }
> >
> >                  /*
> >                   * We can do it before isolate_lru_page because the
> > @@ -640,6 +662,9 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> >                      page_is_young(page) || PageReferenced(page) ||
> >                      mmu_notifier_test_young(vma->vm_mm, address))
> >                          referenced++;
> > +
> > +                if (PageCompound(page))
> > +                        list_add_tail(&page->lru, &compound_pagelist);
> >          }
> >          if (likely(writable)) {
> >                  if (likely(referenced)) {
> > @@ -1185,11 +1210,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
> >                          goto out_unmap;
> >                  }
> >
> > -                /* TODO: teach khugepaged to collapse THP mapped with pte */
> > -                if (PageCompound(page)) {
> > -                        result = SCAN_PAGE_COMPOUND;
> > -                        goto out_unmap;
> > -                }
> > +                page = compound_head(page);
> >
> >                  /*
> >                   * Record which node the original page is from and save this
> > --
> > 2.26.0
> >
> >
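
For anyone skimming the thread, the second hunk boils down to "remember
which compound heads have already been isolated in this PMD range, and
skip their other subpages". A toy user-space sketch of that bookkeeping
(my own illustration with made-up names such as toy_page and seen(), not
the kernel's list_for_each_entry()-based code):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define NR_PTES 8

struct toy_page {
        struct toy_page *head;          /* compound head; self if not compound */
        bool compound;
        struct toy_page *next;          /* stand-in for the lru list linkage */
};

/* Has this compound head already been put on the local list? */
static bool seen(struct toy_page *list, struct toy_page *page)
{
        for (struct toy_page *p = list; p; p = p->next)
                if (p == page)
                        return true;
        return false;
}

int main(void)
{
        struct toy_page huge = { .head = &huge, .compound = true };
        struct toy_page regular = { .head = &regular };
        /* PTE range: seven subpages of one THP plus one small page */
        struct toy_page *ptes[NR_PTES] = {
                &huge, &huge, &huge, &regular, &huge, &huge, &huge, &huge,
        };
        struct toy_page *compound_pagelist = NULL;      /* local, per-range */
        int isolated = 0;

        for (int i = 0; i < NR_PTES; i++) {
                struct toy_page *page = ptes[i]->head;

                if (page->compound) {
                        if (seen(compound_pagelist, page))
                                continue;       /* dealt with this THP already */
                        page->next = compound_pagelist;
                        compound_pagelist = page;
                }
                isolated++;                     /* lock + LRU isolation would go here */
        }

        printf("isolated %d distinct pages\n", isolated);       /* prints 2 */
        return 0;
}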

--
Kirill A. Shutemov