Re: [PATCH v2 1/1] mm/madvise: enhance lazyfreeing with mTHP in madvise_free

From: Barry Song
Date: Thu Mar 07 2024 - 04:33:44 EST


On Thu, Mar 7, 2024 at 10:07 PM Ryan Roberts <ryan.roberts@xxxxxxx> wrote:
>
> On 07/03/2024 08:10, Barry Song wrote:
> > On Thu, Mar 7, 2024 at 9:00 PM Lance Yang <ioworker0@xxxxxxxxx> wrote:
> >>
> >> Hey Barry,
> >>
> >> Thanks for taking the time to review!
> >>
> >> On Thu, Mar 7, 2024 at 3:00 PM Barry Song <21cnbao@xxxxxxxxx> wrote:
> >>>
> >>> On Thu, Mar 7, 2024 at 7:15 PM Lance Yang <ioworker0@xxxxxxxxx> wrote:
> >>>>
> >> [...]
> >>>> +static inline bool can_mark_large_folio_lazyfree(unsigned long addr,
> >>>> +                                                 struct folio *folio, pte_t *start_pte)
> >>>> +{
> >>>> +        int nr_pages = folio_nr_pages(folio);
> >>>> +        fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> >>>> +
> >>>> +        for (int i = 0; i < nr_pages; i++)
> >>>> +                if (page_mapcount(folio_page(folio, i)) != 1)
> >>>> +                        return false;
> >>>
> >>> We have moved to folio_estimated_sharers(); though it is not precise, we
> >>> no longer do this kind of check, which loops over every subpage and
> >>> depends on each subpage's mapcount.
> >>
> >> If we don't check each subpage's mapcount, and there is a CoW folio
> >> associated with this folio that is smaller than this folio, should we
> >> still mark this folio as lazyfree?
> >
> > I agree, this is true. However, we've somehow accepted the fact that
> > folio_likely_mapped_shared() can result in false negatives or false
> > positives to balance the overhead. So I really don't know :-)
> >
> > Maybe David and Vishal can give some comments here.
> >
> >>
> >>> BTW, do we need to rebase our work against David's changes[1]?
> >>> [1] https://lore.kernel.org/linux-mm/20240227201548.857831-1-david@xxxxxxxxxx/
> >>
> >> Yes, we should rebase our work against David’s changes.
> >>
> >>>
> >>>> +
> >>>> +        return nr_pages == folio_pte_batch(folio, addr, start_pte,
> >>>> +                                           ptep_get(start_pte), nr_pages, flags, NULL);
> >>>> +}
> >>>> +
> >>>> static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> >>>>                                   unsigned long end, struct mm_walk *walk)
> >>>>
> >>>> @@ -676,11 +690,45 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> >>>>                  */
> >>>>                 if (folio_test_large(folio)) {
> >>>>                         int err;
> >>>> +                       unsigned long next_addr, align;
> >>>>
> >>>> -                       if (folio_estimated_sharers(folio) != 1)
> >>>> -                               break;
> >>>> -                       if (!folio_trylock(folio))
> >>>> -                               break;
> >>>> +                       if (folio_estimated_sharers(folio) != 1 ||
> >>>> +                           !folio_trylock(folio))
> >>>> +                               goto skip_large_folio;
> >>>
> >>>
> >>> I don't think we can skip all the PTEs for nr_pages, as some of them might be
> >>> pointing to other folios.
> >>>
> >>> For example, for a large folio mapped by 16 PTEs, if you MADV_DONTNEED
> >>> PTE15-PTE16 and then write the memory covered by PTE15 and PTE16, you get
> >>> page faults, so PTE15 and PTE16 will point to two different small folios.
> >>> We can only skip when we are sure that nr_pages == folio_pte_batch().
> >>
> >> Agreed. Thanks for pointing that out.
> >>
> >>>
> >>>> +
> >>>> +                       align = folio_nr_pages(folio) * PAGE_SIZE;
> >>>> +                       next_addr = ALIGN_DOWN(addr + align, align);
> >>>> +
> >>>> +                       /*
> >>>> +                        * If we mark only the subpages as lazyfree, or
> >>>> +                        * cannot mark the entire large folio as lazyfree,
> >>>> +                        * then just split it.
> >>>> +                        */
> >>>> +                       if (next_addr > end || next_addr - addr != align ||
> >>>> +                           !can_mark_large_folio_lazyfree(addr, folio, pte))
> >>>> +                               goto split_large_folio;
> >>>> +
> >>>> +                       /*
> >>>> +                        * Avoid unnecessary folio splitting if the large
> >>>> +                        * folio is entirely within the given range.
> >>>> +                        */
> >>>> +                       folio_clear_dirty(folio);
> >>>> +                       folio_unlock(folio);
> >>>> +                       for (; addr != next_addr; pte++, addr += PAGE_SIZE) {
> >>>> +                               ptent = ptep_get(pte);
> >>>> +                               if (pte_young(ptent) || pte_dirty(ptent)) {
> >>>> +                                       ptent = ptep_get_and_clear_full(
> >>>> +                                               mm, addr, pte, tlb->fullmm);
> >>>> +                                       ptent = pte_mkold(ptent);
> >>>> +                                       ptent = pte_mkclean(ptent);
> >>>> +                                       set_pte_at(mm, addr, pte, ptent);
> >>>> +                                       tlb_remove_tlb_entry(tlb, pte, addr);
> >>>> +                               }
> >>>
> >>> Can we do this in batches? For a CONT-PTE mapped large folio, you are
> >>> unfolding and folding it again, which seems quite expensive.
>
> I'm not convinced we should be doing this in batches. We want the initial
> folio_pte_batch() to be as loose as possible regarding permissions so that we
> reduce our chances of splitting folios to the min. (e.g. ignore SW bits like
> soft dirty, etc). I think it might be possible that some PTEs are RO and
> others RW too (e.g. due to CoW - although with the current CoW impl, probably
> not. But it's fragile to assume that). Anyway, if we do an initial batch that
> ignores all

You are correct. I believe this scenario could indeed occur. For instance,
if process A forks process B and then unmaps itself, B is left as the sole
process owning the large folio. The current wp_page_reuse() will reuse the
PTEs one by one as the corresponding subpages are written, which can make
some of the PTEs writable while the others remain read-only.
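
To make this concrete, here is a rough userspace sketch of the scenario
(just an illustration, assuming the region below ends up backed by a
single 16-page mTHP; the sizes and the crude sleep-based synchronization
are mine, not from the patch):

#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        size_t sz = 16 * 4096;          /* one 16-page (64KiB) region */
        char *buf = aligned_alloc(sz, sz);

        memset(buf, 1, sz);             /* fault in; PTEs writable */

        if (fork() == 0) {
                /* child: all 16 PTEs are now read-only due to CoW */
                sleep(1);               /* crudely wait for parent to exit */
                buf[0] = 2;             /* write a single subpage */
                /*
                 * The child is now the sole owner, so wp_page_reuse()
                 * can make just this one PTE writable again, leaving
                 * the other 15 PTEs of the same large folio read-only.
                 */
                pause();
        }
        exit(0);                        /* parent exits immediately */
}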

> that, then do this bit as a batch, you will end up smearing all the PTEs with
> whatever properties were set on the first PTE, which probably isn't right.
>
> I've done a similar conversion for madvise_cold_or_pageout_pte_range() as part
> of my swap-out series v4 (hoping to post imminently, but still working out a
> latent bug that it triggers). I use ptep_test_and_clear_young() in that, which
> arm64 can apply per-PTE while avoiding a contpte unfold/fold. I know you have
> to clear dirty here too, but I think this pattern is preferable.

Nice to know that ptep_test_and_clear_young() won't unfold and refold
CONT-PTEs. I probably missed this part of your CONT-PTE series as I was
quite busy with other things :-)
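
Out of curiosity, below is my rough sketch of the pattern you describe,
based only on this mail (assuming vma comes from walk->vma, and that the
dirty bit still needs a get/clear/set round trip); it is surely not the
actual code from your series:

for (; addr != next_addr; pte++, addr += PAGE_SIZE) {
        ptent = ptep_get(pte);

        /*
         * On arm64 this can clear the access flag per PTE without
         * unfolding/refolding a contpte block.
         */
        if (pte_young(ptent))
                ptep_test_and_clear_young(vma, addr, pte);

        if (pte_dirty(ptent)) {
                /* ptent re-read here already has the young bit cleared */
                ptent = ptep_get_and_clear_full(mm, addr, pte,
                                                tlb->fullmm);
                ptent = pte_mkclean(ptent);
                set_pte_at(mm, addr, pte, ptent);
                tlb_remove_tlb_entry(tlb, pte, addr);
        }
}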

>
> FYI, my swap-out series also halfway-batches madvise_free_pte_range() so that I
> can batch free_swap_and_cache() for the swap entry case. Ideally the work you
> are doing here would be rebased on top of that and plug into the approach
> implemented there (subject to others' views, of course).
>
> I'll cc you when I post it.
>
> >>
> >> Thanks for your suggestion. I'll do this in batches in v3.
> >>
> >> Thanks again for your time!
> >>
> >> Best,
> >> Lance
> >>
> >>>
> >>>> +                       }
> >>>> +                       folio_mark_lazyfree(folio);
> >>>> +                       goto next_folio;
> >>>> +
> >>>> +split_large_folio:
> >>>>                         folio_get(folio);
> >>>>                         arch_leave_lazy_mmu_mode();
> >>>>                         pte_unmap_unlock(start_pte, ptl);
> >>>> @@ -688,13 +736,28 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> >>>>                         err = split_folio(folio);
> >>>>                         folio_unlock(folio);
> >>>>                         folio_put(folio);
> >>>> -                       if (err)
> >>>> -                               break;
> >>>> -                       start_pte = pte =
> >>>> -                               pte_offset_map_lock(mm, pmd, addr, &ptl);
> >>>> -                       if (!start_pte)
> >>>> -                               break;
> >>>> -                       arch_enter_lazy_mmu_mode();
> >>>> +
> >>>> +                       /*
> >>>> +                        * If the large folio is locked or cannot be split,
> >>>> +                        * we just skip it.
> >>>> +                        */
> >>>> +                       if (err) {
> >>>> +skip_large_folio:
> >>>> +                               if (next_addr >= end)
> >>>> +                                       break;
> >>>> +                               pte += (next_addr - addr) / PAGE_SIZE;
> >>>> +                               addr = next_addr;
> >>>> +                       }
> >>>> +
> >>>> +                       if (!start_pte) {
> >>>> +                               start_pte = pte = pte_offset_map_lock(
> >>>> +                                       mm, pmd, addr, &ptl);
> >>>> +                               if (!start_pte)
> >>>> +                                       break;
> >>>> +                               arch_enter_lazy_mmu_mode();
> >>>> +                       }
> >>>> +
> >>>> +next_folio:
> >>>>                         pte--;
> >>>>                         addr -= PAGE_SIZE;
> >>>>                         continue;
> >>>> --
> >>>> 2.33.1
> >>>>
> >

Thanks
Barry