Re: [RFC PATCH] mm: hold PTL from the first PTE while reclaiming a large folio

From: Barry Song
Date: Tue Mar 05 2024 - 05:45:29 EST


On Tue, Mar 5, 2024 at 10:21 PM Barry Song <21cnbao@xxxxxxxxx> wrote:
>
> On Tue, Mar 5, 2024 at 10:12 PM Huang, Ying <ying.huang@xxxxxxxxx> wrote:
> >
> > Barry Song <21cnbao@xxxxxxxxx> writes:
> >
> > > On Tue, Mar 5, 2024 at 8:55 PM Huang, Ying <ying.huang@xxxxxxxxx> wrote:
> > >>
> > >> Barry Song <21cnbao@xxxxxxxxx> writes:
> > >>
> > >> > On Tue, Mar 5, 2024 at 10:15 AM David Hildenbrand <david@xxxxxxxxxx> wrote:
> > >> >> > But we did "resolve" those bugs by leaving all PTEs entirely
> > >> >> > untouched if we found some PTEs were skipped in try_to_unmap_one [1].
> > >> >> >
> > >> >> > When we find we only got the PTL from the 2nd or 3rd PTE but not
> > >> >> > the 1st one, we entirely give up on try_to_unmap_one and leave
> > >> >> > all PTEs untouched.
> > >> >> >
> > >> >> > /* we are not starting from head */
> > >> >> > if (!IS_ALIGNED((unsigned long)pvmw.pte, CONT_PTES * sizeof(*pvmw.pte))) {
> > >> >> >         ret = false;
> > >> >> >         atomic64_inc(&perf_stat.mapped_walk_start_from_non_head);
> > >> >> >         set_pte_at(mm, address, pvmw.pte, pteval);
> > >> >> >         page_vma_mapped_walk_done(&pvmw);
> > >> >> >         break;
> > >> >> > }
> > >> >> > This will ensure all PTEs still have a unified state such as CONT-PTE
> > >> >> > after try_to_unmap fails.
> > >> >> > I feel this could have some false positives, because when racing
> > >> >> > with unmap, the 1st PTE might really become pte_none. So explicitly
> > >> >> > holding the PTL from the 1st PTE seems a better way.
> > >> >>
> > >> >> Can we estimate the "cost" of holding the PTL?
> > >> >>
> > >> >
> > >> > This just moves PTL acquisition one or two PTEs earlier in those corner
> > >> > cases. In normal cases, it doesn't affect when the PTL is taken.
> > >>
> > >> The mTHP may be mapped at the end of the page table. In that case, the PTL
> > >> will be held longer. Or am I missing something?
> > >
> > > No. This patch doesn't change when we release the PTL, only when we
> > > acquire it.
> > >
> > > When the original code iterates over the nr_pages PTEs of a large folio,
> > > it skips invalid PTEs and acquires the PTL once it meets a valid one. So
> > > if it reads intermediate PTE values that other threads are modifying, it
> > > might skip PTE0, or sometimes PTE0 and PTE1, according to my test. But by
> > > the time it arrives at PTE2, other threads have likely written a new value,
> > > so we begin to hold the PTL and iterate till the end of the large folio.
> >
> > Is there any guarantee that the mTHP will always be mapped at the
> > beginning of the page table (PTE0)? IIUC, mTHP can be mapped at PTE496.
> > If so, with your patch, PTL will be held from PTE0 instead of PTE496 in
> > some cases.
>
> I agree. But in another discussion [1], the plan is that if we find a large
> folio has been deferred split, we split it before try_to_unmap and pageout.
> Otherwise, we may end up with lots of redundant I/O, because PTE0-495 will
> still be pageout()-ed.
>
> [1] https://lore.kernel.org/linux-mm/a4a9054f-2040-4f70-8d10-a5af4972e5aa@xxxxxxx/

I thought about this again; it seems we can cope with it even without the
above plan by:

+ if (folio_test_large(folio) && list_empty(&folio->_deferred_list))
+         flags |= TTU_SYNC;

If a folio has been deferred split, there is no point in having the
optimization this patch wants to provide for those corner cases. Only when
we know the folio is still entirely mapped do we apply it. This should
shrink the chance to quite a small one, though a bit still remains.
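
To make that concrete, here is a minimal, hypothetical sketch of how such a
check could be wired into the reclaim caller. The wrapper function name and
the TTU_BATCH_FLUSH baseline are illustrative assumptions, not the actual
patch; only the folio_test_large()/_deferred_list check and TTU_SYNC reuse
follow the snippet above:

#include <linux/mm.h>
#include <linux/rmap.h>

/*
 * Hypothetical helper (not the actual patch): only request the stricter
 * unmap behaviour -- here reusing TTU_SYNC as in the snippet above -- for
 * large folios that are not on the deferred-split list, i.e. folios we
 * expect to still be entirely mapped.
 */
static void reclaim_unmap_folio(struct folio *folio)
{
        enum ttu_flags flags = TTU_BATCH_FLUSH;        /* assumed baseline */

        if (folio_test_large(folio) && list_empty(&folio->_deferred_list))
                flags |= TTU_SYNC;

        try_to_unmap(folio, flags);
}

With that, small folios and already deferred-split large folios keep today's
behaviour, and only fully-mapped large folios pay whatever extra cost the
earlier PTL acquisition has.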

>
> >
> > --
> > Best Regards,
> > Huang, Ying
> >
> > > The proposal is that we take the PTL directly from PTE0, so we don't read
> > > intermediate values for the head of the nr_pages PTEs. This will ensure
> > > a large folio is either completely unmapped or completely mapped, but
> > > not partially mapped and partially unmapped.
> > >
> > >>

Thanks
Barry