Re: [PATCH v6 5/9] mm: thp: Extend THP to allocate anonymous large folios

From: Ryan Roberts
Date: Wed Nov 01 2023 - 09:56:41 EST


On 30/10/2023 23:25, John Hubbard wrote:
> On 10/30/23 04:43, Ryan Roberts wrote:
>> On 28/10/2023 00:04, John Hubbard wrote:
>>> On 9/29/23 04:44, Ryan Roberts wrote:
> ...
>>>>    +static bool vmf_pte_range_changed(struct vm_fault *vmf, int nr_pages)
>>>> +{
>>>> +    int i;
>>>> +
>>>> +    if (nr_pages == 1)
>>>> +        return vmf_pte_changed(vmf);
>>>> +
>>>> +    for (i = 0; i < nr_pages; i++) {
>>>> +        if (!pte_none(ptep_get_lockless(vmf->pte + i)))
>>>> +            return true;
>>>
>>> This seems like something different than the function name implies.
>>> It's really confusing: for a single page case, return true if the
>>> pte in the page tables has changed, yes that is very clear.
>>>
>>> But then for multiple page cases, which is really the main
>>> focus here--for that, claim that the range has changed if any
>>> pte is present (!pte_none). Can you please help me understand
>>> what this means?
>>
>> Yes I understand your confusion. Although I'm confident that the code is
>> correct, it's a bad name - I'll make the excuse that this has evolved through
>> rebasing to cope with additions to UFFD. Perhaps something like
>> vmf_is_large_folio_suitable() is a better name.
>>
>> It used to be that we would only take the do_anonymous_page() path if the pte
>> was none; i.e. this is the first time we are faulting on an address covered by
>> an anon VMA and we need to allocate some memory. But more recently we also end
>> up here if the pte is a uffd_wp marker. So for a single pte, instead of checking
>> none, we can check if the pte has changed from our original check (where we
>> determined it was a uffd_wp marker or none). But for multiple ptes, we don't
>> have storage to store all the original ptes from the first check.
>>
>> Fortunately, if uffd is in use for a vma, then we don't want to use a large
>> folio anyway (this would break uffd semantics because we would no longer get a
>> fault for every page). So we only care about the "same but not none" case for
>> nr_pages=1.
>>
>> Would changing the name to vmf_is_large_folio_suitable() help here?
>
> Yes it would! And adding in a sentence or two from above about the uffd, as
> a function-level comment, might be just the right amount of demystification
> for the code.

Actually I don't think the name I proposed is quite right either - this gets
called for small folios too.

I think it's cleaner to change the name to pte_range_none() and strip out the
nr_pages==1 case. The checking-for-none part is required by alloc_anon_folio()
and needs to be safe without holding the PTL. vmf_pte_changed() is not safe
without the lock. So I've just hoisted the nr_pages==1 case directly into
do_anonymous_page(). Shout if you think we can do better:


diff --git a/mm/memory.c b/mm/memory.c
index 569c828b1cdc..b48e4de1bf20 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4117,19 +4117,16 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	return ret;
 }
 
-static bool vmf_pte_range_changed(struct vm_fault *vmf, int nr_pages)
+static bool pte_range_none(pte_t *pte, int nr_pages)
 {
 	int i;
 
-	if (nr_pages == 1)
-		return vmf_pte_changed(vmf);
-
 	for (i = 0; i < nr_pages; i++) {
-		if (!pte_none(ptep_get_lockless(vmf->pte + i)))
-			return true;
+		if (!pte_none(ptep_get_lockless(pte + i)))
+			return false;
 	}
 
-	return false;
+	return true;
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
@@ -4170,7 +4167,7 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
 	while (orders) {
 		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
 		vmf->pte = pte + pte_index(addr);
-		if (!vmf_pte_range_changed(vmf, 1 << order))
+		if (pte_range_none(vmf->pte, 1 << order))
 			break;
 		order = next_order(&orders, order);
 	}
@@ -4280,7 +4277,8 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, addr, &vmf->ptl);
 	if (!vmf->pte)
 		goto release;
-	if (vmf_pte_range_changed(vmf, nr_pages)) {
+	if ((nr_pages == 1 && vmf_pte_changed(vmf)) ||
+	    (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages))) {
 		for (i = 0; i < nr_pages; i++)
 			update_mmu_tlb(vma, addr + PAGE_SIZE * i, vmf->pte + i);
 		goto release;