Re: [RFC 13/20] mm/tlb: introduce tlb_start_ptes() and tlb_end_ptes()

From: Nadav Amit
Date: Mon Feb 01 2021 - 02:30:08 EST


> On Jan 31, 2021, at 2:07 AM, Damian Tometzki <linux@xxxxxxxxxxx> wrote:
>
> On Sat, 30. Jan 16:11, Nadav Amit wrote:
>> From: Nadav Amit <namit@xxxxxxxxxx>
>>
>> Introduce tlb_start_ptes() and tlb_end_ptes(), which are to be called
>> before and after PTEs are updated while TLB flushes are deferred.
>> This will later be used for fine-granularity detection of deferred
>> TLB flushes.
>>
>> In the meantime, move flush_tlb_batched_pending() into
>> tlb_start_ptes(). It was not called by wp_pte() and
>> clean_record_pte() in mapping_dirty_helpers.c, which might be a bug.
>>
>> No additional functional change is intended.
>>
>> Signed-off-by: Nadav Amit <namit@xxxxxxxxxx>
>> Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
>> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
>> Cc: Andy Lutomirski <luto@xxxxxxxxxx>
>> Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
>> Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
>> Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
>> Cc: Will Deacon <will@xxxxxxxxxx>
>> Cc: Yu Zhao <yuzhao@xxxxxxxxxx>
>> Cc: Nick Piggin <npiggin@xxxxxxxxx>
>> Cc: x86@xxxxxxxxxx
>> ---
>> fs/proc/task_mmu.c | 2 ++
>> include/asm-generic/tlb.h | 18 ++++++++++++++++++
>> mm/madvise.c | 6 ++++--
>> mm/mapping_dirty_helpers.c | 15 +++++++++++++--
>> mm/memory.c | 2 ++
>> mm/mprotect.c | 3 ++-
>> 6 files changed, 41 insertions(+), 5 deletions(-)
>>
>> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
>> index 4cd048ffa0f6..d0cce961fa5c 100644
>> --- a/fs/proc/task_mmu.c
>> +++ b/fs/proc/task_mmu.c
>> @@ -1168,6 +1168,7 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
>> return 0;
>>
>> pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
>> + tlb_start_ptes(&cp->tlb);
>> for (; addr != end; pte++, addr += PAGE_SIZE) {
>> ptent = *pte;
>>
>> @@ -1190,6 +1191,7 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
>> tlb_flush_pte_range(&cp->tlb, addr, PAGE_SIZE);
>> ClearPageReferenced(page);
>> }
>> + tlb_end_ptes(&cp->tlb);
>> pte_unmap_unlock(pte - 1, ptl);
>> cond_resched();
>> return 0;
>> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
>> index 041be2ef4426..10690763090a 100644
>> --- a/include/asm-generic/tlb.h
>> +++ b/include/asm-generic/tlb.h
>> @@ -58,6 +58,11 @@
>> * Defaults to flushing at tlb_end_vma() to reset the range; helps when
>> * there's large holes between the VMAs.
>> *
>> + * - tlb_start_ptes() / tlb_end_ptes; makr the start / end of PTEs change.
>
> Hello Nadav,
>
> short nit: makr/mark

Thanks! I will fix it.
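
For anyone reading the trimmed quote above: the intent of the two
helpers is roughly the sketch below. This is a paraphrase of the
changelog, not the exact hunk from include/asm-generic/tlb.h (which is
cut off in the quote), so take the macro bodies as assumptions:

/*
 * Sketch only (assumed from the changelog, not the exact patch hunk):
 *
 * tlb_start_ptes() marks the start of a batch of PTE updates whose TLB
 * flushes are deferred; for now it merely absorbs the
 * flush_tlb_batched_pending() call that callers used to issue (or, as
 * in mapping_dirty_helpers.c, apparently forgot to issue) themselves.
 *
 * tlb_end_ptes() marks the end of the batch and is a no-op here; later
 * patches in the series would hook the fine-granularity deferred-flush
 * detection into it.
 */
#define tlb_start_ptes(tlb)						\
	do {								\
		struct mmu_gather *_tlb = (tlb);			\
									\
		flush_tlb_batched_pending(_tlb->mm);			\
	} while (0)

static inline void tlb_end_ptes(struct mmu_gather *tlb) { }

Callers bracket the PTE loop with the pair while holding the PTE lock,
as in the clear_refs_pte_range() hunk quoted above.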