Re: [RFC 13/20] mm/tlb: introduce tlb_start_ptes() and tlb_end_ptes()

From: Nadav Amit
Date: Mon Feb 01 2021 - 18:01:01 EST


> On Feb 1, 2021, at 5:19 AM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> On Sat, Jan 30, 2021 at 04:11:25PM -0800, Nadav Amit wrote:
>> +#define tlb_start_ptes(tlb) \
>> + do { \
>> + struct mmu_gather *_tlb = (tlb); \
>> + \
>> + flush_tlb_batched_pending(_tlb->mm); \
>> + } while (0)
>> +
>> +static inline void tlb_end_ptes(struct mmu_gather *tlb) { }
>
>> tlb_change_page_size(tlb, PAGE_SIZE);
>> orig_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
>> - flush_tlb_batched_pending(mm);
>> + tlb_start_ptes(tlb);
>> arch_enter_lazy_mmu_mode();
>> for (; addr < end; pte++, addr += PAGE_SIZE) {
>> ptent = *pte;
>> @@ -468,6 +468,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>> }
>>
>> arch_leave_lazy_mmu_mode();
>> + tlb_end_ptes(tlb);
>> pte_unmap_unlock(orig_pte, ptl);
>> if (pageout)
>> reclaim_pages(&page_list);
>
> I don't like how you're doubling up on arch_*_lazy_mmu_mode(). It seems
> to me those should be folded into tlb_{start,end}_ptes().
>
> Alternatively, the even more work approach would be to, add an optional
> @tlb argument to pte_offset_map_lock()/pte_unmap_unlock() and friends.

Not too fond of the “even more work” approach. I still have debts I need
to pay to the kernel community on old patches that did not make it
through.
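
If I understand the suggestion correctly, it would mean something like
the following (a hypothetical helper, only to illustrate the churn;
every caller would need to be converted):

	/*
	 * Hypothetical illustration only: a pte_offset_map_lock()
	 * variant that takes an optional @tlb and starts the PTE
	 * batch itself.  Not an existing interface.
	 */
	static inline pte_t *tlb_pte_offset_map_lock(struct mmu_gather *tlb,
			struct mm_struct *mm, pmd_t *pmd,
			unsigned long addr, spinlock_t **ptlp)
	{
		pte_t *pte = pte_offset_map_lock(mm, pmd, addr, ptlp);

		if (tlb)
			tlb_start_ptes(tlb);
		return pte;
	}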

I will fold arch_*_lazy_mmu_mode() into tlb_{start,end}_ptes() as you
suggested. Admittedly, I do not understand arch_*_lazy_mmu_mode() very
well: I would have assumed it is needed only when PTEs are established,
and that in other cases the arch code would hook directly into the TLB
flushing interface.
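
Concretely, I assume the fold would look something like this (an
untested sketch of the generic definitions from this patch; per-arch
variants would need the same treatment):

#define tlb_start_ptes(tlb)					\
	do {							\
		struct mmu_gather *_tlb = (tlb);		\
								\
		flush_tlb_batched_pending(_tlb->mm);		\
		arch_enter_lazy_mmu_mode();			\
	} while (0)

static inline void tlb_end_ptes(struct mmu_gather *tlb)
{
	arch_leave_lazy_mmu_mode();
}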

However, based on the code, it seems that powerpc does not even flush PTEs
that are established (only removed/demoted). Probably I am missing
something. I will just blindly fold it.
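
For completeness, the madvise_cold_or_pageout_pte_range() hunk quoted
above would then drop its explicit arch_*_lazy_mmu_mode() calls:

	tlb_change_page_size(tlb, PAGE_SIZE);
	orig_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
	tlb_start_ptes(tlb);	/* now also enters lazy MMU mode */
	for (; addr < end; pte++, addr += PAGE_SIZE) {
		ptent = *pte;
		...
	}
	tlb_end_ptes(tlb);	/* now also leaves lazy MMU mode */
	pte_unmap_unlock(orig_pte, ptl);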