Re: [PATCH 2/3] powerpc/mm: handle VM_FAULT_RETRY earlier

From: Aneesh Kumar K.V
Date: Tue Mar 21 2017 - 05:12:14 EST


Laurent Dufour <ldufour@xxxxxxxxxxxxxxxxxx> writes:

> In do_page_fault(), if handle_mm_fault() returns VM_FAULT_RETRY, retry
> the page fault handling before anything else.
>
> This would simplify the handling of the mmap_sem lock in this part of
> the code.
>
> Signed-off-by: Laurent Dufour <ldufour@xxxxxxxxxxxxxxxxxx>
> ---
> arch/powerpc/mm/fault.c | 67 ++++++++++++++++++++++++++++---------------------
> 1 file changed, 38 insertions(+), 29 deletions(-)
>
> diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
> index ee09604bbe12..2a6bc7e6e69a 100644
> --- a/arch/powerpc/mm/fault.c
> +++ b/arch/powerpc/mm/fault.c
> @@ -434,6 +434,26 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
>  	 * the fault.
>  	 */
>  	fault = handle_mm_fault(vma, address, flags);
> +
> +	/*
> +	 * Handle the retry right now, the mmap_sem has been released in that
> +	 * case.
> +	 */
> +	if (unlikely(fault & VM_FAULT_RETRY)) {
> +		/* We retry only once */
> +		if (flags & FAULT_FLAG_ALLOW_RETRY) {
> +			/*
> +			 * Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
> +			 * of starvation.
> +			 */
> +			flags &= ~FAULT_FLAG_ALLOW_RETRY;
> +			flags |= FAULT_FLAG_TRIED;
> +			if (!fatal_signal_pending(current))
> +				goto retry;
> +		}
> +		/* We will enter mm_fault_error() below */
> +	}
> +
>  	if (unlikely(fault & (VM_FAULT_RETRY|VM_FAULT_ERROR))) {
>  		if (fault & VM_FAULT_SIGSEGV)
>  			goto bad_area;
> @@ -445,38 +465,27 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
>  	}

We could make this even simpler by also handling, in that new block, the
VM_FAULT_RETRY case where FAULT_FLAG_ALLOW_RETRY is not set, instead of
falling through to the error path below. But I guess that can be done
later?
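
Something along these lines maybe, completely untested (the exact return
value, and how it lines up with what mm_fault_error() does for the retry
case, would need a closer look):

	if (unlikely(fault & VM_FAULT_RETRY)) {
		/* We retry only once */
		if (flags & FAULT_FLAG_ALLOW_RETRY) {
			/*
			 * Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
			 * of starvation.
			 */
			flags &= ~FAULT_FLAG_ALLOW_RETRY;
			flags |= FAULT_FLAG_TRIED;
			if (!fatal_signal_pending(current))
				goto retry;
		}
		/*
		 * Second retry or a fatal signal: the mmap_sem has
		 * already been released by handle_mm_fault(), so don't
		 * fall through to the VM_FAULT_ERROR handling below.
		 * In user mode just return, the fatal signal will be
		 * delivered on the way out; in kernel mode let
		 * bad_page_fault() sort it out.
		 */
		rc = user_mode(regs) ? 0 : SIGBUS;
		goto bail;
	}

That would keep all of the VM_FAULT_RETRY handling in one place.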

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxxxxxxxxxxx>


>
>  	/*
> -	 * Major/minor page fault accounting is only done on the
> -	 * initial attempt. If we go through a retry, it is extremely
> -	 * likely that the page will be found in page cache at that point.
> +	 * Major/minor page fault accounting.
>  	 */
> -	if (flags & FAULT_FLAG_ALLOW_RETRY) {
> -		if (fault & VM_FAULT_MAJOR) {
> -			current->maj_flt++;
> -			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
> -				      regs, address);
> +	if (fault & VM_FAULT_MAJOR) {
> +		current->maj_flt++;
> +		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
> +			      regs, address);
>  #ifdef CONFIG_PPC_SMLPAR
> -			if (firmware_has_feature(FW_FEATURE_CMO)) {
> -				u32 page_ins;
> -
> -				preempt_disable();
> -				page_ins = be32_to_cpu(get_lppaca()->page_ins);
> -				page_ins += 1 << PAGE_FACTOR;
> -				get_lppaca()->page_ins = cpu_to_be32(page_ins);
> -				preempt_enable();
> -			}
> -#endif /* CONFIG_PPC_SMLPAR */
> -		} else {
> -			current->min_flt++;
> -			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
> -				      regs, address);
> -		}
> -		if (fault & VM_FAULT_RETRY) {
> -			/* Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
> -			 * of starvation. */
> -			flags &= ~FAULT_FLAG_ALLOW_RETRY;
> -			flags |= FAULT_FLAG_TRIED;
> -			goto retry;
> +		if (firmware_has_feature(FW_FEATURE_CMO)) {
> +			u32 page_ins;
> +
> +			preempt_disable();
> +			page_ins = be32_to_cpu(get_lppaca()->page_ins);
> +			page_ins += 1 << PAGE_FACTOR;
> +			get_lppaca()->page_ins = cpu_to_be32(page_ins);
> +			preempt_enable();
>  		}
> +#endif /* CONFIG_PPC_SMLPAR */
> +	} else {
> +		current->min_flt++;
> +		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
> +			      regs, address);
>  	}
>
>  	up_read(&mm->mmap_sem);
> --
> 2.7.4