Re: [PATCH 6.4 00/28] 6.4.1-rc1 review

From: Guenter Roeck
Date: Fri Jun 30 2023 - 23:30:06 EST


On Fri, Jun 30, 2023 at 06:24:49PM -0700, Linus Torvalds wrote:
> On Fri, 30 Jun 2023 at 15:51, Guenter Roeck <linux@xxxxxxxxxxxx> wrote:
> >
> > There is one more, unfortunately.
> >
> > Building xtensa:de212:kc705-nommu:nommu_kc705_defconfig ... failed
>
> Heh. I didn't even realize that anybody would ever do
> lock_mm_and_find_vma() code on a nommu platform.
>
> With nommu, handle_mm_fault() will just BUG(), so it's kind of
> pointless to do any of this at all, and I didn't expect anybody to
> have this page faulting path that just causes that BUG() for any
> faults.
>
> But it turns out xtensa has a notion of protection faults even for
> NOMMU configs:
>
> config PFAULT
> 	bool "Handle protection faults" if EXPERT && !MMU
> 	default y
> 	help
> 	  Handle protection faults. MMU configurations must enable it.
> 	  noMMU configurations may disable it if used memory map never
> 	  generates protection faults or faults are always fatal.
>
> 	  If unsure, say Y.
>
> which is why it violated my expectations so badly.
>
> I'm not sure if that protection fault handling really ever gets quite
> this far (it certainly should *not* make it to the BUG() in
> handle_mm_fault()), but I think the attached patch is likely the right
> thing to do.
>
> Can you check if it fixes that xtensa case? It looks
> ObviouslyCorrect(tm) to me, but considering that I clearly missed this
> case existing AT ALL, it might be best to double-check.
>
> Linus

Yes, the patch below fixes the problem.

Building xtensa:de212:kc705-nommu:nommu_kc705_defconfig ... running ......... passed

Thanks,
Guenter

> include/linux/mm.h |  5 +++--
> mm/nommu.c         | 11 +++++++++++
> 2 files changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 39aa409e84d5..4f2c33c273eb 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2323,6 +2323,9 @@ void pagecache_isize_extended(struct inode *inode, loff_t from, loff_t to);
>  void truncate_pagecache_range(struct inode *inode, loff_t offset, loff_t end);
>  int generic_error_remove_page(struct address_space *mapping, struct page *page);
>
> +struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
> +		unsigned long address, struct pt_regs *regs);
> +
>  #ifdef CONFIG_MMU
>  extern vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
>  			unsigned long address, unsigned int flags,
> @@ -2334,8 +2337,6 @@ void unmap_mapping_pages(struct address_space *mapping,
>  		pgoff_t start, pgoff_t nr, bool even_cows);
>  void unmap_mapping_range(struct address_space *mapping,
>  		loff_t const holebegin, loff_t const holelen, int even_cows);
> -struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
> -		unsigned long address, struct pt_regs *regs);
>  #else
>  static inline vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
>  		unsigned long address, unsigned int flags,
> diff --git a/mm/nommu.c b/mm/nommu.c
> index 37d0b03143f1..fdc392735ec6 100644
> --- a/mm/nommu.c
> +++ b/mm/nommu.c
> @@ -630,6 +630,17 @@ struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
>  }
>  EXPORT_SYMBOL(find_vma);
>
> +/*
> + * At least xtensa ends up having protection faults even with no
> + * MMU.. No stack expansion, at least.
> + */
> +struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
> +			unsigned long addr, struct pt_regs *regs)
> +{
> +	mmap_read_lock(mm);
> +	return vma_lookup(mm, addr);
> +}
> +
>  /*
>   * expand a stack to a given address
>   * - not supported under NOMMU conditions