Re: [BUG] arm64: an infinite loop in generic_perform_write()

From: Catalin Marinas
Date: Thu Jun 24 2021 - 14:56:04 EST


On Thu, Jun 24, 2021 at 04:27:17PM +0000, Al Viro wrote:
> On Thu, Jun 24, 2021 at 02:22:27PM +0100, Robin Murphy wrote:
> > FWIW I think the only way to make the kernel behaviour any more robust here
> > would be to make the whole uaccess API more expressive, such that rather
> > than simply saying "I only got this far" it could actually differentiate
> > between stopping due to a fault which may be recoverable and worth retrying,
> > and one which definitely isn't.
>
> ... and propagate that "more expressive" information through what, 3 or 4
> levels in the call chain?
>
> From include/linux/uaccess.h:
>
> * If raw_copy_{to,from}_user(to, from, size) returns N, size - N bytes starting
> * at to must become equal to the bytes fetched from the corresponding area
> * starting at from. All data past to + size - N must be left unmodified.
> *
> * If copying succeeds, the return value must be 0. If some data cannot be
> * fetched, it is permitted to copy less than had been fetched; the only
> * hard requirement is that not storing anything at all (i.e. returning size)
> * should happen only when nothing could be copied. In other words, you don't
> * have to squeeze as much as possible - it is allowed, but not necessary.
>
> arm64 instances violate the aforementioned hard requirement.

After reading the above a few more times, I think I get it. The key
sentence is: not storing anything at all should happen only when nothing
could be copied. In the MTE case, something can still be copied.
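To spell out why that hard requirement matters here: generic_perform_write() only makes forward progress when the copy reports at least one byte copied, and the fault-in check that guards the retry only probes the start of the user buffer. A minimal sketch of the retry loop (hypothetical helper names, not the exact mm/filemap.c code):

	size_t copied;

	do {
		/* probe succeeds: the first byte of ubuf is readable */
		if (fault_in_readable(ubuf, bytes))
			return -EFAULT;
		/*
		 * On arm64, an MTE tag-check fault in the middle of the
		 * buffer makes raw_copy_from_user() return `bytes`, i.e.
		 * "nothing copied", even though the leading bytes could
		 * have been copied...
		 */
		copied = bytes - raw_copy_from_user(kbuf, ubuf, bytes);
	} while (copied == 0);	/* ...so we spin here forever */

With the hard requirement honoured, copied is at least 1 whenever the probe succeeds, and the loop terminates.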

> Please, fix
> it there; it's not hard. All you need is an exception handler in .Ltiny15
> that would fall back to (short) byte-by-byte copy if the faulting address
> happened to be unaligned. Or just do one-byte copy, not that it had been
> considerably cheaper than a loop. Will be cheaper than propagating that extra
> information up the call chain, let alone paying for extra ->write_begin()
> and ->write_end() for single byte in generic_perform_write().

Yeah, it's definitely fixable in the arch code. I misread the above
requirements and thought it could be fixed in the core code.

Quick hack, though I think doing it in the actual exception handling path in the .S file makes more sense (and copy_to_user() needs the same fix for symmetry):

diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index b5f08621fa29..903f8a2a457b 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -415,6 +415,15 @@ extern unsigned long __must_check __arch_copy_from_user(void *to, const void __u
 	uaccess_ttbr0_enable();						\
 	__acfu_ret = __arch_copy_from_user((to),			\
 				__uaccess_mask_ptr(from), (n));		\
+	if (__acfu_ret == n) {						\
+		int __cfu_err = 0;					\
+		char __cfu_val;						\
+		__raw_get_mem("ldtr", __cfu_val, (char *)from, __cfu_err);\
+		if (!__cfu_err) {					\
+			*(char *)to = __cfu_val;			\
+			__acfu_ret--;					\
+		}							\
+	}								\
 	uaccess_ttbr0_disable();					\
 	__acfu_ret;							\
 })

Of course, it only fixes the MTE problem; I'll ignore the MMIO case
(though it may work in certain configurations, e.g. with synchronous
faults).
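For the record, the copy_to_user() side would need the symmetric hack in
the raw_copy_to_user() macro. An untested sketch, assuming
__raw_put_mem() takes the unprivileged "sttr" store the same way
__raw_get_mem() takes "ldtr" above, and with __actu_ret standing for the
return value of __arch_copy_to_user() in that macro:

	if (__actu_ret == n) {						\
		/* nothing stored: retry a single byte */		\
		int __ctu_err = 0;					\
		char __ctu_val = *(char *)from;				\
		__raw_put_mem("sttr", __ctu_val, (char *)to, __ctu_err);\
		if (!__ctu_err)						\
			__actu_ret--;					\
	}								\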

--
Catalin