Re: [PATCH 06/11] change memory_is_poisoned_16 for aligned error

From: Ard Biesheuvel
Date: Tue Dec 05 2017 - 12:08:54 EST


On 5 December 2017 at 14:19, Liuwenliang (Abbott Liu)
<liuwenliang@xxxxxxxxxx> wrote:
> On Nov 23, 2017 20:30 Russell King - ARM Linux [mailto:linux@xxxxxxxxxxxxxxx] wrote:
>>On Thu, Oct 12, 2017 at 11:27:40AM +0000, Liuwenliang (Lamb) wrote:
>>> >> - I don't understand why this is necessary. memory_is_poisoned_16()
>>> >> already handles unaligned addresses?
>>> >>
>>> >> - If it's needed on ARM then presumably it will be needed on other
>>> >> architectures, so CONFIG_ARM is insufficiently general.
>>> >>
>>> >> - If the present memory_is_poisoned_16() indeed doesn't work on ARM,
>>> >> it would be better to generalize/fix it in some fashion rather than
>>> >> creating a new variant of the function.
>>>
>>>
>>> >Yes, I think it will be better to fix the current function rather then
>>> >have 2 slightly different copies with ifdef's.
>>> >Will something along these lines work for arm? 16-byte accesses are
>>> >not too common, so it should not be a performance problem. And
>>> >probably modern compilers can turn 2 1-byte checks into a 2-byte check
>>> >where safe (x86).
>>>
>>> >static __always_inline bool memory_is_poisoned_16(unsigned long addr)
>>> >{
>>> >        u8 *shadow_addr = (u8 *)kasan_mem_to_shadow((void *)addr);
>>> >
>>> >        if (shadow_addr[0] || shadow_addr[1])
>>> >                return true;
>>> >        /* Unaligned 16-bytes access maps into 3 shadow bytes. */
>>> >        if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
>>> >                return memory_is_poisoned_1(addr + 15);
>>> >        return false;
>>> >}
>>>
>>> Thanks for Andrew Morton and Dmitry Vyukov's review.
>>> If the parameter addr=0xc0000008, then in the current function:
>>> static __always_inline bool memory_is_poisoned_16(unsigned long addr)
>>> {
>>> --- // shadow_addr = (u16 *)(KASAN_OFFSET + 0x18000001 (= 0xc0000008 >> 3))
>>> --- // is not aligned to 2 bytes.
>>>         u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr);
>>>
>>>         /* Unaligned 16-bytes access maps into 3 shadow bytes. */
>>>         if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
>>>                 return *shadow_addr || memory_is_poisoned_1(addr + 15);
>>> ---- // Here the dereference faults on ARM, especially while the kernel is
>>> ---- // still booting: the unaligned access raises a Data Abort exception,
>>> ---- // whose handler has not been set up yet at that early stage.
>>>         return *shadow_addr;
>>> }
>>>
>>> I also think it is better to fix this problem.
>
>>What about using get_unaligned() ?
>
> Thanks for your review.
>
> I think it is a good idea to use get_unaligned(). But ARMv7 selects
> CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> (arch/arm/Kconfig: select HAVE_EFFICIENT_UNALIGNED_ACCESS if (CPU_V6 || CPU_V6K || CPU_V7) && MMU).
> So on ARMv7 the code:
> u16 shadow_val = get_unaligned((u16 *)kasan_mem_to_shadow((void *)addr));
> is equivalent to:
> u16 shadow_val = *(u16 *)kasan_mem_to_shadow((void *)addr);
>

No it does not. The compiler may merge adjacent accesses into ldm or
ldrd instructions, which do not tolerate misalignment regardless of
the SCTLR.A bit.

This is actually something we may need to fix for ARM, i.e., drop
HAVE_EFFICIENT_UNALIGNED_ACCESS altogether, or carefully review the
way it is used currently.

> On ARMv7, if SCTLR.A is 0, unaligned access is permitted. Here is the description from the
> ARM(r) Architecture Reference Manual, ARMv7-A and ARMv7-R edition:
>
<snip>

Could you *please* stop quoting the ARM ARM at us? People who are
seeking detailed information like that will know where to find it.

--
Ard.