Re: [RFC PATCH v7 14/16] EXPERIMENTAL: xpfo, mm: optimize spin lock usage in xpfo_kmap

From: Khalid Aziz
Date: Thu Jan 17 2019 - 10:15:56 EST


On 1/16/19 5:18 PM, Laura Abbott wrote:
> On 1/10/19 1:09 PM, Khalid Aziz wrote:
>> From: Julian Stecklina <jsteckli@xxxxxxxxx>
>>
>> We can reduce spin lock usage in xpfo_kmap to the 0->1 transition of
>> the mapcount. This means that xpfo_kmap() can now race and that we
>> get spurious page faults.
>>
>> The page fault handler helps the system make forward progress by
>> fixing the page table instead of allowing repeated page faults until
>> the right xpfo_kmap went through.
>>
>> Model-checked with up to 4 concurrent callers with Spin.
>>
>
> This needs the spurious check for arm64 as well. This at
> least gets me booting but could probably use more review:
>
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index 7d9571f4ae3d..8f425848cbb9 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -32,6 +32,7 @@
> 	#include <linux/perf_event.h>
> 	#include <linux/preempt.h>
> 	#include <linux/hugetlb.h>
> +#include <linux/xpfo.h>
> 	
> 	#include <asm/bug.h>
> 	#include <asm/cmpxchg.h>
> @@ -289,6 +290,9 @@ static void __do_kernel_fault(unsigned long addr, unsigned int esr,
> 	if (!is_el1_instruction_abort(esr) && fixup_exception(regs))
> 		return;
> 	
> +	if (xpfo_spurious_fault(addr))
> +		return;
> +
> 	if (is_el1_permission_fault(addr, esr, regs)) {
> 		if (esr & ESR_ELx_WNR)
> 			msg = "write to read-only memory";
>
>

That makes sense. Thanks for debugging this. I will add this to patch 14
("EXPERIMENTAL: xpfo, mm: optimize spin lock usage in xpfo_kmap").

Thanks,
Khalid