Re: [RFC PATCH v8 03/14] mm, x86: Add support for eXclusive Page Frame Ownership (XPFO)

From: Khalid Aziz
Date: Thu Feb 14 2019 - 12:15:24 EST


On 2/14/19 3:56 AM, Peter Zijlstra wrote:
> On Wed, Feb 13, 2019 at 05:01:26PM -0700, Khalid Aziz wrote:
>> static inline void *kmap_atomic(struct page *page)
>> {
>> +	void *kaddr;
>> +
>> 	preempt_disable();
>> 	pagefault_disable();
>> +	kaddr = page_address(page);
>> +	xpfo_kmap(kaddr, page);
>> +	return kaddr;
>> }
>> #define kmap_atomic_prot(page, prot) kmap_atomic(page)
>>
>> static inline void __kunmap_atomic(void *addr)
>> {
>> +	xpfo_kunmap(addr, virt_to_page(addr));
>> 	pagefault_enable();
>> 	preempt_enable();
>> }
>
> How is that supposed to work; IIRC kmap_atomic was supposed to be
> IRQ-safe.
>

Ah, the spin_lock in xpfo_kmap() can be problematic in interrupt
context. I will see if I can fix that.

Juerg, you wrote the original code and understand what you were trying
to do here. If you have ideas on how to tackle this, I would very much
appreciate it.
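
One direction I am considering is to switch the map/unmap paths to the
IRQ-saving lock variants. A rough, untested sketch for the kmap side,
keeping the existing lookup_xpfo()/set_kpte() helpers as they are:

void xpfo_kmap(void *kaddr, struct page *page)
{
	struct xpfo *xpfo;
	unsigned long flags;

	if (!static_branch_unlikely(&xpfo_inited))
		return;

	xpfo = lookup_xpfo(page);
	if (!xpfo || unlikely(!xpfo->inited) ||
	    !test_bit(XPFO_PAGE_USER, &xpfo->flags))
		return;

	/*
	 * Disable interrupts while holding maplock so that a kmap_atomic()
	 * issued from interrupt context cannot deadlock against us.
	 */
	spin_lock_irqsave(&xpfo->maplock, flags);

	if ((atomic_inc_return(&xpfo->mapcount) == 1) &&
	    test_and_clear_bit(XPFO_PAGE_UNMAPPED, &xpfo->flags))
		set_kpte(kaddr, page, PAGE_KERNEL);

	spin_unlock_irqrestore(&xpfo->maplock, flags);
}

xpfo_kunmap() would need the same treatment, and whether the TLB flush
done there is safe to issue with interrupts disabled also needs a closer
look, so this is only a starting point.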

>> +/* Per-page XPFO house-keeping data */
>> +struct xpfo {
>> +	unsigned long flags;	/* Page state */
>> +	bool inited;		/* Map counter and lock initialized */
>
> What's sizeof(_Bool) ? Why can't you use a bit in that flags word?
>
>> +	atomic_t mapcount;	/* Counter for balancing map/unmap requests */
>> +	spinlock_t maplock;	/* Lock to serialize map/unmap requests */
>> +};
>
> Without that bool, the structure would be 16 bytes on 64bit, which seems
> like a good number.
>

Patch 11 ("xpfo, mm: remove dependency on CONFIG_PAGE_EXTENSION") cleans
all this up. If the original authors of these two patches, Juerg
Haefliger and Julian Stecklina, are ok with it, I would like to combine
the two patches in one.
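
For reference, folding the "inited" state into the flags word, as you
suggest, would look roughly like this (sketch only; the bit name and
value are placeholders, and patch 11 may end up structuring this
differently):

/* Hypothetical bit; needs a free slot next to XPFO_PAGE_USER/UNMAPPED */
#define XPFO_PAGE_INITED	2

/* Per-page XPFO house-keeping data, 16 bytes on 64-bit */
struct xpfo {
	unsigned long flags;	/* Page state, including the inited bit */
	atomic_t mapcount;	/* Counter for balancing map/unmap requests */
	spinlock_t maplock;	/* Lock to serialize map/unmap requests */
};

The unlikely(!xpfo->inited) checks would then become
unlikely(!test_bit(XPFO_PAGE_INITED, &xpfo->flags)).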

>> +void xpfo_kmap(void *kaddr, struct page *page)
>> +{
>> +	struct xpfo *xpfo;
>> +
>> +	if (!static_branch_unlikely(&xpfo_inited))
>> +		return;
>> +
>> +	xpfo = lookup_xpfo(page);
>> +
>> +	/*
>> +	 * The page was allocated before page_ext was initialized (which means
>> +	 * it's a kernel page) or it's allocated to the kernel, so nothing to
>> +	 * do.
>> +	 */
>> +	if (!xpfo || unlikely(!xpfo->inited) ||
>> +	    !test_bit(XPFO_PAGE_USER, &xpfo->flags))
>> +		return;
>> +
>> +	spin_lock(&xpfo->maplock);
>> +
>> +	/*
>> +	 * The page was previously allocated to user space, so map it back
>> +	 * into the kernel. No TLB flush required.
>> +	 */
>> +	if ((atomic_inc_return(&xpfo->mapcount) == 1) &&
>> +	    test_and_clear_bit(XPFO_PAGE_UNMAPPED, &xpfo->flags))
>> +		set_kpte(kaddr, page, PAGE_KERNEL);
>> +
>> +	spin_unlock(&xpfo->maplock);
>> +}
>> +EXPORT_SYMBOL(xpfo_kmap);
>> +
>> +void xpfo_kunmap(void *kaddr, struct page *page)
>> +{
>> +	struct xpfo *xpfo;
>> +
>> +	if (!static_branch_unlikely(&xpfo_inited))
>> +		return;
>> +
>> +	xpfo = lookup_xpfo(page);
>> +
>> +	/*
>> +	 * The page was allocated before page_ext was initialized (which means
>> +	 * it's a kernel page) or it's allocated to the kernel, so nothing to
>> +	 * do.
>> +	 */
>> +	if (!xpfo || unlikely(!xpfo->inited) ||
>> +	    !test_bit(XPFO_PAGE_USER, &xpfo->flags))
>> +		return;
>> +
>> +	spin_lock(&xpfo->maplock);
>> +
>> +	/*
>> +	 * The page is to be allocated back to user space, so unmap it from the
>> +	 * kernel, flush the TLB and tag it as a user page.
>> +	 */
>> +	if (atomic_dec_return(&xpfo->mapcount) == 0) {
>> +		WARN(test_bit(XPFO_PAGE_UNMAPPED, &xpfo->flags),
>> +		     "xpfo: unmapping already unmapped page\n");
>> +		set_bit(XPFO_PAGE_UNMAPPED, &xpfo->flags);
>> +		set_kpte(kaddr, page, __pgprot(0));
>> +		xpfo_flush_kernel_tlb(page, 0);
>> +	}
>> +
>> +	spin_unlock(&xpfo->maplock);
>> +}
>> +EXPORT_SYMBOL(xpfo_kunmap);
>
> And these here things are most definitely not IRQ-safe.
>

Got it. I will work on this.

Thanks,
Khalid
