Re: [PATCH V3 04/10] x86/pks: Preserve the PKRS MSR on context switch

From: Thomas Gleixner
Date: Fri Dec 18 2020 - 17:45:46 EST


On Fri, Dec 18 2020 at 13:58, Dan Williams wrote:
> On Fri, Dec 18, 2020 at 1:06 PM Thomas Gleixner <tglx@xxxxxxxxxxxxx> wrote:
>> kmap_local() is fine. That can work automatically because it's strictly
>> local to the context which does the mapping.
>>
>> kmap() is dubious because it's a 'global' mapping as dictated by
>> HIGHMEM. So doing the RELAXED mode for kmap() is sensible, I think, to
>> identify cases where the mapped address is really handed to a different
>> execution context. We want to see those cases and analyse whether this
>> can't be solved in a different way. That's why I suggested to do a
>> warning in that case.
>>
>> Also vs. the DAX use case I really meant the code in fs/dax and
>> drivers/dax/ itself which is handling this via dax_read_[un]lock.
>>
>> Does that make more sense?
>
> Yup, got it. The dax code can be precise wrt PKS in a way that
> kmap_local() cannot.

Which makes me wonder whether we should have kmap_local_for_read()
or something like that, which could obviously only be RO enforced for
the real HIGHMEM case or via the (for now x86 only) enforced kmap_local()
debug mechanics on 64bit.

So for the !highmem case it would not magically make the existing kernel
mapping RO, but this could be forwarded to the PKS protection. Aside
from that, it's a nice annotation in the code.

That could be used right away for all the kmap[_atomic] -> kmap_local
conversions.
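
Purely as an illustration (kmap_local_page_for_read() is the helper from
the sketch below; copy_from_page() is a made-up caller, not an existing
conversion), a read-side user would then look like:

	static void copy_from_page(struct page *page, void *dst)
	{
		void *src;

		/*
		 * Read-only mapping: real RO for HIGHMEM / the kmap_local
		 * debug mechanics, PKS enforceable later on 64bit.
		 */
		src = kmap_local_page_for_read(page);
		memcpy(dst, src, PAGE_SIZE);
		kunmap_local(src);
	}

For the !HIGHMEM case without the debug mechanics this is just a plain
kmap_local_page() (see the second hunk), so it costs nothing until
something like PKS hooks into kmap_prot_ro.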

Thanks,

tglx
---
include/linux/highmem-internal.h | 14 ++++++++++++++
1 file changed, 14 insertions(+)

--- a/include/linux/highmem-internal.h
+++ b/include/linux/highmem-internal.h
@@ -32,6 +32,10 @@ static inline void kmap_flush_tlb(unsign
#define kmap_prot PAGE_KERNEL
#endif

+#ifndef kmap_prot_ro
+#define kmap_prot_ro PAGE_KERNEL_RO
+#endif
+
void *kmap_high(struct page *page);
void kunmap_high(struct page *page);
void __kmap_flush_unused(void);
@@ -73,6 +77,11 @@ static inline void *kmap_local_page(stru
return __kmap_local_page_prot(page, kmap_prot);
}

+static inline void *kmap_local_page_for_read(struct page *page)
+{
+	return __kmap_local_page_prot(page, kmap_prot_ro);
+}
+
static inline void *kmap_local_page_prot(struct page *page, pgprot_t prot)
{
return __kmap_local_page_prot(page, prot);
@@ -169,6 +178,11 @@ static inline void *kmap_local_page_prot
{
return kmap_local_page(page);
}
+
+static inline void *kmap_local_page_for_read(struct page *page)
+{
+	return kmap_local_page(page);
+}

static inline void *kmap_local_pfn(unsigned long pfn)
{