RE: [PATCH Part2 v6 09/49] x86/fault: Add support to handle the RMP fault for user address

From: Kalra, Ashish
Date: Thu Sep 01 2022 - 16:32:46 EST



Hello Boris,

>> It is basically an index into the 4K page within the hugepage mapped
>> in the RMP table or in other words an index into the RMP table entry
>> for 4K page(s) corresponding to a hugepage.

>So pte_index(address) and for 1G pages, pmd_index(address).

>So no reinventing the wheel if we already have helpers for that.

>Yes that makes sense and pte_index(address) is exactly what is required for 2M hugepages.

>Will use pte_index() for 2M pages and pmd_index() for 1G pages.

Had a relook into this.

As I mentioned earlier, this computes an index to a 4K page within a hugepage mapping;
therefore, while pte_index() works for 2M pages, pmd_index() will not work for 1G pages.

We basically need to do:

	pfn |= (address >> PAGE_SHIFT) & mask;

where mask is (the number of 4K pages per hugepage) - 1.

So this still needs the original code, but with a fix for the mask computation as follows:

static inline size_t pages_per_hpage(int level)
{
	return page_level_size(level) / PAGE_SIZE;
}

static int handle_user_rmp_page_fault(struct pt_regs *regs, unsigned long error_code,
				      unsigned long address)
{
	...
	pfn = pte_pfn(*pte);

	/* If it's a large page then calculate the fault pfn */
	if (level > PG_LEVEL_4K) {
+		/*
+		 * Index into the 4K page within the hugepage mapping
+		 * in the RMP table.
+		 */
		unsigned long mask;

-		mask = pages_per_hpage(level) - pages_per_hpage(level - 1);
+		mask = pages_per_hpage(level) - 1;
		pfn |= (address >> PAGE_SHIFT) & mask;


Thanks,
Ashish