Re: [PATCH v8 13/40] x86/kernel: Make the bss.decrypted section shared in RMP table

From: Tom Lendacky
Date: Thu Jan 06 2022 - 14:06:25 EST


On 1/6/22 11:40 AM, Venu Busireddy wrote:
> On 2022-01-05 15:39:22 -0600, Brijesh Singh wrote:
>>
>>
>> On 1/5/22 2:27 PM, Dave Hansen wrote:
>>> On 1/5/22 11:52, Brijesh Singh wrote:
>>>>>           for (; vaddr < vaddr_end; vaddr += PMD_SIZE) {
>>>>> +                /*
>>>>> +                 * When SEV-SNP is active then transition the page
>>>>> +                 * to shared in the RMP table so that it is
>>>>> +                 * consistent with the page table attribute change.
>>>>> +                 */
>>>>> +                early_snp_set_memory_shared(__pa(vaddr), __pa(vaddr),
>>>>> +                                            PTRS_PER_PMD);
>>>>>
>>>>> Shouldn't the first argument be vaddr as below?
>>>>
>>>> Nope, sme_postprocess_startup() is called while we are fixing the
>>>> initial page table and running with identity mapping (so va == pa).
>>>
>>> I'm not sure I've ever seen a line of code that wanted a comment so badly.
>>
>> early_snp_set_memory_shared() calls the PVALIDATE instruction to clear
>> the validated bit from the BSS region. The PVALIDATE instruction needs a
>> virtual address, so we need to use the identity-mapped virtual address so
>> that PVALIDATE can clear the validated bit. I will add more comments to
>> clarify it.
>
> Looking forward to seeing your final comments explaining this. I still
> can't follow why, when PVALIDATE needs the virtual address, we are doing
> a __pa() on the vaddr.

It's because of the phase of booting that the kernel is in. At this point, the kernel is running in identity mapped mode (VA == PA). The __start_bss_decrypted address is a regular kernel address, e.g. for the kernel I'm on it is 0xffffffffa7600000. Since the PVALIDATE instruction requires a valid virtual address, the code needs to perform a __pa() against __start_bss_decrypted to get the identity mapped virtual address that is currently in place.

It is not until the .Ljump_to_C_code label in head_64.S that the addressing switches from identity mapped to kernel addresses.

Thanks,
Tom

