[PATCH] x86/mm: Clean up the __[PHYSICAL/VIRTUAL]_MASK_SHIFT definitions a bit

From: Ingo Molnar
Date: Thu May 09 2019 - 05:02:46 EST



* Yury Norov <yury.norov@xxxxxxxxx> wrote:

> __VIRTUAL_MASK_SHIFT is defined twice to the same value in
> arch/x86/include/asm/page_32_types.h. Fix it.
>
> Signed-off-by: Yury Norov <ynorov@xxxxxxxxxxx>
> ---
> arch/x86/include/asm/page_32_types.h | 5 ++---
> 1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/include/asm/page_32_types.h b/arch/x86/include/asm/page_32_types.h
> index 0d5c739eebd7..9bfac5c80d89 100644
> --- a/arch/x86/include/asm/page_32_types.h
> +++ b/arch/x86/include/asm/page_32_types.h
> @@ -28,6 +28,8 @@
> #define MCE_STACK 0
> #define N_EXCEPTION_STACKS 1
>
> +#define __VIRTUAL_MASK_SHIFT 32
> +
> #ifdef CONFIG_X86_PAE
> /*
> * This is beyond the 44 bit limit imposed by the 32bit long pfns,
> @@ -36,11 +38,8 @@
> * The real limit is still 44 bits.
> */
> #define __PHYSICAL_MASK_SHIFT 52
> -#define __VIRTUAL_MASK_SHIFT 32
> -
> #else /* !CONFIG_X86_PAE */
> #define __PHYSICAL_MASK_SHIFT 32
> -#define __VIRTUAL_MASK_SHIFT 32
> #endif /* CONFIG_X86_PAE */

I think it's clearer to keep the __VIRTUAL_MASK_SHIFT definitions right
next to where the physical mask shift is defined.

How about the patch below? It does away with the weird formatting and
cleans up both the comments and the style of the definitions:

/*
* 52 bits on PAE is beyond the 44-bit limit imposed by the
* 32-bit long PFNs, but we need the full mask to make sure
* inverted PROT_NONE entries have all the host bits set
* in a guest. The real limit is still 44 bits.
*/
#ifdef CONFIG_X86_PAE
# define __PHYSICAL_MASK_SHIFT 52
# define __VIRTUAL_MASK_SHIFT 32
#else
# define __PHYSICAL_MASK_SHIFT 32
# define __VIRTUAL_MASK_SHIFT 32
#endif

?
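
For reference (not part of the patch): the 44-bit figure in the comment
comes from 32-bit long PFNs plus the 12-bit page offset, 32 + 12 = 44,
while a PAE PTE reserves 52 physical-address bits. A minimal standalone
sketch of how such shifts end up being consumed as mask widths, simplified
and not the kernel's exact macros:

/* Simplified illustration, not kernel code. */
#include <stdio.h>
#include <inttypes.h>

#define PAGE_SHIFT		12	/* 4 KiB pages */
#define __PHYSICAL_MASK_SHIFT	52	/* full PAE PTE physical-address width */
#define __VIRTUAL_MASK_SHIFT	32	/* 32-bit virtual address space */

int main(void)
{
	/* The shift values only ever matter as mask widths. */
	uint64_t physical_mask = (UINT64_C(1) << __PHYSICAL_MASK_SHIFT) - 1;
	uint64_t virtual_mask  = (UINT64_C(1) << __VIRTUAL_MASK_SHIFT) - 1;

	printf("__PHYSICAL_MASK = 0x%013" PRIx64 "\n", physical_mask);
	printf("__VIRTUAL_MASK  = 0x%08"  PRIx64 "\n", virtual_mask);

	/* 32-bit PFN + 12-bit page offset: the real PAE limit is 44 bits. */
	printf("real PAE limit  = %d bits\n", 32 + PAGE_SHIFT);

	return 0;
}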

Thanks,

Ingo

===============>
From: Ingo Molnar <mingo@xxxxxxxxxx>
Date: Thu, 9 May 2019 10:59:44 +0200
Subject: [PATCH] x86/mm: Clean up the __[PHYSICAL/VIRTUAL]_MASK_SHIFT definitions a bit

Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
---
arch/x86/include/asm/page_32_types.h | 23 +++++++++++------------
1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/page_32_types.h b/arch/x86/include/asm/page_32_types.h
index 565ad755c785..009e96d4b6d4 100644
--- a/arch/x86/include/asm/page_32_types.h
+++ b/arch/x86/include/asm/page_32_types.h
@@ -26,20 +26,19 @@

#define N_EXCEPTION_STACKS 1

-#ifdef CONFIG_X86_PAE
/*
- * This is beyond the 44 bit limit imposed by the 32bit long pfns,
- * but we need the full mask to make sure inverted PROT_NONE
- * entries have all the host bits set in a guest.
- * The real limit is still 44 bits.
+ * 52 bits on PAE is beyond the 44-bit limit imposed by the
+ * 32-bit long PFNs, but we need the full mask to make sure
+ * inverted PROT_NONE entries have all the host bits set
+ * in a guest. The real limit is still 44 bits.
*/
-#define __PHYSICAL_MASK_SHIFT 52
-#define __VIRTUAL_MASK_SHIFT 32
-
-#else /* !CONFIG_X86_PAE */
-#define __PHYSICAL_MASK_SHIFT 32
-#define __VIRTUAL_MASK_SHIFT 32
-#endif /* CONFIG_X86_PAE */
+#ifdef CONFIG_X86_PAE
+# define __PHYSICAL_MASK_SHIFT 52
+# define __VIRTUAL_MASK_SHIFT 32
+#else
+# define __PHYSICAL_MASK_SHIFT 32
+# define __VIRTUAL_MASK_SHIFT 32
+#endif

/*
* Kernel image size is limited to 512 MB (see in arch/x86/kernel/head_32.S)