Re: [RFC PATCH v1 00/17] ban the use of _PAGE_XXX flags outside platform specific code

From: Christophe LEROY
Date: Wed Sep 12 2018 - 12:05:29 EST




On 10/09/2018 at 08:08, Aneesh Kumar K.V wrote:
Christophe Leroy <christophe.leroy@xxxxxx> writes:

On 09/06/2018 09:58 AM, Aneesh Kumar K.V wrote:
Christophe Leroy <christophe.leroy@xxxxxx> writes:

Today, flags like _PAGE_RW or _PAGE_USER are used throughout
common parts of the code.
Using them directly in common code has proven to lead to
mistakes or misbehaviour, because their use is not always as trivial
as one might think.

For instance, (flags & _PAGE_USER) == 0 isn't enough to tell
that a page is a kernel page, because some targets use
_PAGE_PRIVILEGED and not _PAGE_USER, so the test has to be
(flags & (_PAGE_USER | _PAGE_PRIVILEGED)) == _PAGE_PRIVILEGED
(a per-platform accessor, sketched after the list below, hides this).
This has two (bad) consequences:

- All targets must define every bit, even the unsupported ones,
leading to a lot of useless #define _PAGE_XXX 0
- If someone forgets to take all possible _PAGE_XXX bits into account
for a given test, we can get unexpected behaviour on some targets.
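
To illustrate, here is a minimal sketch of the accessor approach; the
CONFIG_HAS_PAGE_PRIVILEGED symbol is hypothetical and only used to show
both variants side by side:

/* Sketch only: per-platform pte_user() accessors hide which bit encodes
 * the kernel/user distinction. */
#ifdef CONFIG_HAS_PAGE_PRIVILEGED
static inline bool pte_user(pte_t pte)
{
	/* Kernel pages carry _PAGE_PRIVILEGED; user pages do not. */
	return !(pte_val(pte) & _PAGE_PRIVILEGED);
}
#else
static inline bool pte_user(pte_t pte)
{
	/* User pages carry _PAGE_USER; kernel pages do not. */
	return !!(pte_val(pte) & _PAGE_USER);
}
#endif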

This becomes even more complex when we come to using _PAGE_RW.
Testing (flags & _PAGE_RW) is not enough to tell whether a page
is writable or not, because (see the sketch after this list):

- Some targets have _PAGE_RO instead, which has to be clear for
a page to be writable
- Some targets have _PAGE_R and _PAGE_W, in which case
_PAGE_RW = _PAGE_R | _PAGE_W
- Even knowing whether a page is readable is not always trivial, because:
- Some targets require checking that _PAGE_R is set to ensure the page
is readable
- Some targets require checking that _PAGE_NA is not set
- Some targets require checking that _PAGE_RO or _PAGE_RW is set

Etc ....
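
Again as a sketch only (the config symbols are hypothetical, used here
just to show the variants in one place), a per-platform pte_write()
accessor absorbs those differences:

/* Sketch: per-platform pte_write() hides the RW/RO/R+W variations. */
#if defined(CONFIG_HAS_PAGE_RO)
static inline bool pte_write(pte_t pte)
{
	/* Writable unless the read-only bit is set. */
	return !(pte_val(pte) & _PAGE_RO);
}
#elif defined(CONFIG_HAS_PAGE_R_AND_W)
static inline bool pte_write(pte_t pte)
{
	/* _PAGE_RW = _PAGE_R | _PAGE_W: both bits must be set. */
	return (pte_val(pte) & _PAGE_RW) == _PAGE_RW;
}
#else
static inline bool pte_write(pte_t pte)
{
	return !!(pte_val(pte) & _PAGE_RW);
}
#endif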

In order to work around all those issues and minimise the risk of errors,
this series aims at removing all uses of _PAGE_XXX flags from common powerpc
code and always using pte_xxx() and pte_mkxxx() accessors instead. Those
accessors are then defined in target-specific parts of the kernel code.
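
Concretely, the conversion in common code has this shape; the helper
names below are made up for illustration, not taken from a hunk of the
series:

/* Before: common code manipulates a platform-specific bit directly;
 * this silently does nothing on targets where _PAGE_RW is a dummy 0. */
static inline pte_t make_writable_old(pte_t pte)
{
	return __pte(pte_val(pte) | _PAGE_RW);
}

/* After: common code uses the accessor, defined by each platform. */
static inline pte_t make_writable_new(pte_t pte)
{
	return pte_mkwrite(pte);
}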

The series is really good. It also helps code readability. A few things:
I am not sure there is a way to reduce the overhead of

- access = _PAGE_EXEC;
+ access = pte_val(pte_mkexec(__pte(0)));

considering we have multiple big-endian to little-endian conversions there
for book3s 64.
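
For reference, a simplified sketch of the big-endian pte wrappers, roughly
following arch/powerpc/include/asm/pgtable-be-types.h, showing where those
conversions come from:

/* Simplified sketch: each __pte()/pte_val() pair in
 * pte_val(pte_mkexec(__pte(0))) is a cpu_to_be64()/be64_to_cpu()
 * round trip. */
typedef struct { __be64 pte; } pte_t;

#define __pte(x)	((pte_t) { cpu_to_be64(x) })

static inline unsigned long pte_val(pte_t x)
{
	return be64_to_cpu(x.pte);
}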

Thanks for the review.

For the above, I propose the following:

diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index f23a89d8e4ce..904ac9c84ea5 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -1482,7 +1482,7 @@ static bool should_hash_preload(struct mm_struct *mm, unsigned long ea)
#endif

void hash_preload(struct mm_struct *mm, unsigned long ea,
- unsigned long access, unsigned long trap)
+ bool is_exec, unsigned long trap)
{
int hugepage_shift;
unsigned long vsid;
@@ -1490,6 +1490,7 @@ void hash_preload(struct mm_struct *mm, unsigned long ea,
pte_t *ptep;
unsigned long flags;
int rc, ssize, update_flags = 0;
+ unsigned long access = is_exec ? _PAGE_EXEC : 0;


I guess it would be better if we do

unsigned long access = _PAGE_PRESENT | _PAGE_READ;

if (is_exec)
	access |= _PAGE_EXEC;

That will also bring it closer to __hash_page. I agree that we should
always find _PAGE_PRESENT and _PAGE_READ set, because we just handled
the page fault.
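
In other words, something like this untested sketch (the helper name is
made up):

/* Sketch: build the access mask hash_preload() passes on, starting from
 * the flags the just-handled fault guarantees and adding execute
 * permission when requested. */
static inline unsigned long hash_preload_access(bool is_exec)
{
	unsigned long access = _PAGE_PRESENT | _PAGE_READ;

	if (is_exec)
		access |= _PAGE_EXEC;

	return access;
}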


Ok, I did it in v2, can you have a look (patch 11/24)?

Christophe