Re: [PATCH 3/3] mm, page_alloc: reduce static keys in prep_new_page()

From: Vlastimil Babka
Date: Tue Oct 27 2020 - 13:42:13 EST


On 10/27/20 2:32 PM, Vlastimil Babka wrote:
> So my conclusion:
> - We can remove PAGE_POISONING_NO_SANITY because it only makes sense with
>   PAGE_POISONING_ZERO, and we can use init_on_free instead

Note that for this we first have to make sanity checking compatible with
hibernation, but that should be easy, as the zeroing variants have already
paved the way. The patch below will be added to the next version of the
series (a standalone sketch of the poison/verify idea appears after the patch):

From 44474ee27c4f5248061ea2e5bbc2aeefc91bcdfc Mon Sep 17 00:00:00 2001
From: Vlastimil Babka <vbabka@xxxxxxx>
Date: Tue, 27 Oct 2020 18:25:17 +0100
Subject: [PATCH] kernel/power: allow hibernation with page_poison sanity
checking

Page poisoning used to be incompatible with hibernation, as the state of
poisoned pages was lost after resume, so enabling CONFIG_HIBERNATION forces
CONFIG_PAGE_POISONING_NO_SANITY. For the same reason, the poisoning-with-zeroes
variant CONFIG_PAGE_POISONING_ZERO used to disable hibernation. The latter
restriction was removed by commit 1ad1410f632d ("PM / Hibernate: allow
hibernation with PAGE_POISONING_ZERO"), and similarly for init_on_free by
commit 18451f9f9e58 ("PM: hibernate: fix crashes with init_on_free=1"), by
making sure free pages are cleared after resume.

We can use the same mechanism to instead poison free pages with PAGE_POISON
after resume. This covers both zero and 0xAA patterns. Thus we can remove the
Kconfig restriction that disables page poison sanity checking when hibernation
is enabled.

Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
---
 kernel/power/hibernate.c |  2 +-
 kernel/power/power.h     |  2 +-
 kernel/power/snapshot.c  | 14 ++++++++++----
 mm/Kconfig.debug         |  1 -
 4 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
index 2fc7d509a34f..da0b41914177 100644
--- a/kernel/power/hibernate.c
+++ b/kernel/power/hibernate.c
@@ -326,7 +326,7 @@ static int create_image(int platform_mode)
 
 	if (!in_suspend) {
 		events_check_enabled = false;
-		clear_free_pages();
+		clear_or_poison_free_pages();
 	}
 
 	platform_leave(platform_mode);
diff --git a/kernel/power/power.h b/kernel/power/power.h
index 24f12d534515..778bf431ec02 100644
--- a/kernel/power/power.h
+++ b/kernel/power/power.h
@@ -106,7 +106,7 @@ extern int create_basic_memory_bitmaps(void);
 extern void free_basic_memory_bitmaps(void);
 extern int hibernate_preallocate_memory(void);
 
-extern void clear_free_pages(void);
+extern void clear_or_poison_free_pages(void);
 
 /**
  * Auxiliary structure used for reading the snapshot image data and
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index 46b1804c1ddf..6b1c84afa891 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -1144,7 +1144,7 @@ void free_basic_memory_bitmaps(void)
 	pr_debug("Basic memory bitmaps freed\n");
 }
 
-void clear_free_pages(void)
+void clear_or_poison_free_pages(void)
 {
 	struct memory_bitmap *bm = free_pages_map;
 	unsigned long pfn;
@@ -1152,12 +1152,18 @@ void clear_free_pages(void)
 	if (WARN_ON(!(free_pages_map)))
 		return;
 
-	if (IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) || want_init_on_free()) {
+	if (page_poisoning_enabled() || want_init_on_free()) {
 		memory_bm_position_reset(bm);
 		pfn = memory_bm_next_pfn(bm);
 		while (pfn != BM_END_OF_MAP) {
-			if (pfn_valid(pfn))
-				clear_highpage(pfn_to_page(pfn));
+			if (pfn_valid(pfn)) {
+				struct page *page = pfn_to_page(pfn);
+
+				if (page_poisoning_enabled_static())
+					kernel_poison_pages(page, 1);
+				else if (want_init_on_free())
+					clear_highpage(page);
+			}
 
 			pfn = memory_bm_next_pfn(bm);
 		}
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index 864f129f1937..c57786ad5be9 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -64,7 +64,6 @@ config PAGE_OWNER
 
 config PAGE_POISONING
 	bool "Poison pages after freeing"
-	select PAGE_POISONING_NO_SANITY if HIBERNATION
 	help
 	  Fill the pages with poison patterns after free_pages() and verify
 	  the patterns before alloc_pages. The filling of the memory helps
--
2.29.0
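
For readers less familiar with page poisoning, here is a minimal user-space
sketch (not kernel code) of the poison-on-free / verify-on-alloc idea the patch
has to keep working across resume. PAGE_POISON_PATTERN, poison_on_free() and
verify_on_alloc() below are illustrative stand-ins, not the kernel's actual
symbols:

/*
 * Illustrative user-space sketch only; names are stand-ins for the
 * kernel's page_poison machinery.
 */
#include <stdio.h>
#include <string.h>
#include <stddef.h>

#define PAGE_SIZE		4096
#define PAGE_POISON_PATTERN	0xaa	/* 0x00 when zero-poisoning is used */

static unsigned char page[PAGE_SIZE];

/* What the allocator does when a page is freed: fill it with the pattern. */
static void poison_on_free(unsigned char *p)
{
	memset(p, PAGE_POISON_PATTERN, PAGE_SIZE);
}

/*
 * What the sanity check does when the page is allocated again: verify the
 * pattern is still intact, which catches writes to freed memory.
 */
static int verify_on_alloc(const unsigned char *p)
{
	size_t i;

	for (i = 0; i < PAGE_SIZE; i++)
		if (p[i] != PAGE_POISON_PATTERN)
			return 0;
	return 1;
}

int main(void)
{
	poison_on_free(page);

	/*
	 * Hibernation restores the image but not the contents of free pages,
	 * so without rewriting the pattern on resume (which is what
	 * clear_or_poison_free_pages() now does) the next verification would
	 * report false positives.
	 */
	printf("pattern intact: %s\n", verify_on_alloc(page) ? "yes" : "no");
	return 0;
}

The patch keeps that verification meaningful after resume by walking the
free-pages bitmap and re-poisoning each free page (or zeroing it, for
init_on_free) before the checks run again.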