[PATCH RFC 1/2] mm/page_alloc: safeguard free_pcppages_bulk

From: Chris Li
Date: Fri Aug 18 2023 - 02:07:20 EST


free_pcppages_bulk() can panic when pcp->count is
changed outside of this function, e.g. by a BPF program
attached to an ftrace function entry.

Commit c66a36af7ba3a628 fixed this on the BPF program side
by not allocating memory inside the spinlock.

But the kernel can still panic when loading a similar BPF
program without that fix. Here are the steps to reproduce it:

$ git checkout 19030564ab116757e32
$ cd tools/perf
$ make perf
$ ./perf lock con -ab -- ./perf bench sched messaging

You should be able to see the kernel panic within 20 seconds.

Here is what happened in the panic:

count = min(pcp->count, count);

free_pcppages_bulk() assumes that count and pcp->count are in
sync, i.e. that pcp->count is not changed outside of this function.

That assumption gets broken when the BPF lock contention code
allocates memory inside the spinlock: pcp->count ends up one less
than "count". The loop only checks against "count" and runs into
a deadloop, because pcp->count drops to zero and all lists are
empty while "count" is still positive. In the deadloop, pindex_min
can grow bigger than pindex_max and pindex_max can go negative.
The kernel panics when the pindex is used to access outside the
range of pcp->lists.

Notice that this is just one of the (buggy) BPF programs that
can break it. Besides the spinlock, there are other function
tracepoints under this function that can be hooked up to a BPF
program which allocates memory and changes pcp->count.

One argument is that BPF should not allocate memory under the
spinlock. On the other hand, the kernel can simply check
pcp->count inside the loop to avoid the panic.

Signed-off-by: Chris Li <chrisl@xxxxxxxxxx>
Reported-by: John Sperbeck <jsperbeck@xxxxxxxxxx>
---
mm/page_alloc.c | 8 +-------
1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1eb3864e1dbc7..347cb93081a02 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1215,12 +1215,6 @@ static void free_pcppages_bulk(struct zone *zone, int count,
bool isolated_pageblocks;
struct page *page;

- /*
- * Ensure proper count is passed which otherwise would stuck in the
- * below while (list_empty(list)) loop.
- */
- count = min(pcp->count, count);
-
/* Ensure requested pindex is drained first. */
pindex = pindex - 1;

@@ -1266,7 +1260,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,

__free_one_page(page, page_to_pfn(page), zone, order, mt, FPI_NONE);
trace_mm_page_pcpu_drain(page, order, mt);
- } while (count > 0 && !list_empty(list));
+ } while (count > 0 && pcp->count > 0 && !list_empty(list));
}

spin_unlock_irqrestore(&zone->lock, flags);

--
2.42.0.rc1.204.g551eb34607-goog