Re: [PATCH UPDATED] percpu: use dynamic percpu allocator as the default percpu allocator

From: Christoph Lameter
Date: Tue Mar 31 2009 - 12:19:48 EST


I reviewed the new per cpu allocator. Quite extensive work. Found two
issues that I would like to have addressed. But basically:

Reviewed-by: Christoph Lameter <cl@xxxxxxxxx>


Two issues:

1. Lots of unnecessary uses of __read_mostly for local static variables that
are not on the hot path. Patch follows in this message.

2. The rbtree is not necessary since we can link back through an available
field in struct page (rough sketch below). Patch in the next message.
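
Roughly what I have in mind is something like the following (helper names
and the choice of page->index are illustrative here, not the actual patch):

	/*
	 * Instead of keeping a separate rbtree keyed by address, stash a
	 * pointer to the owning chunk in a struct page field that is unused
	 * for percpu backing pages (page->index assumed free here), and read
	 * it back when translating an address to its chunk on the free path.
	 */
	static void pcpu_set_page_chunk(struct page *page, struct pcpu_chunk *pcpu)
	{
		page->index = (unsigned long)pcpu;
	}

	static struct pcpu_chunk *pcpu_get_page_chunk(struct page *page)
	{
		return (struct pcpu_chunk *)page->index;
	}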



Subject: Remove __read_mostly from percpu allocator symbols.

__read_mostly is reserved for hot code path items. The percpu variables
are used in the allocation and freeing of percpu items, which are typically
not hot code paths. Dropping the annotation also reduces the cache footprint
of the percpu allocator because the variables stay placed near each other;
__read_mostly would move each annotated variable into a different linker
section, away from the rest of the allocator's data.
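
For reference, on architectures that provide it, __read_mostly is defined
along these lines (sketch; the exact definition lives in the arch cache.h
and falls back to an empty annotation elsewhere):

	/* e.g. the x86 definition: place the variable in a separate section */
	#define __read_mostly __attribute__((__section__(".data.read_mostly")))

	/*
	 * Without the annotation, the allocator's globals stay together in
	 * .data and are likely to share cache lines:
	 */
	static int pcpu_unit_pages;	/* sits next to the other pcpu_* fields */
	static int pcpu_unit_size;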

Signed-off-by: Christoph Lameter <cl@xxxxxxxxx>

---
mm/percpu.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)

Index: linux-2.6/mm/percpu.c
===================================================================
--- linux-2.6.orig/mm/percpu.c 2009-03-31 10:59:34.000000000 -0500
+++ linux-2.6/mm/percpu.c 2009-03-31 11:11:22.000000000 -0500
@@ -100,14 +100,14 @@ struct pcpu_chunk {
struct page *page_ar[]; /* #cpus * UNIT_PAGES */
};

-static int pcpu_unit_pages __read_mostly;
-static int pcpu_unit_size __read_mostly;
-static int pcpu_chunk_size __read_mostly;
-static int pcpu_nr_slots __read_mostly;
-static size_t pcpu_chunk_struct_size __read_mostly;
+static int pcpu_unit_pages;
+static int pcpu_unit_size;
+static int pcpu_chunk_size;
+static int pcpu_nr_slots;
+static size_t pcpu_chunk_struct_size;

/* the address of the first chunk which starts with the kernel static area */
-void *pcpu_base_addr __read_mostly;
+void *pcpu_base_addr;
EXPORT_SYMBOL_GPL(pcpu_base_addr);

/* optional reserved chunk, only accessible for reserved allocations */
@@ -139,7 +139,7 @@ static int pcpu_reserved_chunk_limit;
static DEFINE_MUTEX(pcpu_alloc_mutex); /* protects whole alloc and reclaim */
static DEFINE_SPINLOCK(pcpu_lock); /* protects index data structures */

-static struct list_head *pcpu_slot __read_mostly; /* chunk list slots */
+static struct list_head *pcpu_slot; /* chunk list slots */
static struct rb_root pcpu_addr_root = RB_ROOT; /* chunks by address */

/* reclaim work to release fully free chunks, scheduled from free path */
--