Re: High order page allocs - final thought for tonight

From: Alex Bligh - linux-kernel (linux-kernel@alex.org.uk)
Date: Sun Sep 02 2001 - 18:31:16 EST


Final thought for tonight & off to bed.

For order=0 returns to the list, it may help if we add
the page to the back (rather than the front) of the
list when its buddy is on the InactiveClean or
InactiveDirty list, as page_launder will find those
pages first and may free the buddy soon. It's also a
nice quick check. We keep the previous heuristic for
order>0 returns to the list.
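
To make the order-0 rule concrete, here's a minimal
userspace sketch of the head-or-tail decision. The struct
and function names are invented for illustration; only the
test itself mirrors what the patch below does:

#include <stdio.h>

/* Toy stand-ins for the two page flags the heuristic
 * tests; this is an illustration, not kernel code. */
struct toy_page {
	int inactive_clean;
	int inactive_dirty;
};

/* Order-0 free: if the still-allocated buddy is on one of
 * the inactive lists, page_launder should reach it soon, so
 * queue the freed page at the tail; that keeps it
 * unallocated a little longer and gives the pair a chance
 * to merge. */
static int add_to_front(const struct toy_page *buddy)
{
	return !(buddy->inactive_clean || buddy->inactive_dirty);
}

int main(void)
{
	struct toy_page buddy = { 1, 0 };	/* on InactiveClean */

	printf("queue freed page at the %s\n",
	       add_to_front(&buddy) ? "head" : "tail");
	return 0;
}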

The patch below has the previous heuristic, the
memory_pressure change, and the Inactive[Clean|Dirty]
buddying. Again, uncompiled and untested, against
2.4.9.
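
On the memory_pressure change: the idea is that pressure
should scale with pages, not with allocation calls, so an
order-3 allocation (8 pages) counts 8 times as much as an
order-0 one, and the decay on free shrinks by the same
amount. A toy standalone model (the NR_CPUS value is
picked arbitrarily for the demo):

#include <stdio.h>

#define NR_CPUS 1	/* illustrative value only */

static long memory_pressure;

/* Allocation side: pressure grows by the number of pages
 * handed out (1<<order), not by the number of calls. */
static void alloc_side(unsigned int order)
{
	memory_pressure += 1 << order;
}

/* Free side: decay by the same amount, keeping the patch's
 * floor, which is likewise scaled up by the order. */
static void free_side(unsigned int order)
{
	if (memory_pressure > (NR_CPUS << order))
		memory_pressure -= 1 << order;
}

int main(void)
{
	alloc_side(3);	/* one order-3 alloc counts as 8 pages */
	alloc_side(0);	/* one order-0 alloc counts as 1 page */
	printf("after allocs: %ld\n", memory_pressure);	/* 9 */

	free_side(3);	/* 9 > (1<<3), so subtract 8 */
	printf("after order-3 free: %ld\n", memory_pressure);	/* 1 */
	return 0;
}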

Daniel: Whitespace fixed - sorry about that.

--
Alex Bligh

--- mm/page_alloc.c.keep	Sun Sep  2 23:32:56 2001
+++ mm/page_alloc.c	Mon Sep  3 00:23:27 2001
@@ -69,6 +69,8 @@
 	struct page *base;
 	zone_t *zone;
 
+	int addfront=1;
+
 	if (page->buffers)
 		BUG();
 	if (page->mapping)
@@ -112,10 +114,29 @@
 		if (area >= zone->free_area + MAX_ORDER)
 			BUG();
 		if (!__test_and_change_bit(index, area->map))
-			/*
-			 * the buddy page is still allocated.
-			 */
-			break;
+		{
+			/*
+			 * The buddy page is still allocated.
+			 *
+			 * Test:
+			 * - If order=0, see if buddy is on Inactive list
+			 * - If order>0, see if buddy has only one 'half'
+			 *   used, rather than both
+			 * If the appropriate condition is true, then we
+			 * conclude the buddy may be free soon, so add
+			 * it to the tail of the queue. Else we
+			 * add it to the head.
+			 */
+			if (mask & 1) /* not order 0 merge */
+				addfront = ( !test_bit((index^1)<<1,
+						(area-1)->map) &&
+					!test_bit(((index^1)<<1) | 1,
+						(area-1)->map) );
+			else
+				addfront = !( PageInactiveDirty(base+(page_idx^-mask)) ||
+					PageInactiveClean(base+(page_idx^-mask)) );
+			break;
+		}
 		/*
 		 * Move the buddy up one level.
 		 */
@@ -132,7 +153,11 @@
 		index >>= 1;
 		page_idx &= mask;
 	}
-	memlist_add_head(&(base + page_idx)->list, &area->free_list);
+
+	if (addfront)
+		memlist_add_head(&(base + page_idx)->list, &area->free_list);
+	else
+		memlist_add_tail(&(base + page_idx)->list, &area->free_list);
 
 	spin_unlock_irqrestore(&zone->lock, flags);
 
@@ -141,8 +166,8 @@
 	 * since it's nothing important, but we do want to make sure
 	 * it never gets negative.
 	 */
-	if (memory_pressure > NR_CPUS)
-		memory_pressure--;
+	if (memory_pressure > (NR_CPUS << order))
+		memory_pressure-= 1<<order;
 }
 
 #define MARK_USED(index, order, area) \
@@ -288,7 +313,7 @@
 	/*
	 * Allocations put pressure on the VM subsystem.
	 */
-	memory_pressure++;
+	memory_pressure+= 1<<order;
 
 	/*
 	 * (If anyone calls gfp from interrupts nonatomically then it

