Re: [Suggestion] Simple memory defragmentation

kwrohrer@enteract.com
Thu, 15 Jan 1998 16:09:32 -0600 (CST)


And lo, Dr. Werner Fink saith unto me:
> the appended patch (UNTESTED!) tries to solve the memory
> fragmentation problem. The idea is simply to look for further
> free-able pages adjacent to a page that shrink_mmap (run from
> kswapd) has already found. This should reduce randomness while
> increasing the number of contiguous free pages. One disadvantage:
> kswapd shrinks the page cache more than it would without this
> approach.
On the other hand, some of this looks like cleanups that should be
applied regardless.

As a general comment, a solution which actually makes use of the
free lists (i.e. tries to make a free order-3 area from a free
order-2 area and its buddy) might have less impact on other stuff.
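(For concreteness: the buddy of a 2^order-page block at mem_map index
map_nr is the block whose index differs only in bit 'order'. The
helper below is just a sketch of the arithmetic such an approach
would rest on, not code from the patch:)

    /* Sketch only: blocks of 2^order pages pair up; the buddy's
     * mem_map index differs from ours only in bit 'order'.  A
     * free-list-driven defragmenter would walk the order-2 free list
     * and try to evict each block's buddy. */
    static inline unsigned long buddy_index(unsigned long map_nr,
                                            unsigned long order)
    {
            return map_nr ^ (1UL << order);
    }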

Another general comment: I don't see anything here tied to the actual
demand for free areas; it looks like a hardcoded order 2. If nothing
else, this should get merged with Zlatko Calusic(sp?)'s patch so we
only have to change one hardcoded number...
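(I.e. something along these lines, where 'defrag_target_order' is a
name I just made up; it should be whatever knob Zlatko's patch
actually exports:)

    /* Hypothetical: one shared tunable instead of two hardcoded 2s.
     * 'defrag_target_order' is a made-up name standing in for
     * whatever Zlatko's patch provides. */
    extern int defrag_target_order;

    #define DEFRAG_ORDER    defrag_target_order     /* was: 2 */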

> diff -ur linux-2.1.79/include/linux/pagemap.h linux/include/linux/pagemap.h
> --- linux-2.1.79/include/linux/pagemap.h Tue Jan 13 01:43:16 1998
> +++ linux/include/linux/pagemap.h Wed Jan 14 20:25:06 1998
> @@ -11,6 +11,7 @@
>
> #include <linux/mm.h>
> #include <linux/fs.h>
> +#include <linux/swapctl.h>
>
> static inline unsigned long page_address(struct page * page)
> {
> @@ -20,8 +21,6 @@
> #define PAGE_HASH_BITS 11
> #define PAGE_HASH_SIZE (1 << PAGE_HASH_BITS)
>
> -#define PAGE_AGE_VALUE 16
> -
> extern unsigned long page_cache_size; /* # of pages currently in the hash table */
> extern struct page * page_hash_table[PAGE_HASH_SIZE];
>
> @@ -84,7 +83,7 @@
> {
> page_cache_size++;
> set_bit(PG_referenced, &page->flags);
> - page->age = PAGE_AGE_VALUE;
> + touch_page(page);
> if((page->next_hash = *p) != NULL)
> (*p)->pprev_hash = &page->next_hash;
> *p = page;
This seems like it should be applied regardless; we want as few ways
of making a page young as we can easily get away with. We still have
two (one for a struct page *, one for a pte_t) even after this patch...
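(For reference, the swapctl.h helper goes roughly like this; I'm
paraphrasing from memory, so don't trust the details against 2.1.79.
A pte_t still gets its youth back through a separate path, hence
"two ways":)

    /* Paraphrased from memory, may not match 2.1.79 exactly:
     * touch_page() bumps the page toward "young", saturating at
     * MAX_PAGE_AGE. */
    static inline void touch_page(struct page *page)
    {
            if (page->age < (MAX_PAGE_AGE - PAGE_ADVANCE))
                    page->age += PAGE_ADVANCE;
            else
                    page->age = MAX_PAGE_AGE;
    }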

> diff -ur linux-2.1.79/mm/page_alloc.c linux/mm/page_alloc.c
> @@ -182,6 +184,7 @@
> nr_free_pages -= 1 << order; \
> EXPAND(ret, map_nr, order, new_order, area); \
> spin_unlock_irqrestore(&page_alloc_lock, flags); \
> + PageClearFree(ret); \
This can't be right: if order > 0, every page of the allocated group
except the first will still have its free flag set, which will make
the scanner give up early.
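To be right, it would have to clear the flag on every page of the
allocated block, something like this (untested sketch; 'ret' is the
struct page * of the block's first page, as in the macro above):

    int i;

    /* Untested: the allocation hands out 2^order contiguous pages,
     * so the free flag must be cleared on all of them, not just the
     * first. */
    for (i = 0; i < (1 << order); i++)
            PageClearFree(ret + i);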

> return ADDRESS(map_nr); \
> } \
> prev = ret; \
> @@ -334,7 +337,8 @@
> unsigned long page = __get_free_page(GFP_KERNEL);
>
> if (pte_val(*page_table) != entry) {
> - free_page(page);
> + if (page)
> + free_page(page);
> return;
> }
> if (!page) {
This looks like an omission on the part of the original authors...
on the other hand, page 0 ought to come up Reserved, so free_page(0)
shouldn't be a problem in practice.
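(Roughly, and again paraphrasing from memory rather than quoting
2.1.79: free_page(0) resolves to page frame 0, which is marked
reserved at boot, and the free path bails out on reserved pages:)

    /* Paraphrased from memory, not verbatim 2.1.79: why free_page(0)
     * is harmless in practice. */
    void free_pages(unsigned long addr, unsigned long order)
    {
            unsigned long map_nr = MAP_NR(addr);    /* 0 for addr 0 */

            if (map_nr < max_mapnr) {
                    mem_map_t *map = mem_map + map_nr;
                    if (PageReserved(map))
                            return;     /* page frame 0 ends here */
                    /* ... the real freeing work ... */
            }
    }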

Keith