Re: test1-ac22-classzone performance

From: Andrea Arcangeli (andrea@suse.de)
Date: Tue Jun 20 2000 - 10:32:58 EST


On Tue, 20 Jun 2000, Andrea Arcangeli wrote:

>If you still have the system in place could you show me the output of
>`vmstat 1` during the experiment?

I bet the loss of responsiveness is due to the stuff below (one of the two
changes is a bugfix), which is unrelated to classzone. I'm not sure yet
though; if you are curious, try it out, there's an untested patch at the end
of this email. I'll have some more reliable info soon... (stay tuned). BTW,
I got similar reports for 2.2.17pre too :( so maybe we should back out the
below stuff from the 2.2.17pre side as well (even if one of the changes is a
bugfix).

For free_before_allocate, the right fix that doesn't hurt everybody out
there is to put the freed pages in a private per-task list, but that will
take some more time to implement. For the other change, the right way to go
is to tell `cp` to generate fewer dirty pages, instead of forcing setiathome
to write cp's data to disk (only `cp & kupdate & kflushd` should do that, or
we'll hurt very innocent tasks). NOTE: I was also flushing the dirty buffers
from try_to_free_buffers in a 2.2.x-andrea patch, and that's a fine idea,
but I was using WRITEA (write ahead), so I was flushing a buffer _only_ if I
wasn't going to block; that's completely different from what the code below
does. At the moment WRITEA has been nuked from 2.4.x, so I'm nuking
sync_page_buffers too for now... (it can be resurrected if WRITEA is
resurrected too ;).

--- 2.4.0-test1-ac22-classzone-VM/fs/buffer.c.~1~ Tue Jun 20 00:55:39 2000
+++ 2.4.0-test1-ac22-classzone-VM/fs/buffer.c Tue Jun 20 17:21:00 2000
@@ -2379,25 +2379,6 @@
 #define BUFFER_BUSY_BITS ((1<<BH_Dirty) | (1<<BH_Lock) | (1<<BH_Protected))
 #define buffer_busy(bh) (atomic_read(&(bh)->b_count) | ((bh)->b_state & BUFFER_BUSY_BITS))
 
-static int sync_page_buffers(struct buffer_head * bh)
-{
-        struct buffer_head * tmp = bh;
-
-        do {
-                if (buffer_dirty(tmp) && !buffer_locked(tmp))
-                        ll_rw_block(WRITE, 1, &tmp);
-                tmp = tmp->b_this_page;
-        } while (tmp != bh);
-
-        do {
-                if (buffer_busy(tmp))
-                        return 1;
-                tmp = tmp->b_this_page;
-        } while (tmp != bh);
-
-        return 0;
-}
-
 /*
  * try_to_free_buffers() checks if all the buffers on this particular page
  * are unused, and free's the page if so.
@@ -2460,8 +2441,7 @@
         spin_unlock(&free_list[index].lock);
         write_unlock(&hash_table_lock);
         spin_unlock(&lru_list_lock);
-        if (!sync_page_buffers(bh))
-                goto again;
+        wakeup_bdflush(0);
         return 0;
 }
 
--- 2.4.0-test1-ac22-classzone-VM/include/linux/mmzone.h.~1~ Tue Jun 20 00:55:47 2000
+++ 2.4.0-test1-ac22-classzone-VM/include/linux/mmzone.h Tue Jun 20 17:21:55 2000
@@ -41,7 +41,6 @@
         unsigned long pages_min, pages_low, pages_high;
         int nr_zone;
         char zone_wake_kswapd;
-        atomic_t free_before_allocate;
 
         /*
          * free areas of different sizes
--- 2.4.0-test1-ac22-classzone-VM/mm/page_alloc.c.~1~ Tue Jun 20 00:55:48 2000
+++ 2.4.0-test1-ac22-classzone-VM/mm/page_alloc.c Tue Jun 20 17:21:44 2000
@@ -257,13 +257,6 @@
         if (current->flags & PF_MEMALLOC)
                 goto allocate_ok;
 
-        /* Somebody needs to free pages so we free some of our own. */
-        if (atomic_read(&classzone->free_before_allocate)) {
-                spin_unlock_irqrestore(freelist_lock, flags);
-                try_to_free_pages(gfpmask_zone->gfp_mask, classzone);
-                spin_lock_irq(freelist_lock);
-        }
-
         /* classzone based memory balancing */
         free_pages = classzone->classzone_free_pages;
         if (free_pages > classzone->pages_low) {
@@ -296,9 +289,7 @@
                         goto allocate_ok;
 
                 spin_unlock_irqrestore(freelist_lock, flags);
-                atomic_inc(&classzone->free_before_allocate);
                 freed = try_to_free_pages(gfpmask_zone->gfp_mask, classzone);
-                atomic_dec(&classzone->free_before_allocate);
                 spin_lock_irq(freelist_lock);
 
                 if (freed || gfpmask_zone->gfp_mask & __GFP_HIGH)
@@ -598,7 +589,6 @@
                 zone->free_pages = 0;
                 zone->zone_wake_kswapd = 0;
                 zone->classzone_free_pages = 0;
-                atomic_set(&zone->free_before_allocate, 0);
                 if (!size)
                         continue;
                 pgdat->nr_zones = j+1;

Andrea




This archive was generated by hypermail 2b29 : Fri Jun 23 2000 - 21:00:20 EST