Re: [PATCH v2] memcg: reduce lock time at move charge (Was Re: [PATCH 04/10] memcg: disable local interrupts in lock_page_cgroup())

From: Daisuke Nishimura
Date: Thu Oct 07 2010 - 21:27:23 EST


On Thu, 7 Oct 2010 16:14:54 -0700
Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:

> On Thu, 7 Oct 2010 17:04:05 +0900
> KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
>
> > Now, at task migration among cgroups, memory cgroup scans the page tables and
> > moves the accounting if the flags are properly set.
> >
> > The core code, mem_cgroup_move_charge_pte_range(), does:
> >
> >     pte_offset_map_lock();
> >     for all ptes in a page table:
> >         1. look into page table, find_and_get a page
> >         2. remove it from LRU.
> >         3. move charge.
> >         4. putback to LRU. put_page()
> >     pte_offset_map_unlock();
> >
> > for the pte entries of one (3rd-level?) page table.
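For reference, my rough sketch of that current structure is below: everything, including
the LRU isolation and the charge move, happens under one pte lock. This is only a sketch,
not the literal mm/memcontrol.c code; the helpers (is_target_pte_for_mc(),
mem_cgroup_move_account(), lookup_page_cgroup()) and their arguments are abbreviated from
my reading of the code and may not match exactly.

        /* one pte lock held across the whole per-pte work */
        pte = orig_pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
        for (; addr != end; addr += PAGE_SIZE, pte++) {
                struct page *page;

                if (!mc.precharge)
                        break;
                /* 1. look into the page table, find_and_get a page */
                if (is_target_pte_for_mc(vma, addr, *pte, &target) != MC_TARGET_PAGE)
                        continue;
                page = target.page;
                /* 2. remove it from LRU */
                if (!isolate_lru_page(page)) {
                        /* 3. move charge */
                        if (!mem_cgroup_move_account(lookup_page_cgroup(page),
                                                     mc.from, mc.to, false))
                                mc.precharge--;
                        /* 4. putback to LRU */
                        putback_lru_page(page);
                }
                put_page(page);         /* drop the reference taken by "find_and_get" */
        }
        pte_unmap_unlock(orig_pte, ptl);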
> >
> > This pte_offset_map_lock seems a bit long. This patch modifies the routine as follows:
> >
> >     for 32 pages:  pte_offset_map_lock()
> >                    find_and_get a page
> >                    record it
> >                    pte_offset_map_unlock()
> >     for all recorded pages
> >                    isolate it from LRU.
> >                    move charge
> >                    putback to LRU
> >     for all recorded pages
> >                    put_page()
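As I understand it, the batched flow would look roughly like the sketch below. Again,
this is only my sketch of the description above, not the patch itself; the batch size of
32 and mc.target[] follow the changelog, and the handling of swap entries, precharge
exhaustion and rescanning of the remaining ptes is omitted.

        /* phase 1: scan up to 32 ptes under the lock, only take page references */
        pte = orig_pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
        for (n = 0; n < 32 && addr != end; addr += PAGE_SIZE, pte++) {
                if (is_target_pte_for_mc(vma, addr, *pte, &mc.target[n]) ==
                    MC_TARGET_PAGE)
                        n++;    /* the page is pinned by the "find_and_get" above */
        }
        pte_unmap_unlock(orig_pte, ptl);

        /* phase 2: isolate, move charge and put back, without the pte lock */
        for (i = 0; i < n; i++) {
                struct page *page = mc.target[i].page;

                if (!isolate_lru_page(page)) {
                        if (!mem_cgroup_move_account(lookup_page_cgroup(page),
                                                     mc.from, mc.to, false))
                                mc.precharge--;
                        putback_lru_page(page);
                }
        }

        /* phase 3: drop the references taken in phase 1 */
        for (i = 0; i < n; i++)
                put_page(mc.target[i].page);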
>
> The patch makes the code larger, more complex and slower!
>
Before this patch:
   text    data     bss     dec     hex filename
  27163   11782    4100   43045    a825 mm/memcontrol.o

After this patch:
   text    data     bss     dec     hex filename
  27307   12294    4100   43701    aab5 mm/memcontrol.o

Hmm, allocating mc.target[] statically might be a bad idea; I'm now wondering
whether I could allocate mc itself dynamically (I'll try).

> I do think we're owed a more complete description of its benefits than
> "seems a bit long". Have problems been observed? Any measurements
> taken?
>
IIUC, this patch is necessary for the later "[PATCH] memcg: lock-free clear page
writeback", but I agree the changelog should describe that.

Thanks,
Daisuke Nishimura.