Re: [PATCH 5/5] Swapless V2: Revise main migration logic

From: KAMEZAWA Hiroyuki
Date: Tue Apr 18 2006 - 05:07:09 EST


On Tue, 18 Apr 2006 01:27:40 -0700 (PDT)
Christoph Lameter <clameter@xxxxxxx> wrote:

> On Tue, 18 Apr 2006, KAMEZAWA Hiroyuki wrote:
>
> > On Mon, 17 Apr 2006 23:58:41 -0700 (PDT)
> > Christoph Lameter <clameter@xxxxxxx> wrote:
> >
> > > Hmmm... Good ideas. I think it could be much simpler like the following
> > > patch.
> > >
> > > However, the problem here is how to know that we really took the anon_vma
> > > lock and what to do about a page being unmmapped while migrating. This
> > > could cause the anon_vma not to be unlocked.
> > >
> > lock dependency here is page_lock(page) -> page's anon_vma->lock.
> > So, I guess anon_vma->lock cannot be unlocked by other threads
> > if we have page_lock(page).
>
> No the problem is to know if the lock was really taken. SWAP_AGAIN could
> mean that page_lock_anon_vma failed.
>
Ah, I see, and now I understand what you did in http://lkml.org/lkml/2006/4/18/19

That can happen when migration takes the anon_vma->lock inside
try_to_unmap().
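
To put the ambiguity in code (an illustration only, not from either
patch; the fragment assumes the scheme where try_to_unmap() itself
takes anon_vma->lock in the migration case and leaves it held):
==
rc = try_to_unmap(page, 1);
if (rc == SWAP_AGAIN) {
	/*
	 * Either try_to_unmap_one() gave up with anon_vma->lock
	 * still held, or page_lock_anon_vma() failed and no lock
	 * was taken at all. The caller cannot tell which, so it
	 * does not know whether spin_unlock(&anon_vma->lock)
	 * is needed here.
	 */
}
==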

> Also the page may be freed while it is being processes. In that case
> remove_migration_ptes may not find the mapping and may not unlock the
> anon_vma.
>
My patch in http://lkml.org/lkml/2006/4/17/180 addresses that case.

This is how migrate_page() looks when the above patch is applied:
==
/*
 * Common logic to directly migrate a single page suitable for
 * pages that do not use PagePrivate.
 *
 * Pages are locked upon entry and exit.
 */
int migrate_page(struct page *newpage, struct page *page)
{
	int rc;
	struct anon_vma *anon_vma = NULL;	/* stays NULL for file-backed pages */

	BUG_ON(PageWriteback(page));	/* Writeback must be complete */

	if (PageAnon(page))
		anon_vma = page_lock_anon_vma(page);

	rc = migrate_page_remove_references(newpage, page,
					page_mapping(page) ? 2 : 1);
	if (rc) {
		remove_migration_ptes(anon_vma, page, page);
		goto unlock_out;
	}

	migrate_page_copy(newpage, page);
	remove_migration_ptes(anon_vma, page, newpage);

unlock_out:
	if (anon_vma)
		spin_unlock(&anon_vma->lock);
	return rc;
}
==

Here, locking and unlocking anon_vma->lock does not depend on the
result of try_to_unmap() or remove_migration_ptes().

But I agree: taking anon_vma->lock before try_to_unmap() is ugly and
complicated, and it will make things insane.

Will the attached patch make things clearer?

This anon_vma->lock is just an optimization (for now), but it is
complicated. I think it would be better to restart the discussion
against -mm3.
-Kame
==
Index: Christoph-NewMigrationV2/mm/rmap.c
===================================================================
--- Christoph-NewMigrationV2.orig/mm/rmap.c
+++ Christoph-NewMigrationV2/mm/rmap.c
@@ -711,29 +711,44 @@ static void try_to_unmap_cluster(unsigne
 	pte_unmap_unlock(pte - 1, ptl);
 }
 
-static int try_to_unmap_anon(struct page *page, int migration)
+static int __try_to_unmap_anon(struct anon_vma *anon_vma,
+			       struct page *page, int migration)
 {
-	struct anon_vma *anon_vma;
 	struct vm_area_struct *vma;
 	int ret = SWAP_AGAIN;
 
-	if (migration) { /* anon_vma->lock is held under migration */
-		unsigned long mapping;
-		mapping = (unsigned long)page->mapping - PAGE_MAPPING_ANON;
-		anon_vma = (struct anon_vma *)mapping;
-	} else {
-		anon_vma = page_lock_anon_vma(page);
-	}
-	if (!anon_vma)
-		return ret;
-
 	list_for_each_entry(vma, &anon_vma->head, anon_vma_node) {
 		ret = try_to_unmap_one(page, vma, migration);
 		if (ret == SWAP_FAIL || !page_mapped(page))
 			break;
 	}
-	if (!migration)
-		spin_unlock(&anon_vma->lock);
+	return ret;
+}
+
+static int try_to_unmap_anon(struct page *page)
+{
+	struct anon_vma *anon_vma;
+	int ret = SWAP_AGAIN;
+
+	anon_vma = page_lock_anon_vma(page);
+	if (!anon_vma)
+		return ret;
+	ret = __try_to_unmap_anon(anon_vma, page, 0);
+	spin_unlock(&anon_vma->lock);
+	return ret;
+}
+
+static int try_to_unmap_anon_migrate(struct page *page)
+{
+	struct anon_vma *anon_vma;
+	unsigned long mapping;
+	int ret = SWAP_AGAIN;
+
+	if (!PageAnon(page))
+		return ret;
+	mapping = (unsigned long)page->mapping;
+	anon_vma = (struct anon_vma *)(mapping - PAGE_MAPPING_ANON);
+	ret = __try_to_unmap_anon(anon_vma, page, 1);
 	return ret;
 }

@@ -851,9 +866,12 @@ int try_to_unmap(struct page *page, int
 
 	BUG_ON(!PageLocked(page));
 
-	if (PageAnon(page))
-		ret = try_to_unmap_anon(page, migration);
-	else
+	if (PageAnon(page)) {
+		if (migration)
+			ret = try_to_unmap_anon_migrate(page);
+		else
+			ret = try_to_unmap_anon(page);
+	} else
 		ret = try_to_unmap_file(page, migration);
 
 	if (!page_mapped(page))
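
With this split, a migration caller can pair the lock and unlock
explicitly. A minimal sketch of that caller side (not part of the
attached patch; the function name unmap_anon_page_for_migration() is
hypothetical, and a real caller would copy the page and call
remove_migration_ptes() between the unmap and the unlock):
==
static int unmap_anon_page_for_migration(struct page *page)
{
	struct anon_vma *anon_vma = NULL;
	int ret = SWAP_FAIL;

	if (PageAnon(page))
		anon_vma = page_lock_anon_vma(page);	/* NULL on failure */
	if (!anon_vma)
		return ret;

	ret = try_to_unmap(page, 1);	/* -> try_to_unmap_anon_migrate() */

	spin_unlock(&anon_vma->lock);	/* always paired with the lock above */
	return ret;
}
==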
