Re: Always passing mm and vma down (was: [RFC][PATCH] Convert do_no_page() to a hook to avoid DFS race)

From: Andrea Arcangeli (
Date: Sun Jun 01 2003 - 07:22:00 EST

Hi Paul,

On Sat, May 31, 2003 at 04:48:16PM -0700, Paul E. McKenney wrote:
> On Sat, May 31, 2003 at 10:46:18AM +0200, Ingo Oeser wrote:
> > On Fri, May 30, 2003 at 04:41:50PM -0700, Paul E. McKenney wrote:
> > > -struct page *
> > > -ia32_install_shared_page (struct vm_area_struct *vma, unsigned long address, int no_share)
> > > +int
> > > +ia32_install_shared_page (struct mm_struct *mm, struct vm_area_struct *vma, unsigned long address, int write_access, pmd_t *pmd)
> >
> > Why do we always pass mm and vma down, even if vma->vm_mm
> > contains the mm, where the vma belongs to? Is the connection
> > between a vma and its mm also protected by the mmap_sem?
> >
> > Is this really necessary or an oversight and we waste a lot of
> > stack in a lot of places?
> >
> > If we just need it for accounting: We need current->mm, if we
> > need it to locate the next vma relatively to this vma, vma->vm_mm
> > is the one.
> Interesting point. The original do_no_page() API does this
> as well:
> static int
> do_no_page(struct mm_struct *mm, struct vm_area_struct *vma,
> unsigned long address, int write_access, pte_t *page_table, pmd_t *pmd)
> As does do_anonymous_page(). I assumed that there were corner
> cases where this one-to-one correspondence did not exist, but
> must confess that I did not go looking for them.
> Or is this a performance issue, avoiding a dereference and
> possible cache miss?

it's a performance microoptimization issue (the stack memory overhead is
not significant): each new parameter to those calls will slow down the
kernel, and with the x86 calling conventions it will hurt more than on
other archs. Jeff did something similar for UML (I don't know if he can
also use vma->vm_mm; anyway I #ifdeffed it out entirely for non-UML
compiles for exactly this reason, that I can't slow down the whole
production host kernel just for the UML compile). Then there's also the
additional extern call. So basically the patch would hurt until there
are active users.

But the real question here I guess is: why should a distributed
filesystem need to install the pte by itself?

Memory coherency with distributed shared memory (i.e. MAP_SHARED and
truncate running on different boxes on top of a DFS) is a generic
problem (most of the time solved with the same algorithm too), so I
believe the infrastructure and logic to keep it coherent could make
sense as common code functionality.

And w/o proper high level locking, the non-distributed filesystems will
corrupt the VM too with truncate against nopage. I already fixed this in
my tree (see the attachment), so I wonder if the fixes could be shared.
I mean, you definitely need my fixes even when using the DFS on an
isolated box, and if you don't need them while using the fs locally, it
means we're duplicating effort somehow.

Since I don't see the users of the new hook, it's a bit hard to judge
whether the duplication is legitimate or not. So overall I'd agree with
Andrew that to judge the patch it'd make sense to see (or know more
about) the users of the hook too.

As for the anon memory: yes, it's probably more efficient and cleaner to
have it be a nopage callback too; the double branch is probably more
costly than the duplicated unlock anyway. However, it has nothing to do
with these issues, so I recommend keeping it separated from the DFS
patches (for readability). (It can't have anything to do with the DFS,
since that memory is anon, local to the box [if not using openMosix, but
that's a different issue ;) ].)



This archive was generated by hypermail 2b29 : Sat Jun 07 2003 - 22:00:14 EST