Re: REGRESSION: Performance regressions from switching anon_vma->lock to mutex

From: Linus Torvalds
Date: Fri Jun 17 2011 - 14:02:21 EST


On Fri, Jun 17, 2011 at 10:41 AM, Hugh Dickins <hughd@xxxxxxxxxx> wrote:
>
> Applying load with those two patches applied (combined patch shown at
> the bottom, in case you can tell me I misunderstood what to apply,
> and have got the wrong combination on), lockdep very soon protested.

Combined patch looks good; it's just the version where the NULL ptr tests haven't been removed yet.

And yup, that makes sense. Since we now hold the anon_vma lock over an
allocation, the allocation may go into direct reclaim to free things,
and reclaim then wants that same anon_vma lock.
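
The chain lockdep presumably objects to looks something like this
(simplified, and the exact call path here is an assumption on my part):

    mutex_lock(&anon_vma->root->mutex)      /* anon_vma lock taken */
      kmalloc(..., GFP_KERNEL)              /* allocation under the lock */
        -> direct reclaim
           -> page_referenced()
              -> page_referenced_anon()
                 -> mutex_lock(&anon_vma->root->mutex)  /* same lock class */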

> I've not given it _any_ thought, and won't be able to come back to
> it for a couple of hours: chucked over the wall for your delectation.

It's a mis-feature of "page_referenced()": we can call it without
holding any page locks ("is_locked=0"), and the function will then do
a trylock on the page, and just consider the page referenced if the
trylock fails.
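
The pattern in there is roughly this (a simplified sketch of
mm/rmap.c, not the exact code):

    int page_referenced(struct page *page, int is_locked,
                        struct mem_cgroup *mem_cont, unsigned long *vm_flags)
    {
            int referenced = 0;
            int we_locked = 0;

            if (!is_locked) {
                    /* Caller holds no page lock: try, don't block */
                    we_locked = trylock_page(page);
                    if (!we_locked) {
                            /* Contended: err on the side of "referenced" */
                            referenced++;
                            goto out;
                    }
            }

            /* ... walk the rmap and count referencing ptes ... */

            if (we_locked)
                    unlock_page(page);
    out:
            return referenced;
    }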

HOWEVER, the code that then takes the anon_vma lock doesn't do the
same trylock model, because it used to be a spinlock: so there was
"no need" (you can never do a non-atomic allocation from within a
spinlock, so reclaim could never recurse into a lock you were already
holding).
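
So today the anon side just blocks, roughly like this (a sketch;
since the conversion, page_lock_anon_vma() ends up in mutex_lock()
and can sleep):

    static int page_referenced_anon(struct page *page,
                                    struct mem_cgroup *mem_cont,
                                    unsigned long *vm_flags)
    {
            int referenced = 0;
            struct anon_vma *anon_vma;

            anon_vma = page_lock_anon_vma(page);    /* blocking lock */
            if (!anon_vma)
                    return referenced;

            /* ... walk the same_anon_vma list, sum page_referenced_one() ... */

            page_unlock_anon_vma(anon_vma);
            return referenced;
    }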

So this is arguably a bug in memory reclaim, but since the
spinlock->mutex conversion had been pretty mindless, nobody noticed
until the region held under the mutex grew to cover an allocation.

So I do think that "page_referenced_anon()" should do a trylock, and
return "referenced" if the trylock fails. Comments?

That said, we have a few other mutexes that are just not allowed to be
held over an allocation. page_referenced_file() has that
mapping->i_mmap_mutex lock, for example. So maybe the rule just has to
be "you cannot hold anon_vma lock over an allocation". Which would be
sad: one of the whole _points_ of turning it from a spinlock to a
mutex would be that it relaxes the locking rules a lot (and not just
the preemptibility).
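
To spell out what would be given up (illustrative fragment only; the
lock names before/after the conversion are from memory):

    /* Under the old spinlock only atomic allocation was legal: */
    spin_lock(&anon_vma->lock);
    p = kmalloc(size, GFP_ATOMIC);  /* never sleeps, never reclaims */
    spin_unlock(&anon_vma->lock);

    /* The mutex was supposed to also allow the friendlier form: */
    mutex_lock(&anon_vma->root->mutex);
    p = kmalloc(size, GFP_KERNEL);  /* may sleep and enter reclaim ... */
    mutex_unlock(&anon_vma->root->mutex); /* ... which wants this lock */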

Linus