Re: [PATCH 1/1] mm: prevent a race between process_mrelease and exit_mmap

From: Suren Baghdasaryan
Date: Tue Nov 09 2021 - 16:10:55 EST


On Tue, Nov 9, 2021 at 12:10 PM Michal Hocko <mhocko@xxxxxxxx> wrote:
>
> On Tue 09-11-21 12:02:37, Suren Baghdasaryan wrote:
> > On Tue, Nov 9, 2021 at 11:50 AM Michal Hocko <mhocko@xxxxxxxx> wrote:
> > >
> > > On Tue 09-11-21 11:37:06, Suren Baghdasaryan wrote:
> > > > On Tue, Nov 9, 2021 at 11:26 AM Michal Hocko <mhocko@xxxxxxxx> wrote:
> > > > >
> > > > > On Tue 09-11-21 11:01:02, Suren Baghdasaryan wrote:
> > > > > [...]
> > > > > > While discussing how the patch I want to post works with the maple
> > > > > > tree patchset Matthew is working on, I ran into a question:
> > > > > >
> > > > > > IIUC, according to Michal's post here:
> > > > > > https://lore.kernel.org/all/20170725154514.GN26723@xxxxxxxxxxxxxx,
> > > > > > unmap_vmas() can race with other mmap_lock read holders (including
> > > > > > oom_reap_task_mm()) with no issues.
> > > > > > The maple tree patchset requires the RCU read lock or the mmap
> > > > > > semaphore to be held (read or write side) when walking the tree,
> > > > > > including inside unmap_vmas(). When asked, he told me that he is not
> > > > > > sure why it's currently "safe" to walk the vma->vm_next list in
> > > > > > unmap_vmas() while another thread is reaping the mm.
> > > > > > Michal (or maybe someone else), could you please clarify why
> > > > > > unmap_vmas() can safely race with oom_reap_task_mm()? Or maybe my
> > > > > > understanding was wrong?
> > > > >
> > > > > I cannot really comment on the maple tree part. But the existing
> > > > > synchronization between oom reaper and exit_mmap is based on
> > > > > - oom_reaper takes mmap_sem for reading
> > > > > - exit_mmap sets MMF_OOM_SKIP and takes the exclusive mmap_sem before
> > > > > unmap_vmas.
> > > > >
> > > > > The oom_reaper therefore either unmaps the address space if it takes
> > > > > the lock before exit_mmap, or it bails out on MMF_OOM_SKIP if it
> > > > > takes the lock afterwards. So the reaper cannot race with
> > > > > unmap_vmas.
> > > >
> > > > I see. So, it's the combination of MMF_OOM_SKIP and the mmap_lock
> > > > working as a barrier which prevents them from racing with each other...
> > > > I wasn't sure how
> > > > https://lore.kernel.org/all/20170724072332.31903-1-mhocko@xxxxxxxxxx/
> > > > was implementing this synchronization, because it would take the
> > > > mmap_sem write side after unmap_vmas() and IIUC there was no
> > > > "mmap_write_lock(); mmap_write_unlock();" sequence in exit_mmap at
> > > > that time. I'll need to check out the old sources to figure this out.
> > >
> > > My memory is rather dim but AFAIR the main problem was freeing page
> > > tables and freeing vmas, not unmap_vmas. That one was not modifying the
> > > vma list. Essentially it was just a slightly modified
> > > madvise(MADV_DONTNEED). So that part was allowed to race with the
> > > oom_reaper.
> >
> > So, both unmap_vmas and __oom_reap_task_mm do not modify the vma list
> > and can therefore execute concurrently. That makes sense, thanks.
>
> Yes, those can run concurrently. One thing I completely forgot about is
> 27ae357fa82b ("mm, oom: fix concurrent munlock and oom reaper unmap, v3"),
> which is about the interaction with munlock.

Thanks for pointing it out. IIUC, ideally we want to get rid of all
these special cases and replace them with proper locking. If so, I'll
see what I can do here.
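
For my own reference, the barrier pattern you describe would look roughly
like this (a simplified sketch of how I understand it, not the actual
kernel code; I'm using the current locking helper names and eliding the
teardown details):

#include <linux/mm.h>
#include <linux/oom.h>
#include <linux/sched/coredump.h>	/* MMF_OOM_SKIP */

/* exit path: sketch of the relevant part of exit_mmap() */
static void exit_mmap_sketch(struct mm_struct *mm)
{
	if (unlikely(mm_is_oom_victim(mm))) {
		/* Publish that the exit path now owns the teardown. */
		set_bit(MMF_OOM_SKIP, &mm->flags);
		/*
		 * The empty lock/unlock pair acts as a barrier: a reaper
		 * that took the read lock earlier has finished by the time
		 * we get the write lock; one that takes the lock later
		 * sees MMF_OOM_SKIP and bails out.
		 */
		mmap_write_lock(mm);
		mmap_write_unlock(mm);
	}
	/* ... unmap_vmas(), free_pgtables(), remove_vma() loop ... */
}

/* reaper side: sketch of the relevant part of oom_reap_task_mm() */
static bool oom_reap_task_mm_sketch(struct mm_struct *mm)
{
	if (!mmap_read_trylock(mm))
		return false;		/* lock contended, retry later */

	if (test_bit(MMF_OOM_SKIP, &mm->flags)) {
		mmap_read_unlock(mm);
		return true;		/* exit_mmap() got there first */
	}

	/* ... __oom_reap_task_mm(mm): reap eligible private memory ... */
	mmap_read_unlock(mm);
	return true;
}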

>
> > Then I guess, if we want to be semantically correct in exit_mmap(), we
> > would have to take mmap_read_lock before unmap_vmas, then drop it and
> > take mmap_write_lock before free_pgtables.
>
> I think it would be just more straightforward to take the exclusive lock
> for the whole operation.

Ok, but note that this will prevent concurrent memory reaping, and so
will likely affect the speed at which memory is released during an
oom-kill. I saw a measurable difference when testing process_mrelease
with mmap_write_lock placed before vs. after unmap_vmas. If we take
mmap_read_lock before unmap_vmas and mmap_write_lock after it, then
there won't be such an issue. You indicated that the speed of memory
release should not be the deciding factor here, but I want to make that
clear before proceeding.
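
To make the comparison concrete, the two orderings look roughly like this
(sketch only; the tlb/vma setup and the function arguments are elided):

/*
 * Variant A: exclusive lock around the whole teardown (your
 * suggestion). The reaper is locked out for the entire exit.
 */
mmap_write_lock(mm);
unmap_vmas(/* &tlb, vma, ... */);
free_pgtables(/* &tlb, vma, ... */);
/* ... remove_vma() loop ... */
mmap_write_unlock(mm);

/*
 * Variant B: read lock around unmap_vmas() so the reaper can still
 * run concurrently; take the write lock only for the parts that
 * modify the vma list and free the page tables.
 */
mmap_read_lock(mm);
unmap_vmas(/* &tlb, vma, ... */);
mmap_read_unlock(mm);

mmap_write_lock(mm);
free_pgtables(/* &tlb, vma, ... */);
/* ... remove_vma() loop ... */
mmap_write_unlock(mm);
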
Thanks,
Suren.

> --
> Michal Hocko
> SUSE Labs