Re: [PATCH 1/2] mm, oom: do not rely on TIF_MEMDIE for memory reserves access

From: Michal Hocko
Date: Thu Aug 03 2017 - 03:06:21 EST


On Thu 03-08-17 10:39:42, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Wed 02-08-17 00:30:33, Tetsuo Handa wrote:
> > > > @@ -3603,6 +3612,22 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
> > > > return alloc_flags;
> > > > }
> > > >
> > > > +static bool oom_reserves_allowed(struct task_struct *tsk)
> > > > +{
> > > > + if (!tsk_is_oom_victim(tsk))
> > > > + return false;
> > > > +
> > > > + /*
> > > > + * !MMU doesn't have oom reaper so we shouldn't risk the memory reserves
> > > > + * depletion and shouldn't give access to memory reserves past
> > > > + * exit_mm
> > > > + */
> > > > + if (!IS_ENABLED(CONFIG_MMU) && !tsk->mm)
> > > > + return false;
> > >
> > > Branching based on CONFIG_MMU is ugly. I suggest timeout-based
> > > selection of the next OOM victim if CONFIG_MMU=n.
> >
> > I suggest we do not argue about nommu without actually optimizing for or
> > fixing nommu which we are not here. I am even not sure memory reserves
> > can ever be depleted for that config.
>
> I don't think memory reserves can be depleted in a CONFIG_MMU=n
> environment. But the reason the OOM reaper was introduced is not
> limited to handling depletion of memory reserves. The OOM reaper was
> introduced because OOM victims might get stuck indirectly, waiting for
> other threads that are doing memory allocations. You said
>
> > Yes, exit_aio is the only blocking call I know of currently. But I would
> > like this to be as robust as possible and so I do not want to rely on
> > the current implementation. This can change in future and I can
> > guarantee that nobody will think about the oom path when adding
> > something to the final __mmput path.
>
> at http://lkml.kernel.org/r/20170726054533.GA960@xxxxxxxxxxxxxx , but
> if nobody will think about the oom path when adding something to the
> final __mmput() path, how can you dismiss the possibility of getting
> stuck waiting for a memory allocation in a CONFIG_MMU=n environment?

Look, I really appreciate your sentiment for the nommu platform, but
with an absolute lack of _any_ OOM reports on that platform that I am
aware of, nor any reports about lockups during OOM, I am less than
thrilled to add code to fix a problem which might not even exist. Nommu
is usually very special, with a very specific workload running (e.g. no
overcommit), so I strongly suspect that any OOM theories are highly
academic.

All I care about here is not regressing nommu as far as possible. So can
we get back to the proposed patch and the updates I have made to address
your review feedback, please?
--
Michal Hocko
SUSE Labs