Re: [Question] Should direct reclaim time be bounded?

From: Michal Hocko
Date: Thu Jul 11 2019 - 03:12:49 EST


On Wed 10-07-19 16:36:58, Mike Kravetz wrote:
> On 7/10/19 12:44 PM, Michal Hocko wrote:
> > On Wed 10-07-19 11:42:40, Mike Kravetz wrote:
> > [...]
> >> As Michal suggested, I'm going to do some testing to see what impact
> >> dropping the __GFP_RETRY_MAYFAIL flag for these huge page allocations
> >> will have on the number of pages allocated.
> >
> > Just to clarify. I didn't mean to drop __GFP_RETRY_MAYFAIL from the
> > allocation request. I meant to drop the special casing of the flag in
> > should_continue_reclaim. I really have a hard time arguing for this
> > special casing, TBH. The flag is meant to retry harder, but that shouldn't
> > be reduced to a single reclaim attempt, because that alone doesn't really
> > help much with the high-order allocation. It is more about compaction
> > being retried harder.
>
> Thanks, Michal. That is indeed what you suggested earlier. I remembered
> incorrectly. Sorry.
>
> Removing the special casing for __GFP_RETRY_MAYFAIL in should_continue_reclaim
> implies that it will return false if nothing was reclaimed (nr_reclaimed == 0)
> in the previous pass.
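
For reference, the special case in question looks roughly like this
(paraphrased from should_continue_reclaim() in mm/vmscan.c of that era;
exact variable names and details differ between kernel versions):

        /* Consider stopping depending on scan and reclaim activity */
        if (sc->gfp_mask & __GFP_RETRY_MAYFAIL) {
                /*
                 * Retry-harder callers only give up once a full pass
                 * neither reclaimed nor scanned anything.
                 */
                if (!nr_reclaimed && !nr_scanned)
                        return false;
        } else {
                /* Everyone else stops as soon as a pass reclaims nothing. */
                if (!nr_reclaimed)
                        return false;
        }

Dropping the special case collapses the first branch into the second,
i.e. reclaim stops whenever nr_reclaimed == 0, as you describe.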
>
> When I make such a modification and test it, I see long stalls as a result
> of should_compact_retry returning true too often. On a system I am currently
> testing, should_compact_retry has returned true 36000000 times. My guess
> is that this may stall forever. Vlastimil previously asked about this behavior,
> so I am capturing the reason. As before [1], should_compact_retry is returning
> true mostly because compaction_withdrawn() sees a COMPACT_DEFERRED result.

This smells like a problem to me. But somebody more familiar with
compaction should comment.
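
For completeness, the check that keeps firing here is roughly the following
(paraphrased from should_compact_retry() in mm/page_alloc.c; details vary
between kernel versions):

        /*
         * Make sure compaction wasn't deferred and didn't bail out early
         * due to lock contention before we declare that we should give up.
         */
        if (compaction_withdrawn(compact_result)) {
                /*
                 * COMPACT_DEFERRED and COMPACT_PARTIAL_SKIPPED both count
                 * as "withdrawn", so the decision falls back to whether
                 * the zonelist looks compactable at all. If it does, we
                 * keep retrying, which matches the counts below.
                 */
                ret = compaction_zonelist_suitable(ac, order, alloc_flags);
                goto out;
        }

If I read it right, this path never raises the compaction priority, so a
zone that keeps deferring compaction can keep the retry loop going.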

>
> 36000000 Total
> 35437500 COMPACT_DEFERRED
>   562500 COMPACT_PARTIAL_SKIPPED
>
>
> [1] https://lkml.org/lkml/2019/6/5/643
> --
> Mike Kravetz

--
Michal Hocko
SUSE Labs