Re: [PATCH 1/1] mm: count time in drain_all_pages during direct reclaim as memory pressure

From: Suren Baghdasaryan
Date: Wed Feb 23 2022 - 14:06:39 EST


On Wed, Feb 23, 2022 at 10:54 AM Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
>
> On Sun, Feb 20, 2022 at 08:52:38AM -0800, Suren Baghdasaryan wrote:
> > On Sat, Feb 19, 2022 at 4:40 PM Minchan Kim <minchan@xxxxxxxxxx> wrote:
> > >
> > > On Sat, Feb 19, 2022 at 09:49:40AM -0800, Suren Baghdasaryan wrote:
> > > > When page allocation in direct reclaim path fails, the system will
> > > > make one attempt to shrink per-cpu page lists and free pages from
> > > > high alloc reserves. Draining per-cpu pages into buddy allocator can
> > > > be a very slow operation because it's done using workqueues and the
> > > > task in direct reclaim waits for all of them to finish before
> > >
> > > Yes, drain_all_pages is seriously slow (100ms - 150ms on Android),
> > > especially when CPUs are fully packed. It was also spotted in CMA
> > > allocation even when there was no memory pressure.
> >
> > Thanks for the input, Minchan!
> > In my tests I've seen 50-60ms delays in a single drain_all_pages but I
> > can imagine there are cases worse than these.
> >
> > >
> > > > proceeding. Currently this time is not accounted as psi memory stall.
> > >
> > > Good spot.
> > >
> > > >
> > > > While testing mobile devices under extreme memory pressure, when
> > > > allocations are failing during direct reclaim, we noticed that psi
> > > > events which would be expected in such conditions were not triggered.
> > > > After profiling these cases it was determined that the reason for
> > > > missing psi events was that a big chunk of time spent in direct
> > > > reclaim is not accounted as memory stall, therefore psi would not
> > > > reach the levels at which an event is generated. Further investigation
> > > > revealed that the bulk of that unaccounted time was spent inside
> > > > drain_all_pages call.
> > > >
> > > > Annotate drain_all_pages and unreserve_highatomic_pageblock during
> > > > page allocation failure in the direct reclaim path so that delays
> > > > caused by these calls are accounted as memory stall.
> > > >
> > > > Reported-by: Tim Murray <timmurray@xxxxxxxxxx>
> > > > Signed-off-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>
> > > > ---
> > > > mm/page_alloc.c | 4 ++++
> > > > 1 file changed, 4 insertions(+)
> > > >
> > > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > > index 3589febc6d31..7fd0d392b39b 100644
> > > > --- a/mm/page_alloc.c
> > > > +++ b/mm/page_alloc.c
> > > > @@ -4639,8 +4639,12 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
> > > >  	 * Shrink them and try again
> > > >  	 */
> > > >  	if (!page && !drained) {
> > > > +		unsigned long pflags;
> > > > +
> > > > +		psi_memstall_enter(&pflags);
> > > >  		unreserve_highatomic_pageblock(ac, false);
> > > >  		drain_all_pages(NULL);
> > > > +		psi_memstall_leave(&pflags);
> > >
> > > Instead of annotating the specific drain_all_pages, how about
> > > moving the annotation from __perform_reclaim to
> > > __alloc_pages_direct_reclaim?
> >
> > I'm fine with that approach too. Let's wait for Johannes' input before
> > I make any changes.
>
> I think the change makes sense, even if the workqueue fix speeds up
> the drain. I agree with Minchan about moving the annotation upward.
>
> With it moved, please feel free to add
> Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>

Thanks Johannes!
I'll move psi_memstall_enter/psi_memstall_leave from __perform_reclaim
into __alloc_pages_direct_reclaim so that the whole function is covered.
After that I'll continue working on the workqueue fix.