Re: [PATCH -mm] throttle direct reclaim when too many pages are isolated already

From: Andrew Morton
Date: Wed Jul 15 2009 - 23:52:09 EST


On Wed, 15 Jul 2009 23:42:28 -0400 Rik van Riel <riel@xxxxxxxxxx> wrote:

> Andrew Morton wrote:
> > On Wed, 15 Jul 2009 23:28:14 -0400 Rik van Riel <riel@xxxxxxxxxx> wrote:
>
> >> If we are stuck at this point in the page reclaim code,
> >> it is because too many other tasks are reclaiming pages.
> >>
> >> That makes it fairly safe to just return SWAP_CLUSTER_MAX
> >> here and hope that __alloc_pages() can get a page.
> >>
> >> After all, if __alloc_pages() thinks it made progress,
> >> but still cannot make the allocation, it will call the
> >> pageout code again.
> >
> > Which will immediately return because the caller still has
> > fatal_signal_pending()?
>
> Other processes are in the middle of freeing pages at
> this point, so we should succeed in __alloc_pages()
> fairly quickly (and then die and free all our memory).

What if it's a uniprocessor machine and all those processes are
scheduled out? We sit there chewing 100% CPU and not doing anything
afaict.

Even if it _is_ SMP, we could still chew decent-sized blips of CPU time
rattling around waiting for something to happen.
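To spell that out, here is a pseudo-code sketch of my reading of how the
__alloc_pages() slow path interacts with that early return; the names
are real but the control flow is simplified, not the actual code:

for (;;) {
        /* Returns SWAP_CLUSTER_MAX immediately: fatal_signal_pending() */
        progress = try_to_free_pages(zonelist, order, gfp_mask);

        /* The other reclaimers haven't run yet, so nothing is free. */
        page = get_page_from_freelist(gfp_mask, order, ...);
        if (page)
                break;

        /* progress > 0 looks like forward progress: retry, no sleeping. */
}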