Re: [PATCH -mm] throttle direct reclaim when too many pages are isolated already

From: Rik van Riel
Date: Wed Jul 15 2009 - 23:43:15 EST


Andrew Morton wrote:
> On Wed, 15 Jul 2009 23:28:14 -0400 Rik van Riel <riel@xxxxxxxxxx> wrote:
>
>> If we are stuck at this point in the page reclaim code,
>> it is because too many other tasks are reclaiming pages.
>>
>> That makes it fairly safe to just return SWAP_CLUSTER_MAX
>> here and hope that __alloc_pages() can get a page.
>>
>> After all, if __alloc_pages() thinks it made progress,
>> but still cannot make the allocation, it will call the
>> pageout code again.
>
> Which will immediately return because the caller still has
> fatal_signal_pending()?

Other processes are in the middle of freeing pages at
this point, so we should succeed in __alloc_pages()
fairly quickly (and then die and free all our memory).
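
For reference, a rough standalone sketch of the throttling logic being
discussed, compilable as plain userspace C. Only too_many_isolated() and
SWAP_CLUSTER_MAX mirror names from mm/vmscan.c; the stand-in state and the
simplified wait are illustrative assumptions, not the actual kernel code:

#include <stdbool.h>
#include <stdio.h>

#define SWAP_CLUSTER_MAX 32

/* Stand-ins for kernel state; values are assumed for illustration. */
static bool fatal_signal = true;   /* task has a fatal signal pending     */
static long isolated_pages = 1000; /* pages other reclaimers already took */
static long inactive_pages = 500;

/* Roughly the condition checked before isolating more pages. */
static bool too_many_isolated(void)
{
	return isolated_pages > inactive_pages;
}

/*
 * Sketch of the reclaim path: when too many pages are already isolated
 * by other tasks, a caller that is being killed claims SWAP_CLUSTER_MAX
 * pages of "progress" and bails out, so __alloc_pages() retries the
 * allocation while the other reclaimers finish freeing pages and the
 * dying task exits and releases its own memory.
 */
static unsigned long shrink_list_sketch(void)
{
	while (too_many_isolated()) {
		if (fatal_signal)
			return SWAP_CLUSTER_MAX;
		/* The kernel would sleep here and retest; the sketch stops. */
		break;
	}

	/* ... normal page isolation and reclaim would follow here ... */
	return 0;
}

int main(void)
{
	printf("reported progress: %lu pages\n", shrink_list_sketch());
	return 0;
}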

--
All rights reversed.