Re: [PATCH] vmscan: do not throttle kthreads due to too_many_isolated

From: Michal Hocko
Date: Fri Nov 27 2015 - 07:50:17 EST


On Thu 26-11-15 11:16:24, Vladimir Davydov wrote:
> On Wed, Nov 25, 2015 at 07:27:57PM +0300, Vladimir Davydov wrote:
> > On Wed, Nov 25, 2015 at 04:45:13PM +0100, Vlastimil Babka wrote:
> > > On 11/25/2015 04:36 PM, Vladimir Davydov wrote:
> > > > Block device drivers often hand off io request processing to kernel
> > > > threads (example: device mapper). If such a thread calls kmalloc, it can
> > > > dive into direct reclaim path and end up waiting for too_many_isolated
> > > to return false, blocking writeback. This can lead to a deadlock if the
> > >
> > > Shouldn't such allocation lack __GFP_IO to prevent this and other kinds of
> > > deadlocks? And/or have mempools?
> >
> > Not necessarily. loopback is an example: it can call
> > grab_cache_page_write_begin -> add_to_page_cache_lru with GFP_KERNEL.

AFAIR the loop driver reduces the gfp_mask via the inode mapping.
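
Roughly something like this when the backing file is set up (a sketch from
memory, so the exact field and variable names may differ from the current
tree):

	struct address_space *mapping = file->f_mapping;

	/*
	 * Remember the original mask and strip __GFP_IO/__GFP_FS so that
	 * page cache allocations for this mapping cannot recurse back
	 * into the loop device.
	 */
	lo->old_gfp_mask = mapping_gfp_mask(mapping);
	mapping_set_gfp_mask(mapping, lo->old_gfp_mask & ~(__GFP_IO | __GFP_FS));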

> Anyway, kthreads that use GFP_NOIO and/or mempool aren't safe either,
> because it isn't an allocation context problem: the reclaimer locks up
> not because it tries to take an fs/io lock the caller holds, but because
> it waits for isolated pages to be put back, which will never happen,
> since processes that isolated them depend on the kthread making
> progress. This is purely a reclaimer heuristic, which kmalloc users are
> not aware of.
>
> My point is that, in contrast to userspace processes, it is dangerous to
> throttle kthreads in the reclaimer, because they might be responsible
> for reclaimer progress (e.g. performing writeback).
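
For reference, the throttling in question is the wait loop at the top of
shrink_inactive_list(), roughly (paraphrased, not the exact upstream code):

	/*
	 * Direct reclaimers are stalled here until enough isolated pages
	 * have been put back on the LRU by whoever isolated them.
	 */
	while (unlikely(too_many_isolated(zone, file, sc))) {
		congestion_wait(BLK_RW_ASYNC, HZ/10);

		/* let a dying task bail out instead of looping forever */
		if (fatal_signal_pending(current))
			return SWAP_CLUSTER_MAX;
	}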

Wouldn't it be better if your writeback kthread used PF_MEMALLOC/__GFP_MEMALLOC
instead? It is in fact a reclaimer, so it wouldn't even get into direct reclaim
and hit this throttling in the first place.
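
A minimal sketch of what I mean, with a made-up driver helper
(my_process_one_request() is hypothetical, and a real implementation would
save and restore the old flag value instead of clearing it unconditionally):

	static int my_io_kthread(void *data)
	{
		while (!kthread_should_stop()) {
			/*
			 * Declare the thread a reclaimer: its allocations may
			 * dip into the memory reserves and skip direct reclaim
			 * entirely, so it never blocks in too_many_isolated().
			 */
			current->flags |= PF_MEMALLOC;
			my_process_one_request(data);
			current->flags &= ~PF_MEMALLOC;
		}
		return 0;
	}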

There are way too many allocations done from kernel thread context for all of
them to go unthrottled (just look at the worker threads).

--
Michal Hocko
SUSE Labs