Re: [PATCH 8/8] vm: Add a tuning knob for vm.max_writeback_mb

From: Wu Fengguang
Date: Fri Oct 02 2009 - 06:35:41 EST


On Fri, Oct 02, 2009 at 05:54:59PM +0800, Jan Kara wrote:
> On Fri 02-10-09 10:25:12, Wu Fengguang wrote:
> > On Fri, Oct 02, 2009 at 05:35:23AM +0800, Jan Kara wrote:
> > > On Thu 01-10-09 22:54:43, Wu Fengguang wrote:
> > > > > > > You probably didn't understand my comment in the previous email. This is
> > > > > > > too late to wake up all the tasks. There are two limits - background_limit
> > > > > > > (set to 5%) and dirty_limit (set to 10%). When the amount of dirty data is
> > > > > > > above background_limit, we start the writeback but we don't throttle tasks
> > > > > > > yet. We start throttling tasks only when the amount of dirty data on the bdi
> > > > > > > exceeds the part of the dirty limit belonging to the bdi. In the case of a
> > > > > > > single bdi, this means we start throttling threads only when 10% of memory
> > > > > > > is dirty. To keep this behavior, we have to wake up waiting threads as soon
> > > > > > > as their BDI gets below the dirty limit or when the global number of dirty
> > > > > > > pages gets below (background_limit + dirty_limit) / 2.
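
To make the two-limit scheme concrete, here is a simplified userspace
sketch of the logic Jan describes; the names and the per-bdi split are
illustrative, not the actual mm/page-writeback.c code:

    #include <stdbool.h>

    /* Illustrative thresholds, e.g. 5% and 10% of dirtyable memory as in
     * the default dirty_background_ratio / dirty_ratio above. */
    struct dirty_limits {
            unsigned long background_thresh;  /* start background writeback */
            unsigned long dirty_thresh;       /* start throttling dirtiers  */
    };

    /* Background writeback starts first... */
    bool need_background_writeback(unsigned long nr_dirty,
                                   const struct dirty_limits *dl)
    {
            return nr_dirty > dl->background_thresh;
    }

    /* ...but a task is throttled only once its bdi's share of the dirty
     * limit is exceeded (with a single bdi, the full dirty_thresh). */
    bool need_throttle(unsigned long bdi_dirty, unsigned long bdi_thresh)
    {
            return bdi_dirty > bdi_thresh;
    }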
> > > > > >
> > > > > > Sure, but the design goal is to wake up the throttled tasks in the
> > > > > > __bdi_writeout_inc() path instead of here. As long as some (background)
> > > > > > writeback is running, __bdi_writeout_inc() will be called to wake up
> > > > > > the tasks. This "unthrottle all on exit of background writeback" is
> > > > > > merely a safeguard, since once background writeback (which could be
> > > > > > queued by the throttled task itself, in bdi_writeback_wait) exits, the
> > > > > > calls to __bdi_writeout_inc() are likely to stop.
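
To illustrate the intended wakeup path, a simplified userspace sketch:
throttled tasks queue FIFO, and each completed writeout credits the head
waiter, releasing tasks one by one. The list and field names are made up
for illustration; this is not the actual patch:

    /* A task throttled in balance_dirty_pages() enqueues itself here and
     * sleeps until enough pages have been written on its behalf. */
    struct wb_waiter {
            struct wb_waiter *next;
            unsigned long remaining;    /* pages still to be written */
    };

    static struct wb_waiter *wb_waiters;    /* FIFO head */

    /* Called once per completed writeout (the __bdi_writeout_inc() path):
     * wake waiters one by one instead of all at once. */
    static void writeout_inc(void)
    {
            struct wb_waiter *w = wb_waiters;

            if (w && --w->remaining == 0) {
                    wb_waiters = w->next;
                    /* wake_up_process(...) here in a real implementation */
            }
    }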
> > > > > The thing is: In the old code, tasks returned from balance_dirty_pages()
> > > > > as soon as we got below dirty_limit, regardless of how much they managed to
> > > > > write. So we want to wake them up from waiting as soon as we get below the
> > > > > dirty limit (maybe a bit later so that they don't immediately block again
> > > > > but I hope you get the point).
> > > >
> > > > Ah, good catch! However, overshooting the threshold by 1MB (maybe more with
> > > > concurrent dirtiers) should not be a problem. As you said, that avoids the
> > > > task being immediately blocked again.
> > > >
> > > > The old code does the dirty_limit check in an opportunistic manner. There was
> > > > no guarantee. 2.6.32 further weakens it with the removal of the congestion backoff.
> > > Sure, there are no guarantees, but if we let threads sleep in
> > > balance_dirty_pages longer than necessary it will have a performance impact
> > > (applications will sleep instead of doing useful work). So we'd better
> > > make sure applications sleep as little as necessary in balance_dirty_pages.
> >
> > To avoid long sleeps, we limit the write_chunk size for balance_dirty_pages.
> > That's all we need. The "abort earlier if below dirty_limit" logic is
> > unnecessary (or even undesirable) for three reasons.
> > - I just found that pre-2.6.31 kernels will normally succeed in writing
> > the whole write_chunk because nonblocking=0, so they won't back off on
> > congestion. So it's not over_bground_thresh() but over_dirty_limit()
> > that will change behavior.
> OK, good point.
>
> > - whether we abort on over_bground_thresh() or over_dirty_limit(),
> > there is some roughly constant threshold around which applications are
> > throttled. The exact threshold level won't change the throttled
> > dirty throughput, which is determined by the write IO throughput the
> > block device can handle.
> But the aim is to throttle applications at a higher limit than the limit at
> which we start pdflush-style writeback, so that if the writeback thread is fast
> enough to flush the data, applications don't get throttled at all. That's
> the reason for the difference between dirty_thresh and background_thresh.

When doing over_bground_thresh(), the real threshold won't be far from
dirty_limit (a small arithmetic sketch follows this list).
- for a single dirtier, the threshold may be (dirty_limit - 4MB).
- for N dirtiers, it may be (dirty_limit - N*1MB) in the worst case (the
  ratelimit will back off on dirty_exceeded). However, it's highly
  unlikely to reach the worst case: with so many dirtiers and so much
  dirtying pressure, even a small fraction of "unthrottled at the
  moment" dirtiers can pump the dirty pages up to the dirty limit.
  Since the dirtiers are unthrottled one by one, it is unlikely for
  them all to block at the same time. Statistically, the larger N is,
  the lower the probability that all N processes enqueue at the same
  time; if each blocks independently with probability p, all N block
  together with probability p^N, an exponentially decreasing function of N.
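
A back-of-the-envelope helper for the worst case above; the 4MB and
N*1MB steps are the figures from the text, not kernel constants:

    /* How far below dirty_limit the effective throttling threshold can
     * fall, per the estimates above (illustrative numbers only). */
    unsigned long worst_case_thresh_mb(unsigned long dirty_limit_mb,
                                       unsigned long nr_dirtiers)
    {
            if (nr_dirtiers <= 1)
                    return dirty_limit_mb - 4;       /* single dirtier */
            return dirty_limit_mb - nr_dirtiers;     /* N * 1MB, worst case */
    }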

> > - The over_bground_thresh() check is merely a safeguard, irrelevant
> > 99.9% of the time. But if raised to over_dirty_limit(), it may become
> > a hot wakeup path comparable to the __bdi_writeout_inc() path. The
> > problem with this wakeup path is that it is a "wake up all". It's
> > preferable to wake up processes one by one in __bdi_writeout_inc().
> Well, it depends on the number of applications writing data (if there are
> 100 threads writing data, the last would get unblocked only after 400 MB
> are written, assuming ratelimit_pages = 1024). So in this case chances are
> high that quite a few threads will get woken up because we reach even
> background_thresh.
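
Spelling out the arithmetic in that example (a standalone sketch; the
100MB/s disk throughput is an assumed figure, not from the discussion):

    #include <stdio.h>

    int main(void)
    {
            /* ratelimit_pages = 1024 with 4KB pages: ~4MB per throttled
             * task, so the last of 100 tasks waits for 400MB of writeback. */
            unsigned long threads = 100, ratelimit_pages = 1024, page_kb = 4;
            unsigned long total_mb = threads * ratelimit_pages * page_kb / 1024;
            unsigned long disk_mb_s = 100;   /* assumed throughput */

            printf("writeback needed: %luMB, worst-case wait: ~%lus\n",
                   total_mb, total_mb / disk_mb_s);
            return 0;
    }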

There is such a chance, but its probability should be extremely low :)

> What I'm in fact a bit worried about is the latency - in the example
> above it can take quite a long time for an application to be woken in
> balance_dirty_pages (that's not a new problem, I agree). When the threads

No worry, it's fine :) The over_dirty_limit() check could make things
better, but it is not a guarantee. In fact there is no guarantee of
latency at all when there are so many dirtiers competing for the IO channel.

> are continuously writing lots of data, there's no way around this. But
> when it was just a short spike of IO, we'd win if we woke those threads
> up earlier. But OK, probably we can sort that out later.

Yes, in this case it would be beneficial. The good thing is that the
over_dirty_limit() check would be trivial to add if necessary; a rough
sketch follows.
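
If it ever proves necessary, the check could look roughly like this
(names are illustrative, not an actual patch):

    #include <stdbool.h>

    /* Let a throttled task leave balance_dirty_pages() as soon as global
     * dirty pages drop back below the dirty limit. */
    bool over_dirty_limit(unsigned long nr_dirty, unsigned long dirty_thresh)
    {
            return nr_dirty > dirty_thresh;
    }

    /*
     * In the throttle loop, one would then do:
     *
     *      if (!over_dirty_limit(nr_dirty, dirty_thresh))
     *              break;  // the short IO spike is over, stop waiting
     */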

Thanks,
Fengguang