Re: RFC - how to balance Dirty+Writeback in the face of slow writeback.

From: Jens Axboe
Date: Thu Aug 17 2006 - 04:31:51 EST

On Wed, Aug 16 2006, Andrew Morton wrote:
> On Thu, 17 Aug 2006 13:59:41 +1000
> Neil Brown <neilb@xxxxxxx> wrote:
> > > CFQ used to have 1024 requests and we did have problems with excessive
> > > numbers of writeback pages. I fixed that in 2.6.early, but that seems to
> > > have got lost as well.
> > >
> >
> > What would you say constitutes "excessive"? Is there any sense in
> > which some absolute number is excessive (as it takes too long to scan
> > some list) or is it just a percent-of-memory thing?
> Excessive = 100% of memory dirty or under writeback against a single disk
> on a 512MB machine. Perhaps that problem just got forgotten about when CFQ
> went from 1024 requests down to 128. (That 128 was actually
> 64-available-for-read+64-available-for-write, so it's really 64 requests).

That's not quite true: if you set nr_requests to 128, that's 128 for
reads and 128 for writes. With batching you will actually typically
see 128 * 3 / 2 == 192 requests allocated, which translates to about
96MiB of dirty data on the queue if everything works smoothly. The 3/2
limit is quite new; before I introduced it, if you had a lot of writers
each of them would be allowed 16 requests over the limit. So you would
sometimes see huge queues: with just e.g. 16 writers, you could have
128 + 16*16 requests allocated.

I've always been of the opinion that the vm should handle all of this,
and things should not change or break if I set 10000 as the request
limit. Per-process rate-of-dirtying throttling sounds like a really
good idea; we badly need to prevent the occasional writer (like a process
doing sync reads, and getting stuck in slooow reclaim) from being
throttled in the presence of a heavy dirtier.

Jens Axboe
