Re: RFC - how to balance Dirty+Writeback in the face of slow writeback.
From: Andrew Morton
Date: Thu Aug 17 2006 - 02:21:45 EST
On Thu, 17 Aug 2006 13:59:41 +1000
Neil Brown <neilb@xxxxxxx> wrote:
> > CFQ used to have 1024 requests and we did have problems with excessive
> > numbers of writeback pages. I fixed that in 2.6.early, but that seems to
> > have got lost as well.
> What would you say constitutes "excessive"? Is there any sense in
> which some absolute number is excessive (as it takes too long to scan
> some list) or is it just a percent-of-memory thing?
Excessive = 100% of memory dirty or under writeback against a single disk
on a 512MB machine. Perhaps that problem just got forgotten about when CFQ
went from 1024 requests down to 128. (That 128 was actually
64-available-for-read+64-available-for-write, so it's really 64 requests).
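The percent-of-memory notion of "excessive" later became tunable in mainline via the vm.dirty_ratio-style sysctls. As an illustrative sketch (not anything from this thread), here is the arithmetic for the 512MB machine mentioned above, assuming a hypothetical 10% dirty ratio:

```shell
#!/bin/sh
# Illustrative only: how many kB may be dirty before writers get throttled,
# for the 512MB machine from the thread and an assumed 10% dirty ratio
# (in the style of the vm.dirty_ratio sysctl; the 10% figure is hypothetical).
mem_kb=$((512 * 1024))
ratio=10
thresh_kb=$((mem_kb * ratio / 100))
echo "$thresh_kb"
```

On those assumptions the threshold works out to roughly 52 MB of dirty data, far below the "100% of memory dirty against one disk" situation described above.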
> > Something like that - it'll be relatively simple.
> Unfortunately I think it is also relatively simple to get it badly
> wrong:-) Make one workload fast, and another slower.
I think it's unlikely in this case. As long as we keep the queues
reasonably full, the disks will be running flat-out and merging will be as
good as we're going to get.
One thing one does have to watch out for is the many-disks scenario: do
concurrent dd's onto 12 disks and make sure that none of their LEDs go
out. This is actually surprisingly hard to do, but it would be very hard
to do worse than 2.4.x ;)
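A minimal sketch of that many-disks test, writing to ordinary files under /tmp so it is safe to run anywhere; to reproduce the real scenario you would point the output paths at separate block devices (e.g. /dev/sdb, /dev/sdc, ...) and watch the activity LEDs, as described above:

```shell
#!/bin/sh
# Sketch of the concurrent-dd test. Uses 4 small files for brevity;
# the thread talks about 12 real disks. All writers run in parallel,
# and the goal in the real test is that none of them ever goes idle.
for i in 1 2 3 4; do
    dd if=/dev/zero of=/tmp/ddtest.$i bs=1M count=8 2>/dev/null &
done
wait                            # all four writers finish
ls /tmp/ddtest.* | wc -l        # how many output files were produced
rm -f /tmp/ddtest.*
```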