Re: IO scheduler based IO controller V10

From: Mike Galbraith
Date: Fri Oct 02 2009 - 11:32:26 EST


On Fri, 2009-10-02 at 17:27 +0200, Corrado Zoccolo wrote:
> On Fri, Oct 2, 2009 at 2:49 PM, Vivek Goyal <vgoyal@xxxxxxxxxx> wrote:
> > On Fri, Oct 02, 2009 at 12:55:25PM +0200, Corrado Zoccolo wrote:
> >
> > Actually I am not touching this code. Looking at V10, I have not
> > changed anything in the idling code here.
>
> I based my analysis on the original patch:
> http://lkml.indiana.edu/hypermail/linux/kernel/0907.1/01793.html
>
> Mike, can you confirm which version of the fairness patch you used
> in your tests?

That would be this one-liner.

o CFQ provides fair access to the disk in terms of the disk time allocated
to processes. Fairness is provided for applications whose think time is
within the slice_idle limit (8ms by default).

o CFQ currently disables idling for seeky processes. So even if a process
has a think time within the slice_idle limit, it will still not get its fair
share of the disk. Disabling idling for a seeky process is good from a
throughput perspective, but not necessarily from a fairness perspective.

o Do not disable idling based on the seek pattern of a process if the user
has set /sys/block/<disk>/queue/iosched/fairness = 1.

Signed-off-by: Vivek Goyal <vgoyal@xxxxxxxxxx>
---
block/cfq-iosched.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

Index: linux-2.6/block/cfq-iosched.c
===================================================================
--- linux-2.6.orig/block/cfq-iosched.c
+++ linux-2.6/block/cfq-iosched.c
@@ -1953,7 +1953,7 @@ cfq_update_idle_window(struct cfq_data *
 	enable_idle = old_idle = cfq_cfqq_idle_window(cfqq);
 
 	if (!atomic_read(&cic->ioc->nr_tasks) || !cfqd->cfq_slice_idle ||
-	    (cfqd->hw_tag && CIC_SEEKY(cic)))
+	    (!cfqd->cfq_fairness && cfqd->hw_tag && CIC_SEEKY(cic)))
 		enable_idle = 0;
 	else if (sample_valid(cic->ttime_samples)) {
 		if (cic->ttime_mean > cfqd->cfq_slice_idle)
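
For completeness: the hunk above only reads cfqd->cfq_fairness; the field
itself and the /sys/block/<disk>/queue/iosched/fairness entry come from the
rest of the fairness patchset, not from this one-liner. A rough sketch of how
such a knob is typically wired up in cfq-iosched.c, using the existing
SHOW_FUNCTION/STORE_FUNCTION/CFQ_ATTR helpers (the default value and the 0..1
bounds are assumptions here, not taken from the patch):

	/* new member in struct cfq_data, next to the other tunables */
	unsigned int cfq_fairness;	/* 0 = current behaviour, 1 = keep idling for seeky queues */

	/* default, set in cfq_init_queue() */
	cfqd->cfq_fairness = 0;

/* sysfs show/store pair, same pattern as the other CFQ tunables */
SHOW_FUNCTION(cfq_fairness_show, cfqd->cfq_fairness, 0);
STORE_FUNCTION(cfq_fairness_store, &cfqd->cfq_fairness, 0, 1, 0);

/* entry in cfq_attrs[] so the file appears under queue/iosched/ */
CFQ_ATTR(fairness),

With something like that in place, the behaviour is selected per device with
"echo 1 > /sys/block/<disk>/queue/iosched/fairness", trading some throughput
for fairness to seeky processes whose think time still fits within
slice_idle.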

