Re: [RESEND][RFC] BFQ I/O Scheduler

From: Jens Axboe
Date: Thu Apr 17 2008 - 09:55:07 EST


On Thu, Apr 17 2008, Aaron Carroll wrote:
> Fabio Checconi wrote:
> >>From: Aaron Carroll <aaronc@xxxxxxxxxxxxxxx>
> >>How do you figure that? This is a situation where time-slices work
> >>nicely, because they implicitly account for the performance penalty
> >>of poor access patterns. The sequential-accessing processes (and the
> >>system overall) end up with higher throughput.
> >>
> >
> >The unfairness is not WRT tasks generating poor access patterns.
> >If you have two tasks doing sequential accesses on two different
> >regions of the disk, the exact amount of service they receive in
> >the same amount of time depends on the transfer rate of the disk
> >in those regions, and, depending on the media, that rate is not
> >always the same.
>
> Ok... you're talking about ZBR.
>
> I'm not convinced this should be treated differently to, say, random vs.
> sequential workloads. You still end up with reduced global throughput, as
> you've shown in the ``Short-term time guarantees'' table. It is an
> interesting case though... since the lower performance is not the fault
> of the process, it doesn't seem fair to ``punish'' it.

It is indeed a valid observation, but I think we are still getting into
details here. CFQ wants to provide fair access to the drive in terms of
disk time; it doesn't claim to be 100% fair wrt throughput or transfer
sums at all costs. This is where fairness and real life for an all-round
scheduler diverge somewhat.
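
To make that distinction concrete, here is a toy user-space sketch
(hypothetical zone rates, not CFQ or BFQ code) of what 'equal time'
versus 'equal service' would hand two sequential readers sitting on
zones with different transfer rates:

/*
 * Toy user-space sketch, hypothetical zone rates, not CFQ or BFQ code:
 * contrast "equal disk time" with "equal service" (bytes moved) for
 * two sequential readers on zones with different transfer rates.
 */
#include <stdio.h>

int main(void)
{
	double rate_a = 60.0, rate_b = 30.0;	/* MB/s, hypothetical */
	double disk_time = 1.0;		/* one second of disk time to share */

	/* Time fairness: each reader gets half the disk time. */
	printf("equal time:    A=%.1f MB  B=%.1f MB\n",
	       rate_a * disk_time / 2, rate_b * disk_time / 2);

	/* Service fairness: each reader gets the same bytes, so the
	 * slower zone eats the larger share of the disk time. */
	double mb_each = disk_time / (1.0 / rate_a + 1.0 / rate_b);
	printf("equal service: %.1f MB each, A uses %.0f%% and B uses %.0f%% of the time\n",
	       mb_each, 100.0 * mb_each / rate_a / disk_time,
	       100.0 * mb_each / rate_b / disk_time);

	return 0;
}

Under equal time the faster zone simply moves more bytes; under equal
service the slower zone eats the larger share of the disk's time, which
is where the reduced global throughput mentioned above comes from.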

So while it IS true that you could have 40MB/sec at one end of the drive
and 65MB/sec at the other, and thus give the process at the start of the
disk an 'unfair' share of bandwidth, it's honestly mostly a theoretical
problem. I can envision some valid concerns for media streaming setups
that fill the entire drive, but then my solution would simply be to bump
the time slice if you are not meeting deadlines. I've never heard anyone
complain about this issue.
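
And just to sketch what I mean by bumping the slice (names and
thresholds below are made up for illustration, this is not CFQ code):
grow a task's slice when it keeps missing its deadline, and decay it
back once it is meeting them again.

/*
 * Rough user-space sketch of the "bump the time slice" idea above.
 * All names and thresholds are made up; this is not CFQ (or BFQ) code.
 */
#include <stdbool.h>
#include <stdio.h>

struct slice_state {
	unsigned int slice_ms;	/* current slice length        */
	unsigned int missed;	/* consecutive deadline misses */
};

enum { BASE_SLICE_MS = 100, MAX_SLICE_MS = 400, MISS_THRESHOLD = 2 };

/* Call once per completed slice with whether the task met its deadline. */
static void update_slice(struct slice_state *s, bool met_deadline)
{
	if (met_deadline) {
		s->missed = 0;
		/* decay back toward the base slice when things are fine */
		if (s->slice_ms > BASE_SLICE_MS)
			s->slice_ms -= (s->slice_ms - BASE_SLICE_MS) / 2;
		return;
	}
	if (++s->missed >= MISS_THRESHOLD && s->slice_ms < MAX_SLICE_MS) {
		s->slice_ms = s->slice_ms * 2 > MAX_SLICE_MS ?
			      MAX_SLICE_MS : s->slice_ms * 2;
		s->missed = 0;
	}
}

int main(void)
{
	struct slice_state s = { BASE_SLICE_MS, 0 };
	bool history[] = { true, false, false, false, false, true, true };

	for (unsigned int i = 0; i < sizeof(history) / sizeof(history[0]); i++) {
		update_slice(&s, history[i]);
		printf("slice %u: %s -> %u ms\n", i,
		       history[i] ? "met" : "missed", s.slice_ms);
	}
	return 0;
}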

--
Jens Axboe
