Re: IO scheduler based IO controller V10

From: Ingo Molnar
Date: Fri Oct 02 2009 - 13:21:32 EST



* Jens Axboe <jens.axboe@xxxxxxxxxx> wrote:

> It's not _that_ easy, it depends a lot on the access patterns. A good
> example of that is actually the idling that we already do. Say you
> have two applications, each starting up. If you start them both at the
> same time and just care about dumb low latency, then you'll do one
> IO from each of them in turn. Latency will be good, but throughput
> will be awful. And this means that in 20s they are both started,
> while with the slice idling and priority disk access that CFQ does,
> you'd hopefully have both up and running in 2s.
>
> So latency is good, definitely, but sometimes you have to worry about
> the bigger picture too. Latency is more than single IOs, it's often
> for a complete operation which may involve lots of IOs. Single IO
> latency is a benchmark thing, it's not a real life issue. And that's
> where it becomes complex and not so black and white. Mike's test is a
> really good example of that.

To the extent that you are arguing that Mike's test is artificial (i'm
not sure you are arguing that) - Mike certainly did not run an
artificial test: he measured 'konsole' cache-cold startup latency,
such as:

sh -c "perf stat -- konsole -e exit" 2>&1|tee -a $LOGFILE

against a streaming dd.
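
For reference, a minimal sketch of how such a run might be set up - the
dd target, its size and $LOGFILE are assumptions here, not Mike's exact
script:

LOGFILE=konsole-startup.log

# make sure konsole starts cache-cold
sync
echo 3 > /proc/sys/vm/drop_caches

# streaming writer competing for the disk
dd if=/dev/zero of=/tmp/streamfile bs=1M count=4096 &
DD_PID=$!

# measure cache-cold konsole startup latency under that load
sh -c "perf stat -- konsole -e exit" 2>&1|tee -a $LOGFILE

kill $DD_PID
rm -f /tmp/streamfile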

That is a _very_ relevant benchmark IMHO, and konsole's cache footprint
is far from trivial. (In fact i'd argue it's one of the most important
IO benchmarks on a desktop system - how well does your desktop hold up
to something doing streaming IO?)
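
(As an aside: the slice idling Jens describes above is tunable per
device via CFQ's iosched sysfs directory - a quick sketch for comparing
behaviour with idling disabled; the device name sda is just an example:)

cat /sys/block/sda/queue/scheduler                  # check that cfq is the active scheduler
cat /sys/block/sda/queue/iosched/slice_idle         # idle window, in milliseconds
echo 0 > /sys/block/sda/queue/iosched/slice_idle    # disable idling for comparison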

Ingo