Re: More on 2.2.18pre2aa2

From: Andrea Arcangeli (andrea@suse.de)
Date: Wed Sep 13 2000 - 08:33:06 EST


On Wed, 13 Sep 2000, Stephen C. Tweedie wrote:

>You could use number-of-sectors, but that can't distinguish between
>random access and sequential access.

We actually use the number of `buffer heads` that enter the
ll_rw_block layer (if all the fses have a 4k blocksize it becomes the number
of sectors).
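
(For concreteness, a toy sketch of that bookkeeping with made-up names,
not the real ll_rw_block code:)

/* Toy illustration only: the elevator cost of the I/O is accounted in
 * buffer heads as they are submitted through ll_rw_block(), one unit
 * per buffer head, whatever the blocksize happens to be. */
struct elevator_account {
	unsigned long units;	/* buffer heads queued so far */
};

static void elevator_account_bhs(struct elevator_account *acc, int nr_bh)
{
	acc->units += nr_bh;
}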

>might improve things too, however. We could easily reduce the
>elevator latency targets --- right down to maybe 5 or 10 --- if
>we didn't count merges against the latencies. That would still give

I think we have to count merges against the latencies if we want to
provide good latency. The big requests are the most latency-problematic
ones.

And anyway, all those discussions ignore the major point: as long as we
don't account for the implicit latency that a request picks up once it's
inserted into the queue, we have no way to provide good latency whatever
else we change. Even elevator_noop has no way to provide good latency at
the moment. What will happen with elevator_noop right now is this (the
same thing will happen if you disable the elevator in 2.2.15, of course):

        write some mbyte
        read some kbyte
        write some mbyte
        read some kbyte

The real elevator (not the noop one) will decrease the rate of the reads
even further in order to reduce the number of seeks (it's an elevator
after all, while elevator_noop does nothing :).

The reason I'm ok with the above behaviour is that we still allow the
`read some kbyte` to happen in the middle of the writes, and that fixes the
"starvation while somebody is writing to disk" problem; of course it
also helps people doing fsync and journal commits. But there's no way you
can run `find` and get fast output on the screen (as you can with
2.2.16, for example).

So all we need is a reasonable limit on "request passing" that doesn't show
up in benchmarks (like 250/500 with the new logic; I tried it and it was ok)
and that still allows those reads in the middle to happen at a reasonable,
not extremely low, rate.
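
To make the idea concrete, here is a very rough sketch of how such a pass
limit can be enforced at insertion time. This is my own illustration, not
the actual 2.2.18pre code, and READ_BUDGET/WRITE_BUDGET just stand in for
limits in the 250/500 range mentioned above:

/* Every queued request carries a pass budget; a newly inserted request
 * may only jump ahead of requests whose budget is still positive, and
 * every request it jumps ahead of pays one unit. */

#define READ_BUDGET	250
#define WRITE_BUDGET	500

struct req {
	long sector;		/* starting sector of the request */
	int passes_left;	/* how many later requests may still pass it */
	struct req *next;	/* singly linked queue, head dispatched first */
};

static void elevator_insert(struct req **head, struct req *new, int is_read)
{
	struct req **slot, **p;
	struct req *q;

	new->passes_left = is_read ? READ_BUDGET : WRITE_BUDGET;

	/* 1. The insertion point may not be ahead of any request whose
	 *    pass budget is already exhausted: that one has waited long
	 *    enough and nothing may overtake it any more. */
	slot = head;
	for (p = head; *p; p = &(*p)->next)
		if ((*p)->passes_left == 0)
			slot = &(*p)->next;

	/* 2. From there on, plain sector ordering (the elevator part). */
	while (*slot && (*slot)->sector <= new->sector)
		slot = &(*slot)->next;

	/* 3. Every request the newcomer jumps ahead of pays one unit
	 *    of its pass budget. */
	for (q = *slot; q; q = q->next)
		if (q->passes_left > 0)
			q->passes_left--;

	new->next = *slot;
	*slot = new;
}

The point is only that a read can still slip ahead of queued writes, but
never ahead of one that has already been passed its full budget.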

>We could even try being more advanced, and account out-of-order
>non-merge queue inserts as being more costly than merges: decrement
>request latencies by one on a merge, as we do now, but decrement them
>by a higher number on other request inserts.

This certainly makes sense; it would be a very nice addition to the heuristic.
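
Extending the sketch above, that weighting could look roughly like this
(MERGE_COST and SEEK_INSERT_COST are made-up names and values, just to
show the shape of it):

/* Hypothetical weighting, reusing struct req from the earlier sketch:
 * a merge against an existing request costs the waiting requests one
 * unit of their pass budget, while a brand new out-of-order insert,
 * which implies an extra seek, costs them more. */
#define MERGE_COST		1
#define SEEK_INSERT_COST	4	/* arbitrary value for illustration */

static void charge_passed_requests(struct req *first_passed, int is_merge)
{
	int cost = is_merge ? MERGE_COST : SEEK_INSERT_COST;
	struct req *q;

	for (q = first_passed; q; q = q->next)
		q->passes_left = q->passes_left > cost ?
				 q->passes_left - cost : 0;
}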

>The point is that you don't actually _have_ to measure in absolute
>wall-clock time, but simply ignoring the fact that some requests are
>faster than others isn't a very good answer either.

I 100% agree.

Andrea



