Re: elevator-starvation-4 (2.2.14 && 2.3.42)

From: Bruce Thompson (bruce@otherother.com)
Date: Thu Feb 10 2000 - 21:04:54 EST


I tend to disagree. If we assume that "cat /dev/zero > file" fills
consecutive blocks with zeroes, then I would argue that the relative
priority between the driver servicing block requests and user space
generating them is slightly screwy. If requests are being generated
faster than the driver can deal with them, then no matter how we
order the request queue, we've got a problem.

I would argue that as long as the relative priority between user
space and driver space is such that a swamped driver can handle at
least N + 1 requests in the time that user space can generate N
requests, we have a stable situation and the driver can eventually
catch up and service all requests. Otherwise, I claim I can come up
with a pathological case for any aging scheme that will result in
requests being starved. Even a "pure" elevator algorithm can leave
requests on the queue for a potentially long time. BUT, that's not
the same as indefinite postponement (which is how I interpreted the
word "starved"). Indefinite postponement simply cannot happen with a
correct elevator implementation. It may take a while to service a
request, but there is a guaranteed maximum delay (absurd though it
may be in the worst case).
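
To make that concrete, here's a toy one-way elevator (C-SCAN
flavour, if you like) over a sorted request list. To be clear, this
is NOT the kernel's ll_rw_blk.c code, just an illustration of the
bounded-delay property: a request that lands behind the head waits
for the wrap, but never more than one full sweep.

/*
 * Toy one-way elevator. A request behind the head is skipped on the
 * current pass and picked up after the wrap, so its delay is bounded
 * by one full sweep no matter how many new requests arrive.
 */
#include <stdio.h>
#include <stdlib.h>

struct request {
	unsigned long sector;		/* target sector */
	struct request *next;
};

static struct request *queue;		/* pending requests, sorted */
static unsigned long head_pos;		/* current head position */

/* Insert sorted by sector. A request behind the head still goes in
   the list; the sweep below just won't reach it until the wrap. */
static void add_request(unsigned long sector)
{
	struct request **p = &queue;
	struct request *rq = malloc(sizeof(*rq));

	rq->sector = sector;
	while (*p && (*p)->sector < sector)
		p = &(*p)->next;
	rq->next = *p;
	*p = rq;
}

/* Service every request from head_pos upward, then wrap to zero.
   Requests ahead of the head join the current pass; requests behind
   it wait exactly one pass, which is the guaranteed maximum delay. */
static void sweep(void)
{
	struct request **p = &queue;

	while (*p) {
		if ((*p)->sector >= head_pos) {
			struct request *rq = *p;
			printf("servicing sector %lu\n", rq->sector);
			head_pos = rq->sector;
			*p = rq->next;
			free(rq);
		} else {
			p = &(*p)->next;
		}
	}
	head_pos = 0;			/* wrap for the next sweep */
}

int main(void)
{
	head_pos = 75;
	add_request(100);
	add_request(50);		/* behind the head */
	sweep();			/* services 100; 50 waits */
	sweep();			/* services 50: late, but bounded */
	return 0;
}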

Let me restate my question, then: is the goal to ensure that the
elevator algorithm does not result in indefinite postponement? If
so, then I assert that a correct implementation of the elevator
algorithm is guaranteed to prevent it. Or is the goal to ensure that
any given request is not delayed for an unreasonable length of time
in the face of a flood of requests, and in particular that there are
no pathological cases (or, more realistically, that any pathological
cases are difficult to hit in practice)? In that case, I would argue
that a fruitful line of attack is to ensure that the driver can
service N + 1 requests in the time that N requests are generated,
and hence that the request queue cannot grow to unbounded length,
which I think is the real crux of the issue. In my opinion the
/dev/zero example is nothing more (and, please note, nothing less)
than a pathological case for the current implementation, and it
ought to be addressed as such.
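
For what it's worth, the queue-growth half of the argument is just
arithmetic. Here's a throwaway simulation (the rates are invented,
nothing more) showing that the sign of (srv - gen) is all that
matters: a driver that retires more requests per tick than user
space generates will drain any backlog, while a slower one grows it
without bound.

/*
 * Throwaway illustration of the N vs. N + 1 argument. The numbers
 * are made up; the point is only that srv > gen drains any backlog,
 * srv < gen grows it forever, and srv == gen merely freezes it.
 */
#include <stdio.h>

int main(void)
{
	int gen = 10;			/* requests generated per tick */
	int srv;

	for (srv = 9; srv <= 11; srv++) {
		long backlog = 100;	/* a burst already queued */
		int tick;

		for (tick = 0; tick < 1000; tick++) {
			backlog += gen;		/* user space queues */
			backlog -= srv;		/* driver retires */
			if (backlog < 0)
				backlog = 0;
		}
		printf("srv = %2d: backlog after 1000 ticks = %ld\n",
		       srv, backlog);
	}
	return 0;
}

The srv == gen case is why I say N + 1 rather than N: the backlog
never grows, but it never drains either, so the driver can never
catch up from a burst.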

        Cheers,
        Bruce.

At 10:41 -0500 02/10/2000, Olivier Galibert wrote:
>On Thu, Feb 10, 2000 at 06:27:50AM -0800, Bruce Thompson wrote:
>> That's the basic algorithm I learned in an OS course from my
>> undergrad degree nearly 15 years ago, have I missed something major?
>
>The fact that eventually can be too far in the future to matter. Just
>try "cat /dev/zero >file". You'll have to wait until either your
>partition fills or you hit the 2Gb limit before you can do anything.
>That takes far too long.
>
> OG.

-- 
------------------------------------------------------------------------
Bruce Thompson                  | "Pinky! Are you pondering what I'm
Palm Computing, Inc.            |  pondering?"
                                 | "Uh, I think so Brain, but where will
                                 |  we find a duck and a hose at this
bruce@otherother.com            |  hour?"
                    My opinions are strictly my own.

PGP Fingerprint: 8F48 7FEF EE22 14FB 1F2E 3BF2 0D40 9628 53E8 72EB
