Re: Bandwidth limits??

Wayne Schlitt (wayne@backbone.midwestcs.com)
Tue, 22 Oct 96 15:21 CDT


> Wayne Schlitt said:
>
> > Personally, I think this should be done in the kernel somehow and
> > apply it to any/all shared I/O devices. A large 'tar x' should not
> > kill disk I/O for everyone. Ftp shouldn't kill the Ethernet for all
> > users of the network. A program that is dumping stuff to the X server
> > shouldn't kill X performance for all the other programs.
>
> Linux covers the tar above quite well, but I guess from your tone that
> you have IDE disks.

Actually, I have both IDE and SCSI disks, and I think it is a problem
on both. The problem with tar is that it can easily create several
read/write requests on the disk without blocking (data blocks,
directory entries, inodes, modification times, etc.) This lets it
saturate the data bus and keeps other processes that issue just a
few I/O operations from completing quickly. Something like the
sketch below would address it.
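
This is only a rough user-space model of the idea, not real kernel
code, and all the names are made up for illustration: requests are
tagged with the owning process, and the queue serves the process
with the fewest requests outstanding instead of serving strictly
FIFO.

    /* Sketch only: fair dequeuing of disk requests, modeled in
     * user space.  Instead of FIFO, serve the process with the
     * fewest requests already queued, so one read from an
     * interactive process is not stuck behind a tar that has
     * queued dozens.  Hypothetical names, not kernel interfaces.
     */
    #include <stdio.h>

    #define MAXREQ 64

    struct request { int pid; long block; };

    static struct request queue[MAXREQ];
    static int nreq;

    static void enqueue(int pid, long block)
    {
        if (nreq < MAXREQ) {
            queue[nreq].pid = pid;
            queue[nreq].block = block;
            nreq++;
        }
    }

    /* how many queued requests belong to this process? */
    static int load(int pid)
    {
        int i, n = 0;
        for (i = 0; i < nreq; i++)
            if (queue[i].pid == pid)
                n++;
        return n;
    }

    /* serve the lightest-loaded process; ties keep FIFO order.
     * Caller must make sure the queue is not empty. */
    static struct request dequeue(void)
    {
        struct request r;
        int i, best = 0;

        for (i = 1; i < nreq; i++)
            if (load(queue[i].pid) < load(queue[best].pid))
                best = i;
        r = queue[best];
        for (i = best; i < nreq - 1; i++)
            queue[i] = queue[i + 1];
        nreq--;
        return r;
    }

    int main(void)
    {
        int i;
        struct request r;

        for (i = 0; i < 10; i++)
            enqueue(100, (long)i);  /* "tar" floods the queue */
        enqueue(200, 42L);          /* interactive proc, one read */
        r = dequeue();
        printf("served pid %d first\n", r.pid);  /* prints 200 */
        return 0;
    }

An elevator sort on block numbers could still be applied within each
process's requests, so fairness doesn't have to cost extra seeks.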

> The X problem isn't related to the kernel, but
> a feature of the single threaded X server, but as just about all video
> cards only permit a single command queue it is still very difficult to
> keep things moving when one client is dumping data to the screen.

Again, several processes may be blocked waiting on the X server
socket, and the kernel should make sure that a process sending lots
of data gets unblocked last, and maybe gets a small amount of forced
idle time. If no other process has blocked recently, this idle time
could be minimal or zero.
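
As a rough illustration of the unblock-the-big-sender-last idea
(again just a user-space sketch with made-up names, not how the
kernel's wait queues actually work), the wakeup choice could look at
how much each blocked writer has sent recently:

    /* Sketch only: when buffer space frees up on a full socket,
     * wake the blocked writer that has sent the least recently,
     * so the client dumping megabytes to the X server goes last.
     * Hypothetical names; not a real kernel interface.
     */
    #include <stdio.h>

    struct waiter { int pid; long bytes_recent; };

    static int pick_waiter(struct waiter *w, int n)
    {
        int i, best = 0;
        for (i = 1; i < n; i++)
            if (w[i].bytes_recent < w[best].bytes_recent)
                best = i;
        return best;
    }

    int main(void)
    {
        struct waiter w[3];

        w[0].pid = 101; w[0].bytes_recent = 4000000L; /* image dump */
        w[1].pid = 102; w[1].bytes_recent = 2000L;    /* xterm */
        w[2].pid = 103; w[2].bytes_recent = 15000L;   /* small app */

        printf("wake pid %d first\n", w[pick_waiter(w, 3)].pid);
        /* prints 102 */
        return 0;
    }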

> All nice ideas, but sometimes it is the hardware that lets you down.

The CPU and memory are also single-threaded pieces of hardware, and
people have found good solutions for sharing them.

Managing a queue and prioritizing I/O requests is harder because a
single request takes so long to finish. The processor can be
interrupted quickly, and it is not hard to take a memory page away
from a process. On the other hand, _because_ I/O requests take so
long, prioritizing them can make a big difference in how quick a
system "feels".
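
To put rough numbers on it: a disk request costs on the order of
10 ms once you count seek and rotational delay, while a context
switch costs on the order of microseconds. So if a tar has twenty
requests queued ahead of an interactive process's single read,
strict FIFO makes that read wait roughly 20 * 10 ms = 200 ms, which
is plenty long enough to feel at the shell.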

Yes, this prioritizing can cause an overall decrease in throughput,
but then, so do forced context switches. (You have to reload caches
and reschedule things, instead of just letting a process continue
until it has to do some I/O.)

No, I don't think Linux handles things poorly now, or that this has
to be the highest priority project or anything like that. I do think
there are a lot of related problems that _might_ be handled by a
kernel that monitors I/O requests instead of just using FIFO
queuing.

-wayne