Re: Benchmark results for elv_test

From: Robert Cohen (robert@apex.net.au)
Date: Fri Oct 06 2000 - 05:18:57 EST


Your point is taken to a certain extent.
It's true that the files here are not necessarily going to be laid out
sequentially on disk.
However, they will be laid out far enough apart to cause some seeking,
which will put load on the elevator.

And even if this program isn't putting incredible stress on the
elevator, it is a simple program doing simple file operations, and the
kernel should be able to handle it efficiently, which current 2.4.0
kernels aren't doing.

You're right about the reads being synchronous, but the writes shouldn't
be synchronous: I don't do the fsync()s until all the files are written.
So this should at least be testing the write elevator, which is where the
performance problem is showing.
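
To make that concrete, the structure is roughly the following (the file
count, chunk size, and interleaving here are illustrative, not the actual
elv_test parameters):

/* Sketch of the write pattern: interleave buffered writes across
 * several files, then only fsync once everything has been written. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NFILES 4
#define CHUNK  (64 * 1024)
#define CHUNKS_PER_FILE 16

int main(void)
{
    int fd[NFILES];
    char buf[CHUNK];
    char name[64];
    int i, c;

    memset(buf, 'x', sizeof(buf));
    for (i = 0; i < NFILES; i++) {
        snprintf(name, sizeof(name), "testfile.%d", i);
        fd[i] = open(name, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd[i] < 0) { perror("open"); exit(1); }
    }

    /* Interleave buffered writes across the files; nothing forces
     * them to disk yet, so the elevator is free to reorder. */
    for (c = 0; c < CHUNKS_PER_FILE; c++)
        for (i = 0; i < NFILES; i++)
            if (write(fd[i], buf, CHUNK) != CHUNK) { perror("write"); exit(1); }

    /* Only now flush everything; this is where the write elevator
     * gets a large backlog of dirty blocks to schedule. */
    for (i = 0; i < NFILES; i++) {
        if (fsync(fd[i]) < 0) { perror("fsync"); exit(1); }
        close(fd[i]);
    }
    return 0;
}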

This test was deliberately written as a single process so that the order
of operations is controlled.
If I use multiple threads, each working on a separate file, then each
thread tends to get an entire timeslice, so many operations on one file
are done sequentially, which defeats the purpose.
I might try a version using async I/O calls.
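
If I do, it would probably look something like this POSIX AIO sketch
(<aio.h>, link with -lrt; the request count and offsets are made up for
illustration). Keeping several writes in flight from a single process
should give the elevator a queue to sort:

#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NREQ  8
#define CHUNK (64 * 1024)

int main(void)
{
    struct aiocb cb[NREQ];
    static char buf[NREQ][CHUNK];
    int fd, i;

    fd = open("testfile", O_WRONLY | O_CREAT, 0644);
    if (fd < 0) { perror("open"); exit(1); }

    /* Queue all the requests before waiting on any of them. */
    memset(cb, 0, sizeof(cb));
    for (i = 0; i < NREQ; i++) {
        memset(buf[i], 'x', CHUNK);
        cb[i].aio_fildes = fd;
        cb[i].aio_buf    = buf[i];
        cb[i].aio_nbytes = CHUNK;
        /* Scattered offsets, so the requests aren't trivially ordered. */
        cb[i].aio_offset = (off_t)((i * 7) % NREQ) * CHUNK;
        if (aio_write(&cb[i]) < 0) { perror("aio_write"); exit(1); }
    }

    /* Block until every request completes. */
    for (i = 0; i < NREQ; i++) {
        const struct aiocb *one[1] = { &cb[i] };
        while (aio_error(&cb[i]) == EINPROGRESS)
            aio_suspend(one, 1, NULL);
        if (aio_return(&cb[i]) != CHUNK) {
            fprintf(stderr, "aio_write %d failed\n", i);
            exit(1);
        }
    }

    close(fd);
    return 0;
}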

I wanted to write it using standard I/O paths as much as possible. If I
use esoteric technologies like the NWFS stuff, then it's not clear
whether any performance problems found are in the kernel or in the
unusual libraries used.

Robert Cohen
Australian National University

J. Robert von Behren (jrvb@cs.berkeley.edu) wrote:

>Looking over your test program, I don't think you are actually testing
>the elevator algorithm at all. There are a couple of key flaws:

> * The reads and writes are synchronous, so the elevator algorithm
> at _most_ gets to affect the blocks within a single read or
> write (ie - inside the same file).

> * The fact that you have written the files out all at once will not
> place all 240 megs of data consecutively on the disk. The file
> system spreads out data on disk to allow breathing room for
> adding new files, or appending to existing files. In particular,
> this means that although large subsections of blocks that are
> adjacent in the logical file _will_ be close by on disk, you cannot
> generalize from this that these large subsections will be close to
> each other - either within a single file, or between the files
> created by the test program.

> * Unless the partition is completely empty, existing file data will
> affect where new data is placed (and in particular how well
> co-located it can be).

>A better approach to testing the elevator algorithm would be to write
>directly to a raw device, instead of going through the file system.
>This would allow you better control over actual disk placement of
>blocks, and let you know what you were testing. It would also allow you
>to make sure that subsequent re-tests would be repeatable, as they
>shouldn't be affected by existing data. Finally, to do this test right,
>you need to be able to issue multiple IO requests to the disk at
>essentially the same time. To do this, you'll probably need to go
>multithreaded, and use a barrier of some sort to make sure the threads
>stay synchronized.
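
For what it's worth, the multithreaded raw-device test being suggested
might look roughly like the sketch below. The device path, thread count,
and region size are placeholders, and writing to a device node destroys
whatever is on it, so this should only ever be pointed at a scratch
partition. Compile with -lpthread.

#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NTHREADS 4
#define CHUNK    (64 * 1024)
#define REGION   (16 * 1024 * 1024)

static int fd;
static pthread_barrier_t barrier;

static void *writer(void *arg)
{
    long id = (long)arg;
    char buf[CHUNK];
    off_t off;

    memset(buf, 'x', sizeof(buf));
    /* Wait until every thread is ready, so the requests hit the
     * queue at essentially the same time. */
    pthread_barrier_wait(&barrier);
    for (off = 0; off < REGION; off += CHUNK)
        if (pwrite(fd, buf, CHUNK, (off_t)id * REGION + off) != CHUNK) {
            perror("pwrite");
            exit(1);
        }
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    long i;

    /* Placeholder device name: use a disposable partition here. */
    fd = open("/dev/sdX", O_WRONLY);
    if (fd < 0) { perror("open"); exit(1); }

    pthread_barrier_init(&barrier, NULL, NTHREADS);
    for (i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, writer, (void *)i);
    for (i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);

    fsync(fd);
    close(fd);
    return 0;
}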
