Re: 2.4.22pre6aa1

From: Andrea Arcangeli (andrea@suse.de)
Date: Thu Jul 17 2003 - 17:50:02 EST


On Fri, Jul 18, 2003 at 12:30:45AM +0200, Marc-Christian Petersen wrote:
> On Friday 18 July 2003 00:13, Marc-Christian Petersen wrote:
>
> > On Thursday 17 July 2003 12:28, Andrea Arcangeli wrote:
> >
> > Hi Andrea,
> >
> > > Only in 2.4.22pre6aa1: 00_elevator-lowlatency-1
> > > Only in 2.4.22pre6aa1: 00_elevator-read-reservation-axboe-2l-1
> >
> > Hmm, this is now my first day testing out .22-pre6 and .22-pre6aa1 with the
> > new I/O stall fixes. At a first look & feel it's very good, but I've
> > noticed a side effect (if it can be called so):
> >
> > VMware4 Workstation
> > -------------------
> >
> > 2.4.22-pre[6|6aa1]: ~ 1 minute 02 seconds from "Start this virtual machine ..."
> > 2.4.22-pre2:        ~ 30 seconds          from "Start this virtual machine ..."
> >
> > ... to start up Windows 2000 Professional completely.
> >
> > Well, personally I don't care about the slowdown of VMware startup with a
> > VM, but there may be many other slowdowns?!
> hmmm:
>
> 2.4.22-pre[6|6aa1]:
> -------------------
> root@codeman:[/] # dd if=/dev/zero of=/home/largefile bs=16384 count=131072
> 131072+0 records in
> 131072+0 records out
> 2147483648 bytes transferred in 128.765686 seconds (16677453 bytes/sec)
>
> 2.4.22-pre2:
> ------------
> root@codeman:[/] # dd if=/dev/zero of=/home/largefile bs=16384 count=131072
> 131072+0 records in
> 131072+0 records out
> 2147483648 bytes transferred in 98.489331 seconds (21804226 bytes/sec)
>
> both kernels freshly rebooted.

This explains it.

Can you try to change include/linux/blkdev.h like this:

-#define MAX_QUEUE_SECTORS (4 << (20 - 9)) /* 4 mbytes when full sized */
+#define MAX_QUEUE_SECTORS (16 << (20 - 9)) /* 16 mbytes when full sized */

This will raise the queue from 4 to 16M. That is the first (and only) thing
that can explain a drop in performance while doing contiguous I/O.
However, I didn't expect it to make a difference, or at least not such a
relevant one.
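Just to spell out the arithmetic behind that macro (sectors are 512 bytes,
hence the "20 - 9" shift), here is a quick shell check:

    # current limit:   4 << 11 =  8192 sectors * 512 bytes =  4 MB
    echo $(( (4  << (20 - 9)) * 512 / (1024 * 1024) ))    # prints 4
    # proposed limit: 16 << 11 = 32768 sectors * 512 bytes = 16 MB
    echo $(( (16 << (20 - 9)) * 512 / (1024 * 1024) ))    # prints 16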

If this doesn't help at all, it might not be an elevator/blkdev thing.
At least on my machines, contiguous I/O still runs at the same speed.

You were also the only one reporting a loss of performance with
elevator-lowlatency; it could still be the same problem that you saw
at that time.

Last but not least, if it's an elevator/blkdev thing, you must be
able to measure it with reads too, not only with writes. Can you try to
read that file back? (Careful about cache effects if you read it
multiple times and interrupt it; it's best to benchmark reads right after
a fresh mount to be sure.)
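Something like this would do, assuming /home is a separate filesystem you
can unmount (the file name and block size just match your dd run above):

    umount /home
    mount /home
    dd if=/home/largefile of=/dev/null bs=16384
    # compare the reported read throughput between -pre2 and -pre6aa1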

> ext3fs (data=ordered)

Can you try with data=writeback (or ext2), or with hdparm -W1, and see if
you still get the same delta between the two kernels? (Careful with -W1,
as it invalidates the journaling guarantees.)
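For example (the device and mount point below are only guesses, adjust
them to your setup; -W0 restores write-through afterwards if you normally
run with the write cache off):

    # switch /home from data=ordered to data=writeback
    umount /home
    mount -o data=writeback /home

    # or, independently, turn on the drive's write cache
    hdparm -W1 /dev/hda
    # ... rerun the dd test, then restore:
    hdparm -W0 /dev/hda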

thanks,

Andrea


