I'm also really looking forward to your bandwidth vs. seek time
optimization option for the driver. Keep up the good work!
Thanks!
MOLNAR Ingo <mingo@chiara.csoma.elte.hu> writes:
> On 14 Dec 1998, Camm Maguire wrote:
>
> > Greetings! I'm running 2.1.130, SMP, 2 PII 350, 128 MB RAM, Adaptec
> > 7890 U2 wide, 2 Seagate ST39173LW "Barracuda" drives. I patched the
> > driver slightly so that the 80 MByte/s transfer speed is correctly
> > negotiated. (I think you might have posted the patch?)
> >
> > In any case, I'm not getting anything like these numbers. Basically,
> > I don't see any noticeable improvement with RAID, except maybe for
> > seeks. [...]
>
> as seeks are about the worst thing about hard disks, the current RAID1
> code does read-balancing to improve seeks, not bandwidth. Here are some
> numbers to show how this works out. the test system is a sufficiently
> beefy SMP PII box, with 4 identical, moderate-speed UW SCSI disks:
> 7.7MB/sec read, 8.0MB/sec write performance.
>
> single-disk RAID1 (which is basically identical to a RAID-less filesystem)
> performance with a 500MB Bonnie run (RAM 128M):
>
> -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
> MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
> 500 6554 98.6 8019 7.6 3525 8.8 6964 95.6 7707 6.7 94.9 1.0
>
> shows what we already knew: 8.0MB/sec write speed, 7.7MB/sec read speed,
> and 10.5 msec average seek time.
>
> a 4-way RAID1 array spanned over the 4 disks gives:
>
> -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
> MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
> 500 3751 70.5 7705 7.9 3931 12.1 6072 83.4 11848 12.0 175.6 2.9
>
> 7.7MB/sec write speed, i.e. slight write degradation (we are starting
> to get SCSI-bus bound: all 4 disks are on a single 40MB/sec interface
> for this test, and 4 x 7.7MB/sec is 30.8MB/sec of used bandwidth).
> 11.8MB/sec read speed, which is about a 50% bandwidth improvement. But
> seeks are much better: 5.7 msec.
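The bus-saturation arithmetic in the parenthesis can be sketched quickly, with the per-disk and bus speeds taken from the runs above:

```python
# Sketch of the bus-saturation arithmetic: four disks reading at full
# speed share one UW SCSI bus.
PER_DISK_READ_MB_S = 7.7   # per-disk sequential read speed (Bonnie)
BUS_MB_S = 40.0            # single UW SCSI interface shared by all disks
N_DISKS = 4

demand = N_DISKS * PER_DISK_READ_MB_S
print(f"aggregate demand: {demand:.1f} MB/s on a {BUS_MB_S:.0f} MB/s bus")
print(f"bus utilization:  {demand / BUS_MB_S:.0%}")
```

At 77% nominal utilization, SCSI command overhead leaves little headroom, which is consistent with the slight write degradation seen in the 4-way numbers.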
>
> the statistical model for the current RAID1 read-balancing code goes
> like this: average seek time is SEEK0/sqrt(N), where SEEK0 is the
> single-disk seek time and N is the number of mirrors. (There are ideas
> floating around to make the formula ~SEEK0/N; these will soon be
> incorporated into the RAID1 design.)
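The two models can be tabulated side by side. A minimal sketch, taking SEEK0 = 10.5 msec from the single-disk Bonnie run above:

```python
import math

# Sketch of the two seek models: the current SEEK0/sqrt(N) read-balancing
# behaviour, and the proposed ~SEEK0/N variant.
SEEK0_MS = 10.5  # single-disk average seek time from the Bonnie run

def seek_current(n_mirrors):
    """Current RAID1 read-balancing: seek time shrinks with sqrt(N)."""
    return SEEK0_MS / math.sqrt(n_mirrors)

def seek_proposed(n_mirrors):
    """Proposed balancing: seek time shrinks linearly with N."""
    return SEEK0_MS / n_mirrors

for n in (1, 2, 4, 8):
    print(f"N={n}: current {seek_current(n):.2f} ms, "
          f"proposed {seek_proposed(n):.2f} ms")
```

For N=4 the current model predicts 5.25 msec, close to the 5.7 msec actually measured in the 4-way run.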
>
> RAID0 on the same 4-disk array gives 7.5 msec seek times for the
> 'combined' disk, and there is no way we can improve this.
>
> note that no amount of money can buy better seek performance, except if
> you buy 1TB worth of RAM or solid-state disks (~$2.5M). If you use 4
> high-end 6msec disks, you will soon be able to see 1-2 msec seek
> performance (!), which is worth having for some applications. This is 4
> times the cost for the data area, but still much cheaper than pure RAM
> if you go into terabytes.
> With the RAID driver you can scale 'N' to whatever value you wish. The
> current practical limit is 10-12 disks, but it's not really architectural.
> The current factor between RAM and disk costs is about 1:30-1:50, so you
> obviously do not want to run a 30-way RAID1 array, but a 10-way RAID1
> array is still a factor of 3 to 5 cheaper than RAM.
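The cost argument reduces to one division: an N-way mirror pays for N copies on disk, RAM pays once. A sketch, using an assumed midpoint of 40 for the 1:30-1:50 ratio quoted above:

```python
# Sketch of the RAM-vs-mirror cost argument.  The 1:30-1:50 range is from
# the mail; 40 is an assumed midpoint for illustration.
RAM_TO_DISK_COST_RATIO = 40.0

def mirror_cost_advantage(n_mirrors, ratio=RAM_TO_DISK_COST_RATIO):
    """Factor by which an N-way RAID1 array is cheaper than holding the
    same capacity in RAM: N disk copies versus one RAM copy."""
    return ratio / n_mirrors

print(mirror_cost_advantage(10))  # 10-way mirror: 4x cheaper than RAM
print(mirror_cost_advantage(30))  # 30-way mirror: advantage nearly gone
```

With the ratio at the 1:30 end the 10-way advantage drops to 3x, and at 1:50 it rises to 5x, matching the "factor of 3 to 5" in the text.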
>
> i'm planning to introduce a new RAID1 layout that is bandwidth-oriented;
> this can be specified in /etc/raidtab at creation time, just like RAID5
> layouts can be specified. The theory, roughly: we can either optimize
> for seek performance or for bandwidth, but not both. There will be two
> (physically incompatible) RAID1 layouts implemented in the RAID driver,
> one optimized fully for bandwidth, one fully for seek performance.
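For context, a RAID1 stanza in /etc/raidtab of that era looks roughly like the sketch below for a 4-way mirror. Note that the commented-out layout line is purely hypothetical: no such keyword exists in the raidtools of this period, it merely illustrates where a creation-time layout choice would plausibly go.

```
# Sketch of an /etc/raidtab stanza for a 4-way RAID1 array.
raiddev                 /dev/md0
    raid-level          1
    nr-raid-disks       4
    nr-spare-disks      0
    persistent-superblock 1
    # raid1-layout      bandwidth    <- hypothetical, not a real keyword
    device              /dev/sda1
    raid-disk           0
    device              /dev/sdb1
    raid-disk           1
    device              /dev/sdc1
    raid-disk           2
    device              /dev/sdd1
    raid-disk           3
```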
>
> i hope this answers your questions.
>
> -- mingo