On Mon, Mar 30, 2009 at 7:33 PM, Vladislav Bolkhovitin <vst@xxxxxxxx> wrote:
As part of 1.0.1 release preparations I made some performance tests to make
sure there are no performance regressions in SCST overall and iSCSI-SCST
particularly. Results were quite interesting, so I decided to publish them
together with the corresponding numbers for IET and STGT iSCSI targets. This
isn't a real performance comparison; it includes only a few chosen tests,
because I don't have time for a complete comparison. But I hope somebody
will take up what I did and make it complete.
Setup:
Target: HT 2.4GHz Xeon, x86_32, 2GB of memory limited to 256MB by the kernel
command line to reduce the test data footprint, 75GB 15K RPM SCSI disk as
backstorage, dual port 1Gbps Intel E1000 network card, 2.6.29 kernel.
Initiator: 1.7GHz Xeon, x86_32, 1GB of memory limited to 256MB by the kernel
command line to reduce the test data footprint, dual port 1Gbps Intel E1000
network card, 2.6.27 kernel, open-iscsi 2.0-870-rc3.
The target exported a 5GB file on XFS for FILEIO and a 5GB partition for
BLOCKIO.
All the tests were run 3 times and the average recorded. All the values are in
MB/s. The tests were run with the CFQ and deadline I/O schedulers on the target.
All other parameters on both target and initiator were default.
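For concreteness, the "run three times and average" procedure described above might be scripted roughly as follows. The scratch-file name, transfer size, and the use of wall-clock time rather than dd's own MB/s figure are my assumptions for illustration, not details from the original tests:

```shell
#!/bin/sh
# Rough sketch of "run 3 times and average" -- file name and sizes are
# illustrative, not the original 5GB FILEIO/BLOCKIO setup.
# On the target, the I/O scheduler would be selected with something like
# (as root; 'sdb' is a hypothetical device name):
#   echo deadline > /sys/block/sdb/queue/scheduler
f=./bench_scratch.bin
total=0
for i in 1 2 3; do
    start=$(date +%s%N)                 # nanoseconds since epoch (GNU date)
    dd if=/dev/zero of="$f" bs=1M count=8 conv=fsync 2>/dev/null
    end=$(date +%s%N)
    total=$(( total + (end - start) / 1000000 ))
done
echo "average ms per run: $(( total / 3 ))"
rm -f "$f"
```

A real run would of course transfer far more data than fits in the target's 256MB of RAM, which is the point of the memory limit in the quoted setup.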
These are indeed interesting results. There are some aspects of the
test setup I do not understand, however:
* All tests have been run with buffered I/O instead of direct I/O
(iflag=direct / oflag=direct). My experience is that the results of
tests with direct I/O are easier to reproduce (less variation between
runs). So I have been wondering why the tests have been run with
buffered I/O instead?
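To illustrate the distinction being asked about (the file name and sizes here are examples, not the original test parameters):

```shell
# Buffered vs direct I/O with dd -- a sketch; names and sizes are examples.
f=./direct_io_demo.bin
# Buffered write: data goes through the page cache, so short runs can report
# cache speed rather than media speed.
dd if=/dev/zero of="$f" bs=1M count=4 2>/dev/null
# Direct I/O variants bypass the page cache (O_DIRECT); bs must be aligned
# to the underlying block size, and not every filesystem supports O_DIRECT:
#   dd if=/dev/zero of="$f" bs=1M count=4 oflag=direct
#   dd if="$f" of=/dev/null bs=1M count=4 iflag=direct
stat -c %s "$f"                         # prints 4194304
```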
* It is well known that having more memory in the target system
improves performance because of read and write caching. What did you
want to demonstrate by limiting the memory of the target system?
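For reference, the memory cap mentioned in the quoted setup is normally done with the mem= kernel parameter; the boot line below is illustrative, with only mem=256M reflecting the quoted configuration:

```
# Illustrative boot entry; only mem=256M is taken from the quoted setup.
kernel /vmlinuz-2.6.29 root=/dev/sda1 ro mem=256M
```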
* Which SCST options were enabled on the target? Was, e.g., the
NV_CACHE option enabled?