Re: tuning for large files in xfs

From: Avi Kivity
Date: Tue May 23 2006 - 04:05:20 EST


fitzboy wrote:
>
>> What's your testing methodology?
>

> Here is my code... pretty simple: it opens the file and reads a random 32k block on each pass (note: doing a 2k read instead doesn't make much of a difference, only a marginal drop from 6.7ms per seek to 6.1ms).

> int main (int argc, char* argv[]) {
>     char buffer[256*1024];
>     int fd = open(argv[1],O_RDONLY);
>     struct stat s;
>     fstat(fd,&s);
>     s.st_size=s.st_size-(s.st_size%256*1024);
>     initTimers();
>     srandom(startSec);
>     long long currentPos;
>     int currentSize=32*1024;
>     int startTime=getTimeMS();
>     for (int currentRead = 0;currentRead<10000;currentRead++) {
>         currentPos=random();
>         currentPos=currentPos*currentSize;
This will overflow. I think that

	currentPos = drand48() * s.st_size;

will give better results.

>         currentPos=currentPos%s.st_size;

I'd suggest aligning currentPos to currentSize; very likely your database does the same. It won't matter much on a single-threaded test, though (a sketch with both changes follows the listing).

>         if (pread(fd,buffer,currentSize,currentPos) != currentSize)
>             std::cout<<"couldn't read in"<<std::endl;
>     }
>     std::cout << "blocksize of "<<currentSize<<"took"<<getTimeMS()-startTime<<" ms"<<std::endl;
> }
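
FWIW, here is a self-contained sketch of what I mean (untested; I've substituted gettimeofday() for your initTimers()/getTimeMS() helpers since I don't have them, and added basic error checks): the offset comes from drand48() and is then aligned down to the read size.

#include <fcntl.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <unistd.h>
#include <iostream>

// millisecond timestamp, standing in for your getTimeMS()
static long long getTimeMS()
{
    struct timeval tv;
    gettimeofday(&tv, 0);
    return (long long)tv.tv_sec * 1000 + tv.tv_usec / 1000;
}

int main(int argc, char* argv[])
{
    const int currentSize = 32 * 1024;
    static char buffer[currentSize];

    if (argc < 2) {
        std::cerr << "usage: " << argv[0] << " <file>" << std::endl;
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    struct stat s;
    if (fd < 0 || fstat(fd, &s) < 0 || s.st_size < currentSize) {
        std::cerr << "couldn't open/stat " << argv[1] << std::endl;
        return 1;
    }

    srand48(getTimeMS());
    long long startTime = getTimeMS();
    for (int currentRead = 0; currentRead < 10000; currentRead++) {
        // random offset in [0, st_size), aligned down to the read size
        long long currentPos = (long long)(drand48() * s.st_size);
        currentPos -= currentPos % currentSize;
        if (currentPos + currentSize > s.st_size)
            currentPos -= currentSize;
        if (pread(fd, buffer, currentSize, currentPos) != currentSize)
            std::cout << "couldn't read in" << std::endl;
    }
    std::cout << "blocksize of " << currentSize << " took "
              << getTimeMS() - startTime << " ms" << std::endl;
    return 0;
}

Compiled with g++ -O2 it should slot into the same setup; total ms divided by 10000 gives the per-seek figure.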

>
>> You can try to measure the amount of seeks going to the disk by using
>> iostat, and see if that matches your test program.
>

> I used iostat and found exactly what I was expecting: 10,000 rounds x 16 (2k blocks per 32k read) x 4 (512-byte sectors per 2k block) = 640,000 reads, which is what iostat told me. So my question remains: if each seek is supposed to average 3.5ms, how come I am seeing an average of 6-7ms?


Sorry, I wasn't specific enough: please run "iostat -x /dev/whatever 1" and look at the 'r/s' (reads per second) field. If that agrees with what your test says, you have a block-layer-or-lower problem; otherwise it's a filesystem problem.
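
As a rough sanity check on the numbers: ~6.5ms per 32k pread works out to roughly 150 reads per second on the application side, so if each pread reaches the disk as a single merged request, 'r/s' should be in the same ballpark. If 'r/s' comes out several times higher, each read is being split into multiple requests on the way down, which points at the filesystem.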

--
error compiling committee.c: too many arguments to function

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/