File preallocation

Thomas König (Thomas.Koenig@ciw.uni-karlsruhe.de)
Thu, 2 May 1996 13:24:29 +0200 (MET DST)


Would it be useful to preallocate larger chunks of a file?

If two or more files in the same directory are written to at the
same time, for example with

for i in 1 2 3 4
do
	dd if=/dev/zero of=/mnt/large-$i bs=8k count=8000 &
done

then the blocks of the files will be interleaved on disk, making
subsequent sequential reads slower.

The current preallocation strategy uses a fixed maximum of 8k. This
could be extended to preallocate a fraction of the current file size,
clamped between the current 8k minimum and some larger maximum (for
example 256k).
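
To make the idea concrete, here is a minimal sketch of such a clamped
heuristic. This is not Kristian Koehntopp's actual patch; the 1/4
fraction, the constants and the function name are made up for
illustration only.

	/*
	 * Sketch only: preallocate a fraction of the current file size,
	 * clamped between 8k and 256k.  Fraction and names are assumptions.
	 */
	#include <stdio.h>

	#define PREALLOC_MIN   (8 * 1024)     /* current fixed preallocation */
	#define PREALLOC_MAX   (256 * 1024)   /* proposed upper bound        */
	#define PREALLOC_SHIFT 2              /* preallocate file_size / 4   */

	static unsigned long prealloc_bytes(unsigned long file_size)
	{
		unsigned long want = file_size >> PREALLOC_SHIFT;

		if (want < PREALLOC_MIN)
			want = PREALLOC_MIN;
		if (want > PREALLOC_MAX)
			want = PREALLOC_MAX;
		return want;
	}

	int main(void)
	{
		unsigned long sizes[] = { 0, 64 * 1024, 8 * 1024 * 1024 };
		int i;

		for (i = 0; i < 3; i++)
			printf("file %lu bytes -> preallocate %lu bytes\n",
			       sizes[i], prealloc_bytes(sizes[i]));
		return 0;
	}

With these numbers a small file still gets the old 8k, while a file
that has already grown large gets contiguous chunks of up to 256k, so
concurrent writers fragment each other less.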

Comments?

[This isn't my idea, it's originally by Kristian Koehntopp. He's
also got a patch for it. I thought it might be worth mentioning
before 2.0]

-- 
Thomas Koenig, Thomas.Koenig@ciw.uni-karlsruhe.de, ig25@dkauni2.bitnet.
The joy of engineering is to find a straight line on a double
logarithmic diagram.