Re: Large disk partition over 300GB

Stephen C. Tweedie (sct@redhat.com)
Thu, 15 Jul 1999 13:01:07 +0100 (BST)


Hi,

On Wed, 14 Jul 1999 22:16:18 -0700, merblich <merblich@gateway.net>
said:

> Normally, a file system will allocate disk blocks based on a
> time-vs-space algorithm. By having blocks free in a number of
> locations, the best location to allocate a disk block, whether for a
> growing fragment or a brand-new block, can be chosen. A couple of
> factors in determining this are interleave, rotation speed, seek
> time, etc.

Not any more. That used to be the case, but nobody makes drives like
that any more. Over the last ten years, large numbers of pipeline
stages have been added to the disk IO path, from track buffers to
on-disk caches to asynchronous SCSI tagged commands to operating
system readahead buffers. The _only_ things you can effectively
optimise for when allocating data are (a) sequential access, and (b)
approximate proximity of related data. Effective explicit data
placement just cannot be done by the OS any more, and ext2 (being less
than ten years old) has never tried to do so.
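
To make (a) concrete, here is a small illustrative sketch (not kernel
code; the chunk size, read count and argument handling are arbitrary
assumptions) that reads a file either front-to-back or at scattered
offsets.  Timing the two with "time" on a file larger than 64MB shows
how much every layer of that pipeline rewards the sequential pattern:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

#define CHUNK   (64 * 1024)
#define NREADS  1024                    /* ~64MB of data either way */

int main(int argc, char **argv)
{
        static char buf[CHUNK];
        off_t size, where;
        int i, fd;

        if (argc != 3) {
                fprintf(stderr, "usage: %s <file> seq|rand\n", argv[0]);
                return 1;
        }
        fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        size = lseek(fd, 0, SEEK_END);
        if (size < (off_t) CHUNK * 2) {
                fprintf(stderr, "file too small\n");
                return 1;
        }
        lseek(fd, 0, SEEK_SET);

        srand(42);
        for (i = 0; i < NREADS; i++) {
                if (strcmp(argv[2], "rand") == 0) {
                        /* jump to a random chunk-aligned offset */
                        where = ((off_t) rand() % (size / CHUNK)) * CHUNK;
                        lseek(fd, where, SEEK_SET);
                }
                /* the sequential case just keeps reading from here */
                if (read(fd, buf, CHUNK) < 0) {
                        perror("read");
                        return 1;
                }
        }
        close(fd);
        return 0;
}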

> Once the file system's free space decreases to a specified point,
> the allocator then just takes blocks wherever they are available.
> My guess is that 5% is the point where this tradeoff is made.

Not at all. Ext2 always uses the same algorithm. It just becomes less
effective if there is less free space. The exact rate at which things
fall off obviously depends on the access patterns, which is one reason
why the reserved space region is configurable.
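
(The reserved fraction itself can be changed; -m is the relevant
option to mke2fs and tune2fs.)  As a rough illustration of the "same
algorithm, it just degrades" point, here is a toy sketch of the
general idea, not the real code in fs/ext2/balloc.c; the bitmap and
block counts are invented for the example.  The allocator always tries
to place a new block at a goal near the file's existing blocks and
only widens its search as needed, so with plenty of free space the
first probe usually hits and locality stays good, while on a nearly
full disk the fallback scan wins more and more often:

#include <stdio.h>

#define NBLOCKS 8192                    /* toy "disk": one flag per block */

static char used[NBLOCKS];

/* always the same policy: prefer a block as close to 'goal' as possible */
static long alloc_near(long goal)
{
        long i;

        if (goal < 0 || goal >= NBLOCKS)        /* clamp a silly goal */
                goal = 0;
        if (!used[goal]) {                      /* best case: the goal itself */
                used[goal] = 1;
                return goal;
        }
        for (i = 1; i < NBLOCKS; i++) {         /* widen the search outwards */
                if (goal + i < NBLOCKS && !used[goal + i]) {
                        used[goal + i] = 1;
                        return goal + i;
                }
                if (goal - i >= 0 && !used[goal - i]) {
                        used[goal - i] = 1;
                        return goal - i;
                }
        }
        return -1;                              /* disk genuinely full */
}

int main(void)
{
        long blk, prev = 1000;
        int i;

        /* append 16 blocks to a file whose last block was number 1000 */
        for (i = 0; i < 16; i++) {
                blk = alloc_near(prev + 1);
                if (blk < 0)
                        break;
                printf("allocated block %ld\n", blk);
                prev = blk;
        }
        return 0;
}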

--Stephen

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/