Re: Filesystem optimization..

Hans-Joachim Baader (hans@grumbeer.inka.de)
Mon, 29 Dec 97 22:00 MET


In article <x74t3szrih.fsf@metal.iinet.net.au> you write:
>While staring dejectedly at a filesystem holding a tad of 3 million
>files, and looking at the 'orrible latency to open a file, it occurred
>to me that with most usage patterns, two things are true:

How many of these files are actually accessed within an hour? Or within
a day?

>
> 1) There are very few inodes that have more than one parent
> (i.e. more than one filename pointing to them)

No; an INN news spool actually contains many hard links, since crossposted
articles are linked into every group's directory. A quick way to check is
sketched below.
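
As a rough check of how common multi-link files really are (my own sketch,
not from the original post; the spool path is an assumption), something
like this walks a tree and counts regular files with more than one name:

#define _XOPEN_SOURCE 500
#include <ftw.h>
#include <sys/stat.h>
#include <stdio.h>

static long multi = 0, total = 0;

static int visit(const char *path, const struct stat *sb,
                 int type, struct FTW *ftwbuf)
{
    (void)path; (void)ftwbuf;
    if (type == FTW_F) {
        total++;
        if (sb->st_nlink > 1)    /* reachable under more than one name */
            multi++;
    }
    return 0;
}

int main(void)
{
    /* /var/spool/news is an assumption; point it at any spool */
    if (nftw("/var/spool/news", visit, 32, FTW_PHYS) == -1) {
        perror("nftw");
        return 1;
    }
    printf("%ld of %ld regular files have link count > 1\n", multi, total);
    return 0;
}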

> 3) On most modern hard disks, the cost of reading 4K as
> opposed to reading 1K is minute (it's dominated by
> seek times etc).

Reading the larger block could actually speed up access. But I think
prefetching is already done anyway, at various levels. A crude way to
measure the difference is sketched below.
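
As a back-of-the-envelope check (my sketch, not from the post; the file
name and read count are assumptions), one can time random 1K reads against
random 4K reads on the same file. On rotating disks the two come out
nearly identical, since the seek dominates:

#define _XOPEN_SOURCE 500
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

/* time 'count' random reads of 'bufsz' bytes each */
static double time_reads(int fd, size_t bufsz, int count, off_t filesize)
{
    char *buf = malloc(bufsz);
    struct timespec t0, t1;
    int i;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < count; i++) {
        off_t off = (off_t)(rand() % (filesize / bufsz)) * bufsz;
        pread(fd, buf, bufsz, off);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    free(buf);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "testfile";  /* assumption */
    int fd = open(path, O_RDONLY);
    off_t size;

    if (fd < 0) { perror(path); return 1; }
    size = lseek(fd, 0, SEEK_END);
    if (size < 4096) { fprintf(stderr, "file too small\n"); return 1; }

    /* run with a cold cache, otherwise you only measure the page cache */
    printf("1K reads: %.2f s\n", time_reads(fd, 1024, 1000, size));
    printf("4K reads: %.2f s\n", time_reads(fd, 4096, 1000, size));
    close(fd);
    return 0;
}

(On older libcs clock_gettime may need -lrt at link time.)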

>Then it occurred to me that if you force aligning inodes on 64 byte
>boundaries, the addressable filesystem goes to 256gig (probably enough
>for now).

No. Terabyte disk arrays already exist, and terabyte disks and files
will be here within a few years. If we ignore this we end up with
a degenerate system like Windoze. (The arithmetic behind the 256 GB
limit is sketched below.)
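
For reference, here is my reading of where the 256gig figure comes from
(an assumption on my part, the post doesn't spell it out): a 32-bit number
used as an index into 64-byte units tops out at 2^32 * 64 bytes = 256 GB,
which a terabyte array already exceeds:

#include <stdio.h>

int main(void)
{
    /* 32-bit "inode address" counting 64-byte units (my assumption
     * about where the 256gig figure comes from) */
    unsigned long long units     = 1ULL << 32;
    unsigned long long unit_size = 64;
    unsigned long long max_bytes = units * unit_size;

    printf("max addressable: %llu bytes = %llu GB\n",
           max_bytes, max_bytes >> 30);
    return 0;
}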

>Comments?? (people dying to implement such a beast? :)

I guess something should be done, but not this way. Several new
filesystems are currently in development; perhaps one of them can
pick up optimizations like these.

hjb

-- 
Veni, Vidi, VISA:
        I came, I saw, I did a little shopping.