Re: very large directories

From: Jesse Pollard (pollard@tomcat.admin.navo.hpc.mil)
Date: Tue Feb 29 2000 - 13:07:29 EST


Jan Bobrowski <jb@wizard.ae.krakow.pl>:
>> Every time a file needs to be opened, the dir is opened and the
>> filename must be searched for.
>>
>> Anyone concerned with file access performance should never put
>> that many files in a single dir.
>>
>> Mike A. Harris
>
>That is not true.
>
>If there's enough RAM for the directory cache, the directory size is
>irrelevant. The directory is scanned when a file is first opened, and
>after that the entry is in the fast global cache. The organisation of
>the directory tree and the directory sizes don't affect lookup speed at all.
>
>When you're getting short on memory, or when you're listing a directory,
>the linear ext2 directory structure will slow down the operation, however.

That is only partly right. On the first access the overhead is at its
maximum: ext2 stores directory entries as a flat linear list, so a cold
lookup must scan, on average, half the entries. This type of application
usually opens each file only once, sometimes two or three times. The
overhead on the second and third open is low, but the average overhead is
still very high, because the cache only pays off for repeated opens. The
update time for new file entries stays the same: creating a file still
means a linear pass over the directory.
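To make the cost model concrete, here is a minimal Python sketch (not ext2
or kernel code; the class and names are invented for illustration). It
models a cold lookup as a linear scan over the entry list, the way ext2
lays out a directory on disk, and a warm lookup as a hash-table hit,
standing in for the kernel's dcache:

```python
# Toy model of the lookup costs being discussed (not real ext2 code).
# A cold lookup walks the directory linearly; a warm lookup hits a
# name cache, analogous to the kernel dcache.

class Directory:
    def __init__(self, names):
        self.entries = list(names)   # flat, ext2-style entry list
        self.cache = {}              # stand-in for the dcache

    def lookup(self, name):
        """Return (found, entries_scanned) for one lookup."""
        if name in self.cache:
            return True, 0           # cache hit: no directory scan at all
        for i, entry in enumerate(self.entries):
            if entry == name:
                self.cache[name] = i
                return True, i + 1   # cold hit: scanned i+1 entries
        return False, len(self.entries)  # miss: scanned everything

d = Directory(f"file{i:06d}" for i in range(100_000))
found, cold_cost = d.lookup("file099999")   # cold: scans all 100,000 entries
found, warm_cost = d.lookup("file099999")   # warm: served from the cache
print(cold_cost, warm_cost)
```

If the workload opens each file once, every lookup pays the cold cost, and
the cache never amortizes it, which is exactly the point above.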
-------------------------------------------------------------------------
Jesse I Pollard, II
Email: pollard@navo.hpc.mil

Any opinions expressed are solely my own.

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/



This archive was generated by hypermail 2b29 : Tue Feb 29 2000 - 21:00:22 EST