Re: Filesize limitation

htw6966@htw-dresden.de
Sat, 8 Nov 1997 17:58:40 +0100 (MET)


"Daniel G. Linder" <dlinder@zeus.webcentric.net> wrote:
>kris@koehntopp.de (Kristian Koehntopp) wrote:
>>In netuse.lists.linux-kernel you write:
>>>How difficult would it be to upgrade this to an ext3 filesystem with
>>>32 bits reserved for gid and uid, and 64 bits for length and block count?
>>It is possible, but it would make the on-disk inode slightly
>>larger than 128 bytes; since the inode size gets rounded up to the
>>next power of two (256 bytes), there would be about 100 unused bytes
>>per inode. These bytes could be used otherwise (more direct blocks,
>>for example), which would be a very traditional and evolutionary
>>approach to the problem of large files. Such an ext3 would
>>relate to ext2 as xiafs relates to minixfs: it simply extends
>>the existing structures, but does nothing to improve the
>>infrastructure.
>
>[snip -- discussion of "radical" filesystem ideas by SGI, Microsoft,
>reiserfs, and others.]
>
>>Kristian
>
> I don't have nearly as much knowledge on this as I would like, but my
>gut feeling is that if we are going to re-work/create a filesystem to allow
>64-bit lengths, counts, IDs, etc., why stop at 64 bits? A quick bit of math
>shows that 128 bits would "really" hold us for a while. (2^128 atoms of one
>Angstrom each would fill a cube over 650 meters per side!) For the next two
>years or so, I foresee that the vast majority of Linux users will be able
>to live with the current ext2 filesystem. My dream would be that for the
>year 2000, we could introduce to the world a stable, reliable, tolerant
>128-bit based filesystem.
>
> I have not had enough time to read through Hans Reiser's white paper on
>his ReiserFS (http://idiom.com/~beverly/reiserfs.html) to know whether it is
>extensible to large physical drives, whether it can stripe/RAID drives, etc.
>
I don't have much experience, but according to some of the other mails in
this thread it takes too much time to operate on 128-bit values - so 64 bits
would be just fine. But some years from now (let's say 10 to 20) we will hit
the wall with 64 bits (as we do now with 32). So why not allocate the space
now and _not_ use it yet? Something like:
long long i_length;
long long i_length_reserve;
....
By the time we need 128 bits we can make the change very easily (even I
could do it, and I am currently no kernel hacker).
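To make this a bit more concrete, here is a rough sketch of how I imagine
it - the struct and function names are invented for illustration, not taken
from the real ext2 headers:

/* sketch of the "reserve now, use later" idea - purely illustrative */
struct ext3_inode_sketch {
	unsigned long long i_length;          /* the 64 bits we use today */
	unsigned long long i_length_reserve;  /* kept zero until we need the
	                                         high half of a 128-bit size */
	/* ... the rest of the inode fields ... */
};

/* today's kernel only has to refuse files whose reserved half is non-zero;
   a future kernel treats (i_length_reserve, i_length) as one 128-bit size */
static int length_fits_64bit(const struct ext3_inode_sketch *ino)
{
	return ino->i_length_reserve == 0;
}

The nice thing is that an old filesystem image stays valid: the reserved
field is simply zero there.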

So to say: we should also reserve some space for future features that make
Unix (not only Linux) more comfortable. I am thinking of things like more
than one group per file (which would be great for shared files in big
projects - imagine a source file: group A works on it, group B should be
able to read it, and all others shouldn't access it at all because it is
still in alpha state; so it would be rw- for the owner and A, r-- for B and
--- for everyone else) or a finer granularity in directory permissions
(like on Novell - I know we are talking about _real_ systems here, but this
is the only thing I like about Novell).
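Just to show the kind of structure I am dreaming of (completely
hypothetical, nothing like this exists in ext2 today):

/* hypothetical per-file list of extra (group, permission) pairs,
   in addition to the classic owner/group/other mode bits */
#define EXTRA_GROUPS 4

struct extra_group_perm {
	unsigned int  gid;   /* the additional group           */
	unsigned char perm;  /* its rwx bits: 6 = rw-, 4 = r-- */
};

struct inode_acl_sketch {
	struct extra_group_perm extra[EXTRA_GROUPS];
};

With something like that, the alpha source file above would keep its normal
mode bits at rw------- and simply carry two extra entries: (group A, rw-)
and (group B, r--).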

As an aside: _good_ databases don't need such big files anyway - it is much
more manageable to use small clusters of between 1 MB and 100 MB plus
metafiles; only when the data is handed out to the "outer world" does it
have to become one very large object.
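Roughly what I mean by clusters plus metafiles (the numbers and names are
made up, this is not how any particular database does it):

/* split a logically huge data set into fixed-size cluster files;
   a metafile only needs to remember how many clusters exist */
#define CLUSTER_SIZE (64ULL * 1024 * 1024)   /* e.g. 64 MB per cluster file */

/* map a logical byte position to (cluster file number, offset inside it) */
static void locate(unsigned long long pos,
                   unsigned long long *cluster,
                   unsigned long long *offset)
{
	*cluster = pos / CLUSTER_SIZE;
	*offset  = pos % CLUSTER_SIZE;
}

No single file ever has to grow beyond CLUSTER_SIZE; only an export to the
"outer world" needs to concatenate the clusters into one large stream.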

keep hacking,
Konrad

-------------------------------------
Konrad Rosenbaum
The TeaTime project: a new kind of OO database (under Linux+GPL), see
http://www.htw-dresden.de/~htw6966/teatime.html