Re: Filesystem limitations

Clayton Weaver (cgweav@eskimo.com)
Sat, 8 Nov 1997 22:10:57 -0800 (PST)


Ok, all humor aside: indexing the filesegments in the virtual file is

((voff_t) value / VSEG)   /* VSEG = filesegment size */

and the off_t into the filesegment is

((voff_t) value % VSEG)

voff_t is set in a header (non-ANSI on 32-bit, since it needs long long),
courtesy of /usr/src/linux/include/asm/alpha/types.h:

#if ((~0UL) == 0xffffffff) /* true if long is 32-bit */
typedef unsigned long long voff_t;
#else
typedef unsigned long voff_t;
#endif
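
A minimal sketch of that mapping (the helper names vseg_index() and
vseg_offset() are just made up here for illustration; voff_t and VSEG
as above):

#include <sys/types.h>              /* off_t */

typedef unsigned long long voff_t;  /* as selected by the #if above on 32-bit */

#define VSEG ((voff_t) 2147483647)  /* filesegment size, INT_MAX here */

/* which filesegment a virtual offset falls in */
static unsigned long vseg_index(voff_t value)
{
    return (unsigned long) (value / VSEG);
}

/* off_t into that filesegment */
static off_t vseg_offset(voff_t value)
{
    return (off_t) (value % VSEG);
}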

Making a virtual file of segments INT_MAX long is no problem. Writing
custom vfileops() functions for initialization, open(), close(), read(),
write(), fsync(), ftruncate(), flock() is no problem. Even keeping an
in-use bit for the filesegments, in a struct holding a pointer to the
filename and an off_t to the last byte in use, is no problem, so the
last file written to can be truncated to the actual bytes used and grow
dynamically (just set the msb of that off_t to 1 if the filesegment is
not in use, 0 if it is, and mask it away before using it; off_t is
signed, but file size has unsigned semantics, so the msb should never be
1 if the filesegment has data).
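
A rough sketch of that per-filesegment bookkeeping (struct, macro, and
function names are invented here for illustration, and the msb trick
assumes a two's-complement off_t, as the description above does):

#include <sys/types.h>
#include <limits.h>

/* msb of off_t doubles as the "not in use" flag; relies on the
   two's-complement representation the trick above assumes */
#define VSEG_FREE_BIT ((off_t) (1ULL << (sizeof(off_t) * CHAR_BIT - 1)))

struct vseg_ent {
    char  *name;   /* filename of this filesegment on disk */
    off_t  last;   /* last byte in use; msb set => segment not in use */
};

/* mark a filesegment as not in use */
static void vseg_free(struct vseg_ent *e)
{
    e->last |= VSEG_FREE_BIT;
}

/* 1 if the filesegment holds data, 0 otherwise */
static int vseg_inuse(const struct vseg_ent *e)
{
    return (e->last & VSEG_FREE_BIT) == 0;
}

/* last byte in use, with the flag bit masked away */
static off_t vseg_last(const struct vseg_ent *e)
{
    return e->last & ~VSEG_FREE_BIT;
}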

But if your vfile_t is 256 filesegments of INT_MAX bytes, for example, then
your filesystem needs 512gb of disk space. The question shifts to what are the
partition limits, struct stat limits, inode limits, etc ad infinitum. How
much space can you accommodate in what looks to vfs like one virtual
filesystem?
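
For the arithmetic behind that figure (a back-of-the-envelope check
only):

  256 * (2^31 - 1) bytes = 549,755,813,632 bytes, i.e. just under
  512gb (2^39 bytes), before any filesystem metadata overhead.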

None of this fixes someone's existing binary that didn't plan for more
than 2gb of data in a file by breaking its storage up into modular
chunks guaranteed not to overflow external i/o size defines. That really
does need a filesystem without 32-bit limits at the kernel interface,
since the user may not have the source for the program. And it still
might crash even with a 64-bit filesystem and kernel, if the calls coded
into the program have 32-bit type dependencies, i.e. the types for the
access functions compiled into the binary are only 32 bits wide, because
the authors didn't anticipate using that program on files that grow
beyond 2gb.

Regards, Clayton Weaver cgweav@eskimo.com (Seattle)