Re: Filesize limitation

Clayton Weaver (cgweav@eskimo.com)
Tue, 4 Nov 1997 23:00:41 -0800 (PST)


On Tue, 4 Nov 1997, Kristian Koehntopp wrote:

> In netuse.lists.linux-kernel you write:
> >What's gcc's maximum array dimension? INT_MAX? So I can make a virtual
> >file that indexes an array of "file segments" (real files) of size up to
> >LONG_MAX (off_t)?
>
> Correct. And you can work around the 640 KB memory limit in
> MS-DOS by using a proper compiler and creating an overlay file.
>
> Both are braindead approaches, curing symptoms of a problem, but
> not removing the cause. The 640 KB MS-DOS problem is properly
> cured by installing Linux. The 2 GB file size limit is properly
> cured by removing this limit in the kernel source.
>
> Kristian

You mean change off_t to long long, so it is 64-bit? Is that
POSIX-compliant? My point was simply that you can scale a file beyond
the byte-width limits of ssize_t and off_t, whatever those happen to be.

A modulo, an array lookup for the filename, and a subtraction don't
strike me as an exceptionally high price to pay for letting the libc
sizes keep working until they are enlarged through a standards-compliant
process. I think the whole system is broken as designed when the max and
min for file offsets differ from those for size_t, but that's not a
Linux invention, it's an industry standard.
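Roughly the bookkeeping I have in mind (a sketch only, with made-up
names: SEG_SIZE, seg_names[], and locate() are placeholders, not code
from any real program):

#include <sys/types.h>

#define SEG_SIZE 0x40000000LL           /* 1 GB per real file, safely under 2 GB */

static const char *seg_names[] = {      /* array lookup for the filename */
    "bigfile.000", "bigfile.001", "bigfile.002", "bigfile.003"
};

struct seg_pos {
    const char *name;                   /* which real file to open */
    off_t       off;                    /* where to lseek() inside it */
};

static struct seg_pos locate(long long virt_off)
{
    struct seg_pos p;
    long long idx = virt_off / SEG_SIZE;          /* which segment (no bounds check here) */

    p.name = seg_names[idx];
    p.off  = (off_t)(virt_off - idx * SEG_SIZE);  /* the subtraction (same as virt_off % SEG_SIZE) */
    return p;
}

Whether the segments are 1 GB each or just under LONG_MAX makes no
difference; the point is that every real file stays inside what off_t
can address today, while the "virtual" offset can be as wide as you
care to make it.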

Just how many files bigger than 2 GB do you have, anyway? Should we
break the vast majority of portable, PD, or GPL C code for that vast
number of users who work with files bigger than 2 GB?

It's just a hack. Feel free to modify the sizes in your own kernel and
libc (and whatever else) instead, if that seems more convenient.

Regards, Clayton Weaver cgweav@eskimo.com (Seattle)