Devices/Partitions over 2TB

From: Berkley Shands
Date: Mon Mar 14 2005 - 11:48:27 EST


With a Broadcom BC4852 and suitable SATA drives, it is easy to build
functional devices well in excess of 2TB of raw space. This presents a severe
problem for partitioning tools such as fdisk/cfdisk and the like, because the
kernel partition structure has a 32-bit integer maximum for sector counts. Since
the read_int() function combined with cround() overflows, such a large device
makes life difficult. mkfs.ext3 doesn't care, but there is no way to slice
that space up other than to use the raw device (/dev/sda) whole.
Ever thought about backing up 4TB to tape? :-)
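To make the arithmetic concrete, here is a minimal standalone C sketch of the
problem (not taken from fdisk; the constants assume 512-byte sectors and a
hypothetical 4TB drive):

    /* sector_overflow.c -- illustrative only: shows what happens when a
     * >2TB device's sector count is squeezed into an unsigned 32-bit field. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t bytes     = 4ULL * 1000 * 1000 * 1000 * 1000; /* 4 TB drive */
        uint64_t sectors64 = bytes / 512;                      /* true count */
        uint32_t sectors32 = (uint32_t)sectors64;              /* truncated  */

        printf("true sector count : %llu\n", (unsigned long long)sectors64);
        printf("32-bit field holds: %u\n", sectors32);
        printf("32-bit ceiling    : %llu bytes (~2 TiB)\n",
               (unsigned long long)UINT32_MAX * 512ULL);
        return 0;
    }

The truncated value wraps around, so any tool that trusts the 32-bit count
sees a device much smaller than (and unrelated to) the real one.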
Even with a severely hacked-up fdisk, the 32-bit field is too deeply
hard-coded to make the effort worthwhile. A newer revision of the partitioning
scheme needs to be designed. The kernel must be taught how to deal with large
devices (the filesystems already do this), using at least 64 bits here,
roughly along the lines sketched below.
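For comparison, here is a rough sketch of the field widths involved; these are
hypothetical structures to contrast 32-bit and 64-bit sector counts, not a
proposal for an actual on-disk format:

    /* Hypothetical partition entries, to contrast field widths only. */
    #include <stdint.h>

    struct part_entry_32 {           /* today's limit: 2^32 sectors      */
        uint32_t start_sector;
        uint32_t nr_sectors;         /* * 512 bytes => caps at ~2 TiB    */
    };

    struct part_entry_64 {           /* what a new scheme needs          */
        uint64_t start_sector;
        uint64_t nr_sectors;         /* * 512 bytes => ~8 ZiB ceiling    */
    };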
On a large SAN server I have seen 200-petabyte arrays. While that is excessive
for a local workstation, 4-16 TB is clearly available and being deployed.
I have not found any documentation of efforts to overcome the 2TB partition limit,
though someone should be thinking about this. If there is such an effort, could
someone point me to it? If not, is there any interest in starting such a project?

thanks for the bandwidth.

berkley@xxxxxxxxxxxx