Sunday, September 02, 2001, 11:21:37 PM, Bob McElrath wrote:
BM> I would like to take an extremely large file (multi-gigabyte) and edit
BM> it by removing a chunk out of the middle. This is easy enough by
BM> reading in the entire file and spitting it back out again, but it's
BM> hardly efficient to read in an 8GB file just to remove a 100MB segment.
BM> Is there another way to do this?
BM> Is it possible to modify the inode structure of the underlying
BM> filesystem to free blocks in the middle? (What to do with the half-full
BM> blocks that are left?) Has anyone written a tool to do something like
BM> this?
BM> Is there a way to do this in a filesystem-independent manner?
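In a filesystem-independent way, the best one can do today is shift the tail of the file down over the hole and truncate, which reads and rewrites only the data after the removed segment rather than the whole file. A minimal Python sketch of that approach (the function name `remove_range` is just an illustration, not an existing tool):

```python
import os

def remove_range(path, offset, length, bufsize=1 << 20):
    """Remove `length` bytes starting at `offset` by shifting the
    tail of the file down over the hole, then truncating.

    Only the tail *after* the hole is read and rewritten, so
    removing 100MB near the end of an 8GB file touches far less
    than 8GB of data.  Removing a chunk near the start still
    rewrites almost everything -- that is the limit of a
    filesystem-independent approach.
    """
    fd = os.open(path, os.O_RDWR)
    try:
        size = os.fstat(fd).st_size
        src = offset + length   # first byte after the hole
        dst = offset            # where that byte should end up
        while src < size:
            buf = os.pread(fd, bufsize, src)
            os.pwrite(fd, buf, dst)
            src += len(buf)
            dst += len(buf)
        os.ftruncate(fd, size - length)
    finally:
        os.close(fd)
```

Freeing the blocks in place, as asked, would require filesystem support for punching out a block range and renumbering the rest, which no generic interface offered at the time.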
A COW fs would be far more useful and cool: a fs where copying a file
does not duplicate all of its blocks. Blocks get copied-on-write only
when the copy is written to. There could even be a fs compressor
which looks for and merges blocks with exactly the same contents from
different files.
Maybe ext2/3 folks will play with this idea after ext3?
I'm planning to write a test program which will scan my ext2 fs and
report how many duplicate blocks with the same contents it sees (i.e.
how many blocks I would save with a COW fs).
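Such a scanner can be approximated in userspace by hashing every block-sized chunk of every regular file and counting repeats. A sketch under some assumptions (4KiB blocks as on a typical ext2 fs, file-level walk rather than raw block-device reads, and hash equality standing in for a byte-by-byte compare; `duplicate_blocks` is a made-up name):

```python
import hashlib
import os
from collections import Counter

BLOCK = 4096  # typical ext2 block size; adjust to your fs

def duplicate_blocks(root):
    """Hash every BLOCK-sized chunk of every regular file under
    `root` and count duplicates.  Returns (total, dupes), where
    `dupes` is how many blocks could be shared with another
    identical block -- an upper bound on COW/dedup savings.

    Hash equality is used as a proxy for content equality; a real
    tool would byte-compare colliding blocks to be safe.
    """
    seen = Counter()
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if os.path.islink(path) or not os.path.isfile(path):
                continue
            try:
                with open(path, "rb") as f:
                    while True:
                        chunk = f.read(BLOCK)
                        if not chunk:
                            break
                        seen[hashlib.sha256(chunk).digest()] += 1
            except OSError:
                continue  # skip unreadable files
    total = sum(seen.values())
    dupes = total - len(seen)
    return total, dupes
```

Note this sees logical file contents, not on-disk blocks, so it misses duplication in metadata and slightly miscounts tail blocks shorter than BLOCK.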
--
Best regards,
VDA
mailto:VDA@port.imtp.ilyichevsk.odessa.ua
http://port.imtp.ilyichevsk.odessa.ua/vda/