Re: Nasty ext2fs bug!

From: Andreas Dilger
Date: Thu Aug 01 2002 - 12:48:56 EST

On Aug 01, 2002 12:54 -0400, wrote:
> Summary: When using lftp with the pget -n option for large files, once the
> file is complete the problem begins. If you try to copy, ftp, or
> pretty much anything that involves reading the file, it is "stuck"
> at a rate of 800KB/s to 1600KB/s.

The problem is obvious - many threads are writing to the same file
at once, and the filesystem cannot do a good job of allocating
blocks for them. When you read the file back, the disk is seeking
like crazy to retrieve the data.
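A toy model of the effect (this is an assumption-laden sketch, not the actual ext2 allocator): if a naive allocator hands out the next free block to whichever writer asks first, round-robin writes from N downloaders leave the file's logical blocks scattered in N-way interleaved stripes, so nearly every logical block starts a new on-disk extent.

```python
# Toy simulation: 4 "pget" threads each own a contiguous logical range
# of one file, but a naive next-free-block allocator serves their
# interleaved writes in arrival order, fragmenting the physical layout.

def simulate(num_threads, blocks_per_thread):
    total = num_threads * blocks_per_thread
    mapping = [None] * total      # logical block -> physical block
    next_free = 0
    # Round-robin approximates concurrent writers hitting the allocator.
    for round_ in range(blocks_per_thread):
        for t in range(num_threads):
            logical = t * blocks_per_thread + round_
            mapping[logical] = next_free
            next_free += 1
    return mapping

def count_extents(mapping):
    """Count runs of physically contiguous blocks."""
    extents = 1
    for prev, cur in zip(mapping, mapping[1:]):
        if cur != prev + 1:
            extents += 1
    return extents

frag = simulate(4, 1000)
print(count_extents(frag))        # 4000 - every block is its own extent
seq = list(range(4000))           # what a single sequential writer gets
print(count_extents(seq))         # 1
```

With 4 writers, consecutive logical blocks land 4 physical blocks apart, so the 4000-block file degenerates into 4000 single-block extents - one seek per block on read-back.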

It would be possible, even desirable, to have the block allocation
algorithm try to keep enough empty space on the disk for sparsely
written files, but this is a rather uncommon usage.
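One way such an allocator could work (a hypothetical sketch, not anything ext2 actually implements) is to reserve a contiguous region per writer, so that even interleaved writes stay contiguous within each writer's range:

```python
# Hypothetical reservation-based allocator: each writer gets its own
# reserved contiguous region, so concurrent writers don't interleave.

class ReservingAllocator:
    def __init__(self, region_size):
        self.region_size = region_size
        self.next_region = 0
        self.cursor = {}          # writer id -> next physical block

    def alloc(self, writer):
        if writer not in self.cursor:
            # First write from this writer: reserve a fresh region.
            self.cursor[writer] = self.next_region
            self.next_region += self.region_size
        block = self.cursor[writer]
        self.cursor[writer] += 1
        return block

alloc = ReservingAllocator(region_size=1000)
layout = {t: [] for t in range(4)}
for _ in range(3):                # writes arrive interleaved
    for t in range(4):
        layout[t].append(alloc.alloc(t))

print(layout[0])  # [0, 1, 2]
print(layout[1])  # [1000, 1001, 1002]
```

The cost is the one the paragraph above alludes to: reserved space sits empty until (and unless) the sparse regions are filled in, which only pays off for this fairly uncommon write pattern.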

If you copy from this fragmented file to a new file, then the new file
is laid out contiguously on disk, so readahead works and no seeking is
involved when reading the new file.
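Continuing the toy model from above (assumed numbers, not real measurements): if a read costs one seek whenever the next physical block is not adjacent to the previous one, the interleaved layout seeks on every block, while the sequential copy needs none.

```python
# Toy model: reading logical blocks in order costs a seek every time
# the next physical block is not adjacent to the previous one.

def seeks_to_read(mapping):
    return sum(1 for a, b in zip(mapping, mapping[1:]) if b != a + 1)

# File written by 4 interleaved downloaders: thread t owns logical
# blocks [t*1000, (t+1)*1000), but its i-th block landed at i*4 + t.
fragmented = [(L % 1000) * 4 + (L // 1000) for L in range(4000)]

# Copying the file sequentially lets the allocator lay it out
# contiguously, so the copy reads back with no extra seeks.
copy = list(range(len(fragmented)))

print(seeks_to_read(fragmented))  # 3999 - a seek for almost every block
print(seeks_to_read(copy))        # 0
```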

> Problem: The pget -n feature of lftp is very nice if you want to maximize
> your download bandwidth, however, if getting a large file, such
> as the one I am getting, once the file is successfully
> retrived, transferring it to another HDD or FTPing it to another
> computer is very slow (800KB-1600KB/s).

I find it hard to believe that this would actually make a huge
difference, except in the case where the source is throttling bandwidth
on a per-connection basis. Either your network is saturated by the
transfer, or some point in between is saturated. I could be wrong, of
course, and it would be interesting to hear the reasoning behind the
speedup you are seeing.

Cheers, Andreas

PS - thanks for the very detailed bug report - if only all bug reports
     were so full of useful information...

Andreas Dilger


This archive was generated by hypermail 2b29 : Wed Aug 07 2002 - 22:00:16 EST