Re: Deleting large files

From: Morten Welinder
Date: Wed May 07 2008 - 19:14:49 EST


> Suppose you had an N GB file that just filled up the disk. You now
> delete it, but get control back before it is really deleted. You
> now start to write a new file that will eventually just fill up
> the disk. [...]

That argument ought to stop right there. If you believe that deleting a
file will necessarily and immediately give you back the space, then you
are already wrong in the current state of affairs.

NFS does not do that -- in fact, I don't believe any file system does,
unless you can guarantee at the very least that no other process (or the
kernel) has the file open. AFS did not do it last I looked, a decade ago;
versioning file systems do not; journaling file systems might not; file
systems that support undelete do not. In short: assuming such a thing is
already a bug in need of a fix today.
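
To make the point concrete, here is a minimal sketch (the path
/tmp/bigfile is made up, and a local file system is assumed): unlink()
removes the name right away, but the blocks only come back when the last
open descriptor on the file is closed.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/statvfs.h>
    #include <unistd.h>

    /* Sketch only: keep a descriptor open across unlink() and watch the
     * free-block count.  The space is not returned until close() drops
     * the last reference to the inode. */
    int main(void)
    {
        struct statvfs vs;
        int fd = open("/tmp/bigfile", O_RDONLY);   /* assumed to exist */

        unlink("/tmp/bigfile");                    /* the name is gone ... */

        statvfs("/tmp", &vs);
        printf("free blocks while still open: %lu\n",
               (unsigned long)vs.f_bfree);         /* ... the space is not */

        close(fd);                                 /* last reference dropped */

        statvfs("/tmp", &vs);
        printf("free blocks after close:      %lu\n",
               (unsigned long)vs.f_bfree);
        return 0;
    }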

Right now, unlink is a commonly used syscall with unbounded response
time. If a GUI program deletes a file, the GUI generally locks up until
the kernel feels like returning -- which is certainly not how you get a
smooth user experience. Forking a process to do the deletion (a) is
pathetic, (b) is not currently done, and (c) does not work: you do not
get the result right away, i.e., you lose error handling.
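
For what it's worth, here is a rough sketch of the fork-to-delete idea
(the path and the helper name are made up): the parent gets control back
immediately, but the unlink() result only arrives later, via the child's
exit status, long after the point where the user asked for the deletion
and expected an error to be reported.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Sketch only: delete a file in a forked child. */
    pid_t delete_in_child(const char *path)
    {
        pid_t pid = fork();
        if (pid == 0) {
            /* child: the only thing we can pass back is an exit code */
            _exit(unlink(path) == 0 ? 0 : errno);
        }
        return pid;   /* parent: errno from unlink() is not available here */
    }

    int main(void)
    {
        int status;
        pid_t pid = delete_in_child("/tmp/bigfile");   /* hypothetical path */

        /* ... the GUI keeps running ... */

        waitpid(pid, &status, 0);   /* the result only shows up later */
        if (WIFEXITED(status) && WEXITSTATUS(status) != 0)
            fprintf(stderr, "delete failed: %s\n",
                    strerror(WEXITSTATUS(status)));
        return 0;
    }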

Morten