Re: fsync on large files

Linus Torvalds (torvalds@transmeta.com)
Wed, 17 Feb 1999 22:53:13 -0800 (PST)


On Wed, 17 Feb 1999, Oliver Xymoron wrote:
>
> Yes, definitely. I've certainly found it useful in reconstructing damage
> by hand. Doesn't mean the kernel has to use it, it just has to maintain
> it. I can see how having the kernel not rely on them (already the case) is
> a win, but I can't see why saving the couple dozen bytes per directory and
> the overhead at mkdir time is a big deal. Redundancy, if done right, can
> increase robustness.

Redundancy, if done wrong, can also screw you quite badly. So it cuts both
ways.

For example, one fairly powerful directory structure is a special case of
hardlinked directories: not something you want to allow most people to do,
but I have actually been asked a few times for a way to have the same
directory show up in multiple places. That implies that ".." actually
depends not on the directory itself, but on how you got there.
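
(As a purely user-space illustration, with made-up paths: this is exactly
the operation Linux refuses today - link(2) on a directory just fails with
EPERM, which is why the only ways to make one directory show up twice right
now are symlinks or mounts.)

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
        mkdir("/tmp/shared", 0755);

        /* trying to give the directory a second name fails outright */
        if (link("/tmp/shared", "/tmp/shared-alias") < 0)
                printf("link: %s\n", strerror(errno));  /* EPERM */
        return 0;
}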

The standard answer to this in unix is symlinks, but I bet I'm not the
only person who has ever cursed about "ls subdir/.." being very different
from "ls ." when subdir is a symlink. And there are actually filesystems
out there that can do it; it's just that they cannot have ".." entries in
their directories.
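
To make the cursing concrete, here is a small user-space sketch (all paths
made up): resolving "subdir/.." through a symlink lands you in the target's
parent directory, not back where you started.

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
        char resolved[PATH_MAX];

        mkdir("/tmp/elsewhere", 0755);
        mkdir("/tmp/elsewhere/target", 0755);
        mkdir("/tmp/here", 0755);
        symlink("/tmp/elsewhere/target", "/tmp/here/subdir");

        /* prints "/tmp/elsewhere", not "/tmp/here" */
        if (realpath("/tmp/here/subdir/..", resolved))
                printf("%s\n", resolved);
        return 0;
}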

Loopback mounts do this right, but they tend to have higher overhead.
Nobody does a loopback mount of /usr/X11 -> /usr/X11R6; people use
symlinks and live with the confusion of ".." because they are used to it.
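
(Only as a hedged sketch: on kernels that later grew bind mounts, mount(2)
with MS_BIND is the loopback-style way to get that effect, at the cost of
an extra mount. Needs root, and assumes both directories already exist.)

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
        /*
         * Make /usr/X11 show the same tree as /usr/X11R6 without a
         * symlink, so "/usr/X11/bin/.." comes back to /usr/X11 the
         * way people expect.  MS_BIND is a later-kernel feature; this
         * only approximates the loopback mounts mentioned above.
         */
        if (mount("/usr/X11R6", "/usr/X11", NULL, MS_BIND, NULL) < 0)
                perror("mount");
        return 0;
}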

Now, as it is, the linux dentry cache would be confused by having
potentially multiple path aliases for the same directory, but that's
something that I consider to be a misfeature - one that wasn't worth
fixing considering that we didn't have any serious filesystems that could
take advantage of it anyway (I think the only two filesystems we support
right now that can do it at all are AFFS and iso9660, and for the latter I
don't actually know of anybody who makes that kind of disk).

But being able to handle hardlinked directories would actually be really
nice: if you avoid loops (which is easy to do), it goes from being a
dangerous feature to a really nice one.
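
To show why avoiding loops is easy, here is a user-space sketch of the
obvious check (the function name and the stat()-walk are just my
illustration, not kernel code): refuse to link a directory into any place
that the directory itself is an ancestor of.

#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

/*
 * Would linking directory "src" into directory "parent" create a loop?
 * It would exactly when src is parent itself or an ancestor of it, so
 * walk up from parent with ".." and compare (st_dev, st_ino).
 */
static int would_create_loop(const char *src, const char *parent)
{
        struct stat s, p, up;
        char path[PATH_MAX];

        if (stat(src, &s) < 0 || stat(parent, &p) < 0)
                return -1;

        snprintf(path, sizeof(path), "%s", parent);
        for (;;) {
                if (p.st_dev == s.st_dev && p.st_ino == s.st_ino)
                        return 1;       /* src is an ancestor: loop */
                strncat(path, "/..", sizeof(path) - strlen(path) - 1);
                if (stat(path, &up) < 0)
                        return -1;
                if (up.st_dev == p.st_dev && up.st_ino == p.st_ino)
                        return 0;       /* reached the root, no loop */
                p = up;
        }
}

int main(void)
{
        printf("%d\n", would_create_loop("/usr", "/usr/local"));  /* 1 */
        printf("%d\n", would_create_loop("/tmp", "/usr/local"));  /* 0 */
        return 0;
}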

Basically, I used to think that hardlinked directories were evil and
horrible. What changed my opinion was that (a) I found a few cases where I
really wanted to use them and (b) I noticed that most of the reason for
hating them was that they were hard to do right, not that anything was
fundamentally wrong with the concept.

We can't do them right as is, but getting rid of ".." in the on-disk
directory structure would be one step, and I think I can handle the dentry
aliasing issue too.

Imagine, for example, a directory tree with a shared component. Wouldn't
it be nice to just link it into the tree at multiple points? Imagine a
chroot() environment, for a moment - symlinks don't work to the outside,
but hardlinking does.
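
A hedged sketch of that difference (made-up paths, needs root, and done
with a regular file since directories can't be hardlinked today): the
symlink gets re-resolved as a path inside the new root and dangles, while
the hard link names the inode directly and keeps working.

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
        int fd;

        mkdir("/srv/jail", 0755);

        /* a file outside the jail, plus two names for it inside */
        fd = open("/srv/outside.txt", O_CREAT | O_WRONLY, 0644);
        if (fd >= 0) {
                write(fd, "hi\n", 3);
                close(fd);
        }
        symlink("/srv/outside.txt", "/srv/jail/via-symlink");
        link("/srv/outside.txt", "/srv/jail/via-hardlink");

        if (chroot("/srv/jail") < 0 || chdir("/") < 0)
                return 1;

        /* the symlink now points at a path that doesn't exist in here */
        printf("symlink:  %s\n", access("/via-symlink", R_OK) ? "broken" : "ok");
        printf("hardlink: %s\n", access("/via-hardlink", R_OK) ? "broken" : "ok");
        return 0;
}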

Maybe it's not worth it, but ".." _is_ an example of redundancy that just
screws you.

Linus

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/