Re: FS: hardlinks on directories

From: Helge Hafting (helgehaf@aitel.hist.no)
Date: Tue Aug 05 2003 - 07:51:46 EST


Stephan von Krawczynski wrote:
> On Mon, 4 Aug 2003 16:09:28 -0500
> Jesse Pollard <jesse@cats-chateau.net> wrote:
>
>
>>>tar --dereference loops on symlinks _today_, to name an example.
>>>All you have to do is to provide a way to find out if a directory is a
>>>hardlink, nothing more. And that should be easy.
>>
>>Yup - because a symlink is NOT a hard link. it is a file.
>>
>>If you use hard links then there is no way to determine which is the "proper"
>>link.
>
>
> Where does it say that an fs is not allowed to give you this information in
> some way?

Because there is no such thing as the "proper" link.

Every file name and directory name is a "hard link" to
some inode. The "ln" command lets you make more
links to the same inode; there is _no_ difference
at all between the "first" and the "second" (or third) hard link.
There is no information recorded anywhere about which one
was "first". You can remove the "first" and still
use the file through the second link.
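
A quick demonstration (the inode number, user and dates
below are made up, any file will do):

$ echo hello > original
$ ln original second
$ ls -li original second
 12345 -rw-r--r--  2 user user 6 Aug  5 12:00 original
 12345 -rw-r--r--  2 user user 6 Aug  5 12:00 second
$ rm original
$ cat second
hello

Both names point at inode 12345, and nothing marks either
of them as "the" file.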

>
>>>>It was also done in one of the "popular" code management systems under
>>>>unix. (it allowed a "mount" of the system root to be under the CVS
>>>>repository to detect unauthorized modifications...). Unfortunately,
>>>>the system could not be backed up anymore. 1. A dump of the CVS
>>>>filesystem turned into a dump of the entire system... 2. You could not
>>>>restore the backups... The dumps failed (bru at the time) because the
>>>>pathnames got too long, the restore failed since it ran out of disk space
>>>>due to the multiple copies of the tree being created.
>>>
>>>And they never heard of "--exclude" in tar, did they?
>>
>>Doesn't work. Remember - you have to --exclude EVERY possible loop. And
>>unless you know ahead of time, you can't exclude it. The only way we found
>>to reliably do the backup was to dismount the CVS.
>
>
> And if you completely run out of ideas in your wild-mounts-throughout-the-tree
> problem you should simply use "tar -l".
>
> And in just the same way fs could provide a mode bit saying "hi, I am a
> hardlink", and tar can then easily provide option number 1345 saying "stay away
> from hardlinks".
>
Then you stay away from each and every file on the system, because
every file is a hard link. This is useless.

Making up a new bit in the directory entry saying "this entry
was created with 'ln' as opposed to 'open'" is indeed possible,
but it wouldn't solve your problems. Consider this:

A file is created in the normal way by a user. (link count=1)
Someone else finds it useful and creates a link to it. (link count=2)
The first user doesn't need the file anymore and deletes it. (link count=1)
The file still exists, because disk blocks are only released when
the link count goes to zero.
The second user can still use the file through his link, and thinks
it is safe. But the file is no longer backed up, because your
tar avoids the directory entry created with 'ln', and there is no
longer any directory entry created the normal way. Similar
problems arise for all other utilities you might want to modify
to honor this sort of link flag.
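
The same scenario as a quick transcript (the paths are made
up; both names must be on the same filesystem for ln to work):

$ echo data > /home/alice/file          # link count 1
$ ln /home/alice/file /home/bob/file    # link count 2
$ rm /home/alice/file                   # back to 1
$ cat /home/bob/file                    # bob's "copy" still works
data

With the proposed flag, /home/bob/file is now the only
remaining link, still marked "created with ln", so the
backup silently skips it.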

The problem doesn't arise with symlinks, because
the file really disappears when deleted, leaving invalid
symlinks. (Everybody knows that, and nobody thinks they
can keep a file around by making a symlink to it, the way
you can with a hard link.)

The number of hard links to some inode is a reference count,
the number of symlinks is not.
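
For contrast, watch a symlink go stale (transcript sketch,
file names made up):

$ echo data > file
$ ln -s file sym
$ rm file            # link count hits zero, blocks are freed
$ cat sym
cat: sym: No such file or directory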

Another post of yours:
> tar --dereference loops on symlinks _today_, to name an example.
> All you have to do is to provide a way to find out if a directory is a
> hardlink, nothing more. And that should be easy.

There is currently no way to find out, as explained above. And a
"flag" solution is useless, as you will then get files (and directories)
that exist only as links once the "original" link is deleted.

Another post:

>> Things would break badly if the hierarchy became an arbitrary graph.
> Care to name one? What exactly is the rule you see broken? Sure you
> can build loops,

Loops are funny in more than one way. Tools can be made that
detect them via inode numbers and handle them properly. This can
be costly in time and memory, but backing up or otherwise
traversing a generic graph is possible.
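
A minimal sketch of such inode tracking in plain sh (the
visit function is made up for illustration; it ignores device
numbers, so it assumes a single filesystem, and a real tool
would do this in C with proper data structures):

seen=""
visit() {
	# "ls -di" prints the inode number of the directory itself
	ino=`ls -di "$1" | awk '{print $1}'`
	case " $seen " in
	*" $ino "*) return ;;	# inode seen before: loop detected, skip
	esac
	seen="$seen $ino"
	echo "$1"		# process (e.g. back up) this directory
	# plain sh has no local variables; this still works because
	# neither $ino nor the for list is reused after the recursion
	for entry in "$1"/*; do
		[ -d "$entry" ] && visit "$entry"
	done
}
visit /some/tree

The linear search through $seen is exactly the kind of
time/memory cost mentioned above.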

Even more fun is when you have a directory loop like this:

mkdir A
cd A
mkdir B
cd B
make hard link C back to A

cd ../..
rmdir A

You have now removed A from your home directory
(assuming an rmdir that permits removing it, since the
directory stays reachable through C), but the directory
itself did not disappear, because it still had another
hard link: C in B.

Expected behaviour, but there is no longer a path
from _anywhere_ to the A, B and C directories. That
means they occupy disk space but cannot be deleted,
accessed or used in any way, short of garbage collecting
the entire directory structure. That can be a big job.
The space usage can be big too, for the inaccessible
directories might hold some large files.

This isn't easily avoidable by "not doing stupid things"
either; interactions between several users who don't
know what the others are doing can easily form loops
when directories are linked and moved around.

Helge Hafting
