Re: Filesystem optimization..

Michael O'Reilly (michael@metal.iinet.net.au)
06 Jan 1998 08:57:03 +0800


tytso@mit.edu writes:
> > While I was there, I toyed with trying to add a similar scheme (although
> > not quite as ambitious as the one described in their paper) to ext2fs.
> > That is,
> > instead of eliminating the inode table altogether, put a copy of the
> > inode in the directory, but leave a forwarding pointer in the inode
> > table for programs like e2fsck and dump which need to scan the inode
> > table.
>
> That did cross my mind. Without the redundancy of the inode used
> bitmap, and the inodes in known positions, it's a little harder to
> detect inconsistencies...
>
> It's actually darned near impossible. One of the reasons why the ext2
> filesystem is so robust against errors, and why e2fsck can do a good job
> recovering files, is because we always know where to find all of the
> inodes. If you store the inode information in the directories, and the
> root directory gets smashed, you can say good bye to basically all of
> your data.

I was thinking of using two bits per block in the free blocks
bitmap: one bit for 'block in use', and one bit for 'this is a
directory/inode block'. That would give you all the redundancy that
ext2fs currently has, wouldn't it?
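A minimal sketch of what that two-bits-per-block bitmap might look like, in C. All names, sizes, and the bit layout here are my own illustration, not ext2's actual on-disk format:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical two-bits-per-block bitmap: for each block, bit 0 of the
 * pair means "block in use", bit 1 means "block holds directory/inode
 * data".  An fsck-style scanner could walk this bitmap to find all the
 * metadata blocks without relying on a fixed inode table. */

#define BLOCKS 64
static uint8_t bitmap[(BLOCKS * 2 + 7) / 8];

static void set_bits(unsigned blk, int in_use, int is_meta)
{
    unsigned bit = blk * 2;                  /* "in use" bit */
    if (in_use) bitmap[bit / 8] |=  (1u << (bit % 8));
    else        bitmap[bit / 8] &= ~(1u << (bit % 8));
    bit++;                                   /* "directory/inode" bit */
    if (is_meta) bitmap[bit / 8] |=  (1u << (bit % 8));
    else         bitmap[bit / 8] &= ~(1u << (bit % 8));
}

static int block_in_use(unsigned blk)
{
    unsigned bit = blk * 2;
    return (bitmap[bit / 8] >> (bit % 8)) & 1;
}

static int block_is_meta(unsigned blk)
{
    unsigned bit = blk * 2 + 1;
    return (bitmap[bit / 8] >> (bit % 8)) & 1;
}
```

The point is that the second bit plays the role the fixed inode-table location plays today: a recovery tool that trusts nothing but the bitmap can still enumerate every candidate directory/inode block.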

> A human being might be able to deduce that a certain block was a
> directory, and then work forwards and backwards to recover some or
> all of the filesystem, but it would require an expert, and it's not the
> sort of work that could be automated.

Nod. Checksums were the other thought that crossed my mind, but they'd
be too expensive to update all the time.

> What was that again about basing the filesystem on ext2 so it would have
> all of the robustness of the original filesystem? It won't work that
> way at all, I'm afraid.

Tell me about it. :)

> - Ted

Michael.