Re: safe file systems

Miguel de Icaza (miguel@luthien.nuclecu.unam.mx)
25 Sep 1997 14:24:11 -0500


> > Do you think it would be possible to build a safe, slow file system?
> > By safe, I mean that I could hit reset in the middle of 50 parallel
> > un-tars and reboot the system and the file system comes up clean (no fsck,
> > but data loss)?
>
> What about:
>
> kurt@TittyTwister:/home/kurt > man mount
> [...]
> sync All I/O to the file system should be done
> synchronously.
> [...]

This does not eliminate the need to run fsck after a system
crash; it only guarantees that the kernel updates the metadata
synchronously. That merely narrows the window during which the
file system is inconsistent.

Consider what happens when you write to a file. The file system
code needs to allocate a free block and register that allocation
in several places: the allocation bitmap must record the change,
the superblock's summary information must be updated, and so must
the inode's block information.

So, even if you use synchronous writes, the file system will be
inconsistent at reboot time if the crash happens in the middle of
those three steps. Fsck's job is to find and fix those
inconsistencies when the system is rebooted.
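A toy model of those three updates (hypothetical Python, not real file
system code) shows why a crash between any two of them leaves the
structures disagreeing with each other:

```python
# Toy model of the three metadata structures touched by a block
# allocation. (Hypothetical illustration, not real file system code.)

bitmap = [0] * 8          # allocation bitmap: one entry per block
superblock = {"free": 8}  # superblock summary: count of free blocks
inode = {"blocks": []}    # inode: list of blocks owned by the file

def allocate_block(crash_after=None):
    """Allocate block 0, 'crashing' after the given update step."""
    bitmap[0] = 1                  # step 1: mark the block used
    if crash_after == 1:
        return
    superblock["free"] -= 1        # step 2: update superblock summary
    if crash_after == 2:
        return
    inode["blocks"].append(0)      # step 3: record the block in the inode

allocate_block(crash_after=1)      # crash between steps 1 and 2

# The structures now disagree: the bitmap says one block is used,
# but the superblock still counts it free and no inode owns it.
used = sum(bitmap)
consistent = (superblock["free"] == len(bitmap) - used
              and used == len(inode["blocks"]))
print(consistent)  # False -- exactly what fsck must reconcile
```

Even with `sync` mounts, each of those steps is a separate disk write,
so a reset between them leaves this kind of disagreement on disk.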

By adding write-ahead metadata logging, you keep a log of which
operations must be redone (or undone) after a system failure.
Ideally, at mount time the file system code repairs the file system
simply by replaying the log.
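A minimal sketch of that idea (again hypothetical Python, with the log
held as a list of redo records and replay restarting from the
pre-crash checkpoint):

```python
# Toy write-ahead log: metadata changes are recorded in the log
# *before* they are applied, so replaying the log at mount time
# brings all three structures back into agreement.
# (Hypothetical sketch, not real file system code.)

log = []                  # the on-disk log, written synchronously first
bitmap = [0] * 8
superblock = {"free": 8}
inode = {"blocks": []}

def apply(record):
    """Apply one logged metadata update."""
    kind, arg = record
    if kind == "bitmap":
        bitmap[arg] = 1
    elif kind == "superblock":
        superblock["free"] -= 1
    elif kind == "inode":
        inode["blocks"].append(arg)

def allocate_block(crash_after=0):
    """Log all three updates first, then apply them, crashing partway."""
    records = [("bitmap", 0), ("superblock", None), ("inode", 0)]
    log.extend(records)            # write-ahead: the log hits disk first
    for i, rec in enumerate(records):
        if i == crash_after:
            return                 # simulated crash mid-update
        apply(rec)

def replay():
    """At mount time, redo the whole log from the pre-crash state."""
    global bitmap, superblock, inode
    bitmap = [0] * 8               # restart from the logged checkpoint
    superblock = {"free": 8}
    inode = {"blocks": []}
    for rec in log:
        apply(rec)

allocate_block(crash_after=1)      # crash after applying only step 1
replay()                           # recovery at mount: replay the log
print(superblock["free"], inode["blocks"])  # 7 [0] -- consistent again
```

Because the complete record of intended changes reaches the log before
any structure is touched, recovery never has to search the whole disk
the way fsck does; it only reads the log.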

Miguel.