Re: safe file systems

Darin Johnson (darin@connectnet.com)
Wed, 24 Sep 1997 16:04:20 -0700 (PDT)


> From: Rob Hagopian <hagopiar@vu.union.edu>

> The filesystem you're describing is called a "journaled" filesystem...
> The system "journals" all writes, so that if an operation doesn't complete
> (ie. in a crash), it can be "rolled back" (reversed) safely.

Actually, the purpose of journaled filesystems is to speed up file
writes, not necessarily to provide crash protection. Crash protection
is, for the most part, a side effect.

That is, file caches have made file reads fast, so the performance
difference between a smart file system and a stupid one is minimized.
That left file writes as the big bottleneck, and thus was born the
idea of a journaled file system (write the data and attribute changes
to the same place, with no multiple seeks and so on).
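To make that concrete, here is a rough sketch in C (all names are
hypothetical, not any real filesystem's code) of appending a data
change and its attribute update as one record to an append-only
journal, so a single sequential write stands in for several scattered
seeks:

/* Sketch of a journal append: the data change and the attribute
 * change go into one record written sequentially to a log file
 * descriptor. Hypothetical, for illustration only. */
#include <stdint.h>
#include <string.h>
#include <unistd.h>

struct journal_record {
    uint32_t inode;        /* which file this change belongs to */
    uint32_t new_size;     /* updated attribute: the file's new size */
    uint32_t data_len;     /* how many data bytes follow */
    char     data[4096];   /* the data change itself */
};

/* Append one record with a single sequential write, instead of
 * seeking separately to the data blocks and to the inode table. */
int journal_append(int log_fd, uint32_t inode, uint32_t new_size,
                   const char *buf, uint32_t len)
{
    struct journal_record rec;

    if (len > sizeof(rec.data))
        return -1;
    rec.inode = inode;
    rec.new_size = new_size;
    rec.data_len = len;
    memcpy(rec.data, buf, len);
    return write(log_fd, &rec, sizeof(rec)) == (ssize_t)sizeof(rec) ? 0 : -1;
}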

I feel that a file system purposely designed to be safe would be
safer than a journaled file system. You can still lose recent changes
in a journaled file system if the machine crashes: a journaled file
system lets writes accumulate in the cache rather than writing them
out right away (this is what makes it fast).
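As an illustration of that window (hypothetical code, not any
particular system's behavior): a plain write() typically just puts
the data into the cache and returns, and until something like fsync()
pushes it out, a crash loses it.

/* Sketch of the write-behind window: write() returns once the data
 * is cached; only fsync() forces it to disk. A crash in between
 * loses the "written" data. Illustrative only. */
#include <fcntl.h>
#include <unistd.h>

int save_record(const char *path, const char *buf, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0)
        return -1;

    if (write(fd, buf, len) != (ssize_t)len) {  /* cached, not yet on disk */
        close(fd);
        return -1;
    }

    /* Without this, the data may sit in memory for many seconds;
     * a crash in that window loses it. */
    if (fsync(fd) != 0) {
        close(fd);
        return -1;
    }
    return close(fd);
}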

A "safe" filesystem might the properties of:
- being slow (for writes, reads will be fast because of caches)
- multiple backups of vital information
- data is updated before file attributes, indexes, and whatnot
- data writes may be delayed; but attributes and index writes synchronous
- additional data written to support recovery (ie, even if you lose
the root directory, you can recover the subdirectories with their
correct names); and a lost file will know its name
- maybe file undelete capability; or hidden versioning
- bad block mapping
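Here is a rough sketch (again hypothetical names, just one way to do
it) of that ordering: push the data to disk first, then write the
attribute/index change synchronously, so the metadata never points at
data that isn't there yet. The meta_fd here is assumed to have been
opened with O_SYNC (or you could fsync() it separately).

/* Sketch of ordered updates: data first, then synchronous metadata.
 * Hypothetical, illustrative only. */
#include <sys/types.h>
#include <unistd.h>

int safe_update(int data_fd, const char *data, size_t len,
                int meta_fd, const char *meta, size_t meta_len)
{
    /* 1. Write the file data; it may sit in the cache for a while. */
    if (write(data_fd, data, len) != (ssize_t)len)
        return -1;

    /* 2. Force the data to disk before any metadata mentions it. */
    if (fsync(data_fd) != 0)
        return -1;

    /* 3. Now write the attribute/index entry; with meta_fd opened
     *    O_SYNC, this write doesn't return until it is on disk. */
    if (write(meta_fd, meta, meta_len) != (ssize_t)meta_len)
        return -1;

    return 0;
}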

A "safe" filesystem *can* use journalling techniques, but the
journalling techniques by themselves aren't necessarily enough.