Throw the book away; its author is incompetent to make that statement. You
must therefore assume anything else the author wrote is probably incorrect
and unsuitable for learning from.
> ensure that the file system is consistent, the lack of ordering on
> operations can lead to confused applications or, even worse, crashing
> applications because of the inconsistencies in the order of
> modifications to the file system."
And he fails to understand that writing metadata first provably has the same
problem.
> _Practical File System Design_, Dominic Giampaolo, p. 36
Thanks. Another book never to buy.
> small, and why it is nonzero, and what, if anything, an application
> programmer seeking robustness could do to work around it.
That's a different, sensible question. All Unix OSes have a writeback
cache. That's almost always the right thing (tm). If you need to update a
specific file in a safe manner and be sure the update completes perfectly,
you can open the file with the O_SYNC flag. This tells the Unix OS that all
writes to that file must be synchronous, and they are completed that way.
This is still of course no guarantee. A power failure can occur during the
writing of a set of synchronously updated disk blocks and then you may still
lose. In fact there is no way to win the game, only to narrow the odds, even
if your file system is uncached.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/