Re: [RFC PATCH v1 00/30] fs: inode->i_version rework and optimization

From: Dave Chinner
Date: Tue Mar 21 2017 - 17:46:15 EST


On Tue, Mar 21, 2017 at 01:23:24PM -0400, Jeff Layton wrote:
> On Tue, 2017-03-21 at 12:30 -0400, J. Bruce Fields wrote:
> > - It's durable; the above comparison still works if there were reboots
> > between the two i_version checks.
> > - I don't know how realistic this is--we may need to figure out
> > if there's a weaker guarantee that's still useful. Do
> > filesystems actually make ctime/mtime/i_version changes
> > atomically with the changes that caused them? What if a
> > change attribute is exposed to an NFS client but doesn't make
> > it to disk, and then that value is reused after reboot?
> >
>
> Yeah, we could have atomicity there. If we bump i_version, we'll mark
> the inode dirty, and I think that will end up with the new i_version at
> least being journalled before __mark_inode_dirty returns.

The change may be journalled, but it isn't guaranteed stable until
fsync is run on the inode.

NFS server operations commit the metadata changed by a modification
through ->commit_metadata or sync_inode_metadata() before the
response is sent back to the client, hence guaranteeing that
i_version changes through the NFS server are stable and durable.

This is not the case for normal operations done through the POSIX
API - the journalling is asynchronous and the only durability
guarantees are provided by fsync()....

> That said, I suppose it is possible for us to bump the counter, hand
> that new counter value out to a NFS client and then the box crashes
> before it makes it to the journal.

Yup, this has always been a problem when POSIX applications running
on the NFS server modify the same files that NFS clients are
accessing and require synchronisation.

> Not sure how big a problem that really is.

This coherency problem has always existed on the server side...

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx