Re: [RFC v4 Patch 0/4] fs/inode.c: optimization for inode lock usage

From: Guo Chao
Date: Tue Sep 25 2012 - 05:00:03 EST


On Mon, Sep 24, 2012 at 06:26:54PM +1000, Dave Chinner wrote:
> @@ -783,14 +783,19 @@ static void __wait_on_freeing_inode(struct inode *inode);
> static struct inode *find_inode(struct super_block *sb,
> struct hlist_head *head,
> int (*test)(struct inode *, void *),
> - void *data)
> + void *data, bool locked)
> {
> struct hlist_node *node;
> struct inode *inode = NULL;
>
> repeat:
> - hlist_for_each_entry(inode, node, head, i_hash) {
> + rcu_read_lock();
> + hlist_for_each_entry_rcu(inode, node, head, i_hash) {
> spin_lock(&inode->i_lock);
> + if (inode_unhashed(inode)) {
> + spin_unlock(&inode->i_lock);
> + continue;
> + }

Is this check too early? If the unhashed inode happens to be the
target inode, we waste time continuing the traversal, and we do not
wait on it.
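
To make the concern concrete, the ordering I have in mind is roughly
the following. This is just an untested sketch: the unlocking around
the sleep would have to follow whatever this patch does for the
I_FREEING case, and I am assuming __wait_on_freeing_inode() keeps the
signature shown in the hunk header above.

repeat:
	rcu_read_lock();
	hlist_for_each_entry_rcu(inode, node, head, i_hash) {
		spin_lock(&inode->i_lock);
		/* Match first, so an unhashed target is not skipped. */
		if (inode->i_sb != sb || !test(inode, data)) {
			spin_unlock(&inode->i_lock);
			continue;
		}
		/*
		 * Only now look at the hash/freeing state: a matching
		 * inode that is being torn down should be waited on
		 * and the lookup retried, not silently passed over.
		 */
		if (inode_unhashed(inode) ||
		    (inode->i_state & (I_FREEING | I_WILL_FREE))) {
			rcu_read_unlock();
			__wait_on_freeing_inode(inode);
			goto repeat;
		}
		__iget(inode);
		spin_unlock(&inode->i_lock);
		rcu_read_unlock();
		return inode;
	}
	rcu_read_unlock();
	return NULL;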

> @@ -1078,8 +1098,7 @@ struct inode *iget_locked(struct super_block *sb, unsigned long ino)
> struct inode *old;
>
> spin_lock(&inode_hash_lock);
> - /* We released the lock, so.. */
> - old = find_inode_fast(sb, head, ino);
> + old = find_inode_fast(sb, head, ino, true);
> if (!old) {
> inode->i_ino = ino;
> spin_lock(&inode->i_lock);

Emmmm ... couldn't we use the memory barrier API to publish I_NEW,
instead of taking an irrelevant spin lock on the newly allocated inode?
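
Roughly what I have in mind, as a sketch only: it assumes i_lock is
not needed for anything else at this point, and relies on the fact
that hlist_add_head_rcu() already contains the write barrier of
rcu_assign_pointer():

	spin_lock(&inode_hash_lock);
	old = find_inode_fast(sb, head, ino, true);
	if (!old) {
		inode->i_ino = ino;
		inode->i_state = I_NEW;
		/*
		 * No other CPU can see this inode yet, and the
		 * rcu_assign_pointer() inside hlist_add_head_rcu()
		 * orders the i_ino/i_state stores above before the
		 * inode becomes visible to RCU readers, so there is
		 * no need to take i_lock just to publish I_NEW.
		 */
		hlist_add_head_rcu(&inode->i_hash, head);
		spin_unlock(&inode_hash_lock);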

I went through many mails from the last round of VFS scaling work.
Many patches that seem quite natural, say RCU inode lookup, per-bucket
inode hash locks, or a per-superblock inode list lock, did not get
merged. I wonder what stopped them back then, and what has changed so
that (part of) them can be considered again.

Regards,
Guo Chao
