[fs] inode_lru_isolate(): Move counter increment into spinlock section

From: Christoph Lameter
Date: Wed Dec 18 2013 - 14:37:14 EST


The counter increments in inode_lru_isolate() are currently done via
__count_vm_events() after the spinlocks have been dropped, i.e. with
preemption enabled. __count_vm_events() performs a raw per-cpu update
that is only safe with preemption disabled, so increments can race and
be lost.
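
For reference, the two vmstat helpers involved look roughly like this
(a paraphrase of include/linux/vmstat.h in the 3.x kernels; the exact
per-cpu primitive behind the raw variant has been renamed across
releases):

    /* Preemption-safe: this_cpu_add() takes care of preemption itself. */
    static inline void count_vm_events(enum vm_event_item item, long delta)
    {
            this_cpu_add(vm_event_states.event[item], delta);
    }

    /*
     * Raw variant: __this_cpu_add() is a plain read-modify-write on the
     * per-cpu counter and is only correct if the caller already runs
     * with preemption disabled.
     */
    static inline void __count_vm_events(enum vm_event_item item, long delta)
    {
            __this_cpu_add(vm_event_states.event[item], delta);
    }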

Move the counter increments to the point where the lru_lock has been
reacquired later, so that the counters are incremented safely.
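
An alternative (not what this patch does) would be to leave the
counting where it is and switch to the preemption-safe wrappers, e.g.:

    if (current_is_kswapd())
            count_vm_events(KSWAPD_INODESTEAL, reap);
    else
            count_vm_events(PGINODESTEAL, reap);

Moving the increments under lru_lock instead reuses the fact that
spin_lock() disables preemption (on non-RT kernels), so the raw
__count_vm_events() stays correct without the extra preemption handling
that this_cpu_add() needs on some architectures.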

Signed-off-by: Christoph Lameter <cl@xxxxxxxxx>

Index: linux/fs/inode.c
===================================================================
--- linux.orig/fs/inode.c 2013-12-18 13:14:43.211693438 -0600
+++ linux/fs/inode.c 2013-12-18 13:15:58.489266129 -0600
@@ -715,21 +715,21 @@ inode_lru_isolate(struct list_head *item
}

if (inode_has_buffers(inode) || inode->i_data.nrpages) {
+ unsigned long reap = 0;
__iget(inode);
spin_unlock(&inode->i_lock);
spin_unlock(lru_lock);
if (remove_inode_buffers(inode)) {
- unsigned long reap;
reap = invalidate_mapping_pages(&inode->i_data, 0, -1);
- if (current_is_kswapd())
- __count_vm_events(KSWAPD_INODESTEAL, reap);
- else
- __count_vm_events(PGINODESTEAL, reap);
if (current->reclaim_state)
current->reclaim_state->reclaimed_slab += reap;
}
iput(inode);
spin_lock(lru_lock);
+ if (current_is_kswapd())
+ __count_vm_events(KSWAPD_INODESTEAL, reap);
+ else
+ __count_vm_events(PGINODESTEAL, reap);
return LRU_RETRY;
}
