Re: Unionmount and overlayfs testsuite

From: David Howells
Date: Tue Jun 03 2014 - 06:34:36 EST


Miklos Szeredi <miklos@xxxxxxxxxx> wrote:

> Fix now pushed to overlayfs.v22/overlayfs.current.

I ran my testscript, which leaves behind a cleanly set-up and mounted
overlayfs. I then ran:

for ((i=100; i<=129; i++)); do mv /mnt/a/foo$i /mnt/a/bar$i; done
for ((i=100; i<=129; i++)); do mv /mnt/a/dir$i /mnt/a/dir2$i; done

leading to:

=============================================
[ INFO: possible recursive locking detected ]
3.15.0-rc6-fsdevel+ #382 Tainted: G W
---------------------------------------------
mv/27935 is trying to acquire lock:
(&sb->s_type->i_mutex_key#9){+.+.+.}, at: [<ffffffff8111e555>] vfs_rmdir+0x59/0xe8

but task is already holding lock:
(&sb->s_type->i_mutex_key#9){+.+.+.}, at: [<ffffffff811e12a9>] ovl_clear_empty+0x175/0x1eb

other info that might help us debug this:
Possible unsafe locking scenario:

CPU0
----
lock(&sb->s_type->i_mutex_key#9);
lock(&sb->s_type->i_mutex_key#9);

*** DEADLOCK ***

May be due to missing lock nesting notation

5 locks held by mv/27935:
#0: (sb_writers#15){.+.+.+}, at: [<ffffffff8112c06c>] mnt_want_write+0x1c/0x40
#1: (&sb->s_type->i_mutex_key#17/1){+.+.+.}, at: [<ffffffff8111eb96>] do_rmdir+0x9f/0x152
#2: (&sb->s_type->i_mutex_key#17){+.+.+.}, at: [<ffffffff8111e555>] vfs_rmdir+0x59/0xe8
#3: (sb_writers#8){.+.+.+}, at: [<ffffffff8112c06c>] mnt_want_write+0x1c/0x40
#4: (&sb->s_type->i_mutex_key#9){+.+.+.}, at: [<ffffffff811e12a9>] ovl_clear_empty+0x175/0x1eb

stack backtrace:
CPU: 1 PID: 27935 Comm: mv Tainted: G W 3.15.0-rc6-fsdevel+ #382
Hardware name: /DG965RY, BIOS MQ96510J.86A.0816.2006.0716.2308 07/16/2006
0000000000000000 ffff880038ac9af0 ffffffff8148b889 ffffffff81fda190
ffff880038ac9bb0 ffffffff81074b0e 0000000000000002 ffff88003823c890
0000000000000000 ffff880000000000 ffff880000000004 ffff880000000000
Call Trace:
[<ffffffff8148b889>] dump_stack+0x4d/0x66
[<ffffffff81074b0e>] __lock_acquire+0x75a/0x1861
[<ffffffff810762b6>] lock_acquire+0x9c/0x112
[<ffffffff8111e555>] ? vfs_rmdir+0x59/0xe8
[<ffffffff8148e6df>] mutex_lock_nested+0x60/0x2ff
[<ffffffff8111e555>] ? vfs_rmdir+0x59/0xe8
[<ffffffff8111e555>] vfs_rmdir+0x59/0xe8
[<ffffffff811e0fbe>] ovl_cleanup+0x1d/0x40
[<ffffffff811e12b4>] ovl_clear_empty+0x180/0x1eb
[<ffffffff811e1360>] ovl_check_empty_and_clear+0x41/0x5c
[<ffffffff81058fcc>] ? creds_are_invalid+0x18/0x45
[<ffffffff811e1aca>] ovl_do_remove+0x17c/0x35e
[<ffffffff811e1cbd>] ovl_rmdir+0x11/0x13
[<ffffffff8111e584>] vfs_rmdir+0x88/0xe8
[<ffffffff8111ebdd>] do_rmdir+0xe6/0x152
[<ffffffff810af5b4>] ? __audit_syscall_entry+0xa1/0xc3
[<ffffffff8100d737>] ? syscall_trace_enter+0x197/0x1eb
[<ffffffff8111f414>] SyS_unlinkat+0x16/0x29
[<ffffffff814925f4>] tracesys+0xdd/0xe2
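
For reference, the "missing lock nesting notation" hint refers to lockdep's
subclass annotations: all i_mutexes on one filesystem share a single lock
class, so taking a child directory's i_mutex while already holding the
parent's looks like recursion unless the outer acquisition is tagged with a
distinct subclass. That is what do_rmdir() does for lock #1 above (the "/1"
is subclass I_MUTEX_PARENT from include/linux/fs.h), whereas lock #4 in
ovl_clear_empty() appears to be taken with plain mutex_lock() (subclass 0),
so the nested vfs_rmdir() on its child trips the check. A minimal sketch of
the usual VFS convention; the helper below is hypothetical and not the
actual overlayfs fix:

#include <linux/fs.h>
#include <linux/mutex.h>

/* Hypothetical illustration of the parent/child i_mutex convention. */
static void lock_parent_then_child(struct inode *parent, struct inode *child)
{
	/* Tag the parent with the I_MUTEX_PARENT subclass so lockdep
	 * knows the two acquisitions in the same class nest... */
	mutex_lock_nested(&parent->i_mutex, I_MUTEX_PARENT);

	/* ...and the plain mutex_lock() on the child (I_MUTEX_NORMAL,
	 * subclass 0), as done by vfs_rmdir(), is not reported as
	 * recursive locking. */
	mutex_lock(&child->i_mutex);
}

Presumably annotating the i_mutex acquisition in ovl_clear_empty() with the
appropriate subclass would be one way to silence this, but I'll leave the
actual fix to Miklos.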