Re: 2.6.31+2.6.31.4: XFS - All I/O locks up to D-state after 24-48 hours (sysrq-t+w available)

From: Justin Piszcz
Date: Tue Oct 20 2009 - 04:33:56 EST




On Tue, 20 Oct 2009, Dave Chinner wrote:

> On Mon, Oct 19, 2009 at 06:18:58AM -0400, Justin Piszcz wrote:
> > On Mon, 19 Oct 2009, Dave Chinner wrote:
> > > On Sun, Oct 18, 2009 at 04:17:42PM -0400, Justin Piszcz wrote:
> > > > It has happened again, all sysrq-X output was saved this time.
> > > .....
> > >
> > > All pointing to log IO not completing.
> > ....
> > So far I do not have a reproducible test case,
>
> Ok. What sort of load is being placed on the machine?

Hello, generally the load is low; it mainly serves out some samba shares.
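
(For reference, the sysrq-t/w dumps mentioned above can be captured from
userspace via the standard trigger file; the output filename below is just
an example:)

  # requires CONFIG_MAGIC_SYSRQ and root
  echo w > /proc/sysrq-trigger    # dump blocked (D-state) tasks
  echo t > /proc/sysrq-trigger    # dump all task states
  dmesg > sysrq-output.txt        # the traces land in the kernel log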


> > the only other thing not posted was the output of ps auxww during
> > the time of the lockup, not sure if it will help, but here it is:
> >
> > USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
> > root 1 0.0 0.0 10320 684 ? Ss Oct16 0:00 init [2]
> > ....
> > root 371 0.0 0.0 0 0 ? R< Oct16 0:01 [xfslogd/0]
> > root 372 0.0 0.0 0 0 ? S< Oct16 0:00 [xfslogd/1]
> > root 373 0.0 0.0 0 0 ? S< Oct16 0:00 [xfslogd/2]
> > root 374 0.0 0.0 0 0 ? S< Oct16 0:00 [xfslogd/3]
> > root 375 0.0 0.0 0 0 ? R< Oct16 0:00 [xfsdatad/0]
> > root 376 0.0 0.0 0 0 ? S< Oct16 0:00 [xfsdatad/1]
> > root 377 0.0 0.0 0 0 ? S< Oct16 0:03 [xfsdatad/2]
> > root 378 0.0 0.0 0 0 ? S< Oct16 0:01 [xfsdatad/3]
> > root 379 0.0 0.0 0 0 ? S< Oct16 0:00 [xfsconvertd/0]
> > root 380 0.0 0.0 0 0 ? S< Oct16 0:00 [xfsconvertd/1]
> > root 381 0.0 0.0 0 0 ? S< Oct16 0:00 [xfsconvertd/2]
> > root 382 0.0 0.0 0 0 ? S< Oct16 0:00 [xfsconvertd/3]
> > .....
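
(Side note: to pull just the suspect threads out of a full listing like the
one above, filtering on the STAT column, field 8 in ps auxww output, is
enough; nothing XFS-specific here:)

  # keep the header plus any task in D (uninterruptible) or R (running) state
  ps auxww | awk 'NR==1 || $8 ~ /^[DR]/'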

> It appears that both the xfslogd and the xfsdatad on CPU 0 are in
> the running state but don't appear to be consuming any significant
> CPU time. If they remain like this then I think that means they are
> stuck waiting on the run queue. Do these XFS threads always appear
> like this when the hang occurs? If so, is there something else that
> is hogging CPU 0 preventing these threads from getting the CPU?
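
(Next time it hangs I can also check what is actually sitting on CPU 0;
something like the below should show it, assuming this procps supports the
psr (last-ran-on processor) field:)

  # list threads by the CPU they last ran on; keep header plus CPU 0 entries
  ps -eLo pid,tid,psr,pcpu,stat,comm --sort=-pcpu | awk 'NR==1 || $3 == 0'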
Yes, the XFS threads show up like this each time the kernel locks up. So
far, after ~48+ hours, 2.6.30.9 has not crashed, so the issue appears to
have been introduced somewhere between 2.6.30.9 and 2.6.31.x. Any
recommendations on how to catch this bug, e.g. with certain debug options
enabled?
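
For what it's worth, one thing I can try on the next run, assuming this
kernel has CONFIG_DETECT_HUNG_TASK set, is lowering the hung-task detector
timeout so that tasks stuck in D state get a stack trace logged
automatically:

  # warn after 60 seconds in D state instead of the default 120;
  # traces show up in dmesg via khungtaskd
  echo 60 > /proc/sys/kernel/hung_task_timeout_secs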



> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@xxxxxxxxxxxxx
