Re: 1352 NUL bytes at the end of a page? (was Re: Assertion `s && s->tree' failed: The saga continues.)

From: Larry McVoy
Date: Mon May 17 2004 - 20:36:27 EST


On Mon, May 17, 2004 at 05:13:30PM -0700, Andrew Morton wrote:
> I guess it would be interesting to run it on a filesystem which has 2k or
> even 1k blocksize. If the corruption then terminates on a 2k- or
> 1k-boundary then that will rule out a few culprits.

Does anyone have a theory that accounts for the fact that the zeroed
section is always tail-aligned and always appears to be the same length?
The data looks like

[ good page ] [ GGGB ] [ more good pages ] [ GGGB ] etc.

where GGG is the first 2817 bytes of the page and B is the last 1279
(decimal) bytes (2817 + 1279 = 4096, i.e. exactly one page). That's just
too weird to be random, right?

> I'd really like to see this happen on some other machine though. It'd be
> funny if you have a dud disk drive or something.

We can easily rule that out. Steven, do a

dd if=/dev/zero of=USE_SOME_SPACE bs=1048576 count=500

which will consume 500 MB and should sweep over any bad blocks along the
way. I _really_ doubt it is a bad disk.

Steven, if you have a copy of lmbench then use lmdd from that like so

lmdd of=XXX opat=1

and that will write non-zero data to the disk. Then remove the file; if
we are getting random crud back from the disk, it won't show up as NULs.
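
If lmdd isn't handy, a rough stand-in for the write side is just filling
a scratch file with a known non-zero pattern. This is only a sketch of
that idea, not lmdd's implementation; the file name and the ~500 MB size
are assumptions, and lmdd is the better tool if you have it:

/*
 * Fill a scratch file with a repeating non-zero byte pattern so that
 * stale data later read back from those blocks cannot look like NULs.
 */
#include <stdio.h>
#include <stdlib.h>

#define CHUNK	1048576		/* 1 MB per write */
#define CHUNKS	500		/* ~500 MB total */

int main(void)
{
	unsigned char *buf;
	FILE *f;
	size_t i;
	int c;

	buf = malloc(CHUNK);
	if (!buf) {
		perror("malloc");
		return 1;
	}
	for (i = 0; i < CHUNK; i++)
		buf[i] = (unsigned char)(i % 255) + 1;	/* 1..255, never a NUL */

	f = fopen("USE_SOME_SPACE.pat", "wb");
	if (!f) {
		perror("fopen");
		free(buf);
		return 1;
	}
	for (c = 0; c < CHUNKS; c++) {
		if (fwrite(buf, 1, CHUNK, f) != CHUNK) {
			perror("fwrite");
			fclose(f);
			free(buf);
			return 1;
		}
	}
	fclose(f);
	free(buf);
	return 0;
}

Then rm the file, exactly as with lmdd: anything random the disk hands
back from those blocks afterwards should look like this pattern, not
runs of zero bytes.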
--
Larry McVoy lm at bitmover.com http://www.bitkeeper.com