2.6.x strangeness with large buffer usage via network transfer/disk and SEGV processes

From: Shawn Starr
Date: Sat Mar 27 2004 - 18:06:54 EST


I don't get something; maybe someone can explain why this is happening:

1) When a large amount of buffer memory is in use, say while sending an
800MB file from one PC to another, the Linux system will segfault processes
but not perform an OOM kill, even though the system itself has not touched
swap. Why is the kernel killing these processes, or why are they dying with
a segfault?
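One thing I did wonder about: could an unchecked allocation failure look
like this? Here's a minimal sketch of my own (not taken from any of the
affected programs) that dies with SIGSEGV instead of triggering the OOM
killer when malloc() fails under memory pressure:

#include <stdlib.h>
#include <string.h>

int main(void)
{
	/* Try to grab ~800MB in one shot; under memory pressure
	 * malloc() can return NULL rather than waking the OOM killer. */
	size_t len = 800UL * 1024 * 1024;
	char *buf = malloc(len);

	/* No NULL check -- writing through a NULL pointer then dies
	 * with a plain segmentation fault (and no core file with the
	 * usual ulimit -c 0 default). */
	memset(buf, 0, len);

	free(buf);
	return 0;
}

I'm not claiming that's what is actually happening here, just that it
would produce the same symptom.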

I see this happening when I extract a Linux source tarball while certain
processes are running: while tar extracts, a process will just receive a
segmentation fault without dumping core.

When I run a virtual OS emulator, the emulator simply dies.

I don't remember this behaviour in 2.4 at all, and I don't think it is
correct. I have PREEMPT enabled as well.

Is this a problem, or is this correct behaviour?

Thanks

Shawn S.
