RE: Recursive malloc crashing Linux. (Well almost)

John Ripley (john@empeg.com)
Sat, 18 Dec 1999 19:10:51 +0000 (GMT)


On Fri, 17 Dec 1999, David Schwartz wrote:

> > Allocating memory in a single 1GB chunk fails gracefully, but allocating
> > in 4MB chunks works until you access it.
>
> *Yawn* You've written a program that manages to kill itself by shooting
> itself in the foot. I can do that too:
>
> char *j=NULL;
> *j++;
>
> Yes, Linux will overcommit even when overcommitting is turned off. There
> really is no good way to solve this problem.
>
> DS

Yes, my example program is purposely a bit silly - but it demonstrates
that mmap() succeeds (and returns a VALID pointer) even when there isn't
really enough memory to back it.

But there are sensible programs that do handle running out of memory,
for example by freeing data that can be reread later. If such a program
is never told it has run out, it just carries on until it faults - and
this could be only a few 4KB pages beyond what's available:

- 8MB system, 6MB free (including buffer+cache etc)
- Allocate 3MB (succeeds)
- Allocate 3.01MB (succeeds)
- Use first 6MB for a few minutes (this works)
- Suddenly the algorithm touches the last 0.01MB: SIGBUS.

On a side note, even nastier behaviour occurs when mlockall(MCL_FUTURE)
is in effect: mmap() then pages in the whole mapping immediately, so the
SIGBUS arrives during the mmap call itself. So not only can a program
not determine how much memory is free, it'll SIGBUS if it guesses wrong.

I have an idea for solving this that involves keeping another page
count: anonymous mmap pages not yet faulted in - updated by mmap/munmap,
and decremented on each first-touch page fault. This way,
vm_enough_memory() can subtract it from the free page count and so take
into account mappings which have not yet been paged in. This would
probably have no measurable performance impact.

- John Ripley.

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/