Re: Out of memory kernel death

Chris Griffin (boozer@vanadium.rollins.edu)
Mon, 5 May 1997 13:34:02 -0400 (EDT)


On Mon, 5 May 1997, Christoph Lameter wrote:

> You should limit the memory an individual process can allocate during
> testing. See the ulimit command.
>
Well, I do usually use ulimits when many people have access to the
machine, but this one is all mine. I could use a ulimit, but I am really
trying to get at a bigger issue... What is the proper behavior of the
kernel when there isn't enough memory left? Even with sensible ulimit
sizes, it's very possible for many users to eat up all the memory.
Shouldn't a process be killed by the kernel if it tries to allocate more
memory than is available, rather than the machine crashing?

Chris

> On Mon, 5 May 1997, Chris Griffin wrote:
>
> > A small question?
> > How should Linux react when a run-away process attempts to grab
> > all the memory? I had such an occurrence today. Because of STUPID
> > PROGRAMMING ERROR Technology(tm) I had a job that went into an allocation
> > loop and grabbed all the memory. As soon as every little bit of swap was
> > used up, the system froze and I got a "kernel: unable to load
> > interpreter". Is this the proper/planned behavior in this case? I would
> > think either the process would die with an unable-to-get-free-page error,
> > or the kernel would oops with the same problem and reset. I am using
> > kernel 2.0.30 / libc5.3.12 on an SMP-enabled p6dnf 2x200MHz PPro system.
> > BTW: ps reported the process was in an uninterruptible sleep state (I suppose
> > because of the memory allocation loop) so I couldn't kill it :(
> >
> > thanks
> > Chris
> >
> >
>
> --- +++ --- +++ --- +++ --- +++ --- +++ --- +++ --- +++ ---
> Please always CC me when replying to posts on mailing lists.