Re: Simple DoS...out of resources?

Romano Giannetti (romano@dea.icai.upco.es)
Tue, 9 Mar 1999 13:45:47 +0100


I think this is (or should be...) a FAQ...

On Tue, Mar 09, 1999 at 12:00:04PM +0100, Mikulas Patocka wrote:

> Anyway, if the user takes less memory than
> ((amount of memory+swap)/number of active users), his processes won't get
> killed. And if he allocates more, he does it at his own risk.

I allocate 8M because there are 4 users, then another user logs in...
what do you do now? Kill processes at random :) until my memory share
goes back down to the "sane" level? No, this is a userland thing.

(0)pern:~/tex/PFC% ulimit -a
cpu time (seconds) unlimited
file size (blocks) unlimited
data seg size (kbytes) unlimited
stack size (kbytes) 8192
core file size (blocks) 0
resident set size (kbytes) unlimited
processes 256
file descriptors 1024
locked-in-memory size (kb) unlimited
virtual memory size (kb) unlimited

> In many cases allocation fails, but sometimes the process could get SIGBUS
> or something like it. But it makes no difference - programs die if they
> can't allocate memory; "better" programs write something like "out of
> memory, terminating", worse programs work with NULL pointer and crash on
> SIGSEGV.

Processes die because the out-of-memory situation occurs *at memory
use* time, not at *allocation* time. A daemon that wants to be
absolutely sure it will never be in such a situation should mlockall()
its memory, although that is overkill. Ah, by the way, from the
/usr/src/linux/Documentation/proc.txt file:

overcommit_memory
This file contains one value. The following algorithm is used to
decide if there's enough memory: if the value of overcommit_memory
is positive, then there's always enough memory. This is a useful
feature, since programs often malloc() huge amounts of memory 'just
in case', while they only use a small part of it. Leaving this
value at 0 will lead to the failure of such a huge malloc(), when
in fact the system has enough memory for the program to run.

On the other hand, enabling this feature can cause you to run out
of memory and thrash the system to death, so large and/or important
servers will want to set this value to 0.

...but it's quite a pain not to overcommit memory (and I do not think
this resolves the problem: I can always mmap(/dev/zero), methinks).

Bye,
Romano

-- 
Romano Giannetti, Professor  -  Univ. Pontificia Comillas (Madrid, Spain)
Electronic Engineer - phone +34 915 422 800 ext 2410  fax +34 915 596 569

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel"
in the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/