Re: How a normal user can crash any linux system (fwd)

From: Matthew Dillon (dillon@apollo.backplane.com)
Date: Wed Mar 22 2000 - 19:08:53 EST


:In reply:
:> This is cross-posted to both linux-kernel and freebsd-hackers, please
:> set your replies properly.
:>
:> > I found the following by accident playing with PVM. If you start the
:> > 'gexample' from the examples directory with dimension=10000 and no of
:> > tasks=32 on one machine, it becomes almost immediately completely un-
:> > usable and begins with heavy swapping. Considering how much memory
:> > would be necessary for this computation before starting it would have
:> > avoided the trouble.
:
:well, there are other ways to make a system slow to a crawl....
:
:a good preventative measure is to never give shell accounts unless
:everyone is accountable.
:
:#!/bin/csh
:/usr/games/primes 1 4294967295 >&/dev/null&
:/usr/games/primes 1 4294967295 >&/dev/null&
:/usr/games/primes 1 4294967295 >&/dev/null&
:...

    Generally speaking, if a user wants to crash a machine, he can crash
    a machine. We've probably 'fixed' a dozen crashability holes in the
    last year, but they come from a virtually unlimited supply. Short of
    being ridiculously draconian, there will always be a way to twist the
    resource limits you are given in order to take a machine down or DOS
    it into unusability. And resource limits do not cover every
    conceivable facet of the system anyway.

    For example, just before the 4.0 release it was found that one can
    easily run the kernel out of vm_map_entry resources by mmap()ing
    hundreds of thousands of tiny files. We added a vm.max_proc_mmap
    sysctl to control that. And it is still quite easy to bypass the
    datasize resource limit through the use of MAP_ANON mmap()s.
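
    To make that last point concrete, here is a minimal sketch of the
    idea: memory obtained with anonymous mmap() rather than brk()/malloc()
    heap growth may not be charged against the datasize limit. It assumes
    MAP_ANON is available (Linux also spells it MAP_ANONYMOUS); exactly
    what gets accounted against RLIMIT_DATA varies by OS and version, so
    treat it as an illustration, not a recipe:

    /*
     * Sketch: allocate memory via anonymous mmap() instead of the heap.
     * On systems where only brk()-grown memory is charged to RLIMIT_DATA,
     * these mappings slip past the 'datasize' limit.
     */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;
        size_t chunk = 16UL * 1024 * 1024;      /* 16MB per mapping */
        size_t total = 0;
        int i;

        getrlimit(RLIMIT_DATA, &rl);
        printf("datasize soft limit: %lu bytes\n",
               (unsigned long)rl.rlim_cur);

        for (i = 0; i < 64; i++) {
            void *p = mmap(NULL, chunk, PROT_READ | PROT_WRITE,
                           MAP_ANON | MAP_PRIVATE, -1, 0);
            if (p == MAP_FAILED)
                break;
            memset(p, 0, chunk);                /* actually touch the pages */
            total += chunk;
        }
        printf("obtained %lu bytes via MAP_ANON\n", (unsigned long)total);
        return 0;
    }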

    What resource limits are good at is preventing 'stupid user'
    mistakes. For example, at BEST we capped CGI exec time at 10 cpu
    seconds (otherwise our web servers would build up hundreds of broken
    CGI processes), but we made it a soft limit so users who know what
    they are doing can unlimit it. I imposed a 40 process and 64
    descriptor limit as reasonable defaults, but that still leaves 2560
    descriptors to a user trying to mess things up on purpose.
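
    The soft/hard distinction is the whole trick there. The following is
    a rough sketch (not the actual BEST setup) of what "unlimiting" a
    soft cap looks like from inside a program: an unprivileged process
    may raise its own soft limit up to, but not beyond, the hard limit.

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_CPU, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("cpu time: soft=%ld hard=%ld\n",
               (long)rl.rlim_cur, (long)rl.rlim_max);

        /* Raise the soft limit to the hard limit; no privilege needed. */
        rl.rlim_cur = rl.rlim_max;
        if (setrlimit(RLIMIT_CPU, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        printf("soft cpu limit is now %ld\n", (long)rl.rlim_cur);
        return 0;
    }

    From a csh shell the same thing is roughly 'unlimit cputime', which
    is exactly what a clueful CGI author would do.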

    Quotas are useful for preventing a user from accidentally filling up
    a disk partition. For example, I had a 100MB quota on /tmp. But
    quotas can be bypassed in a number of ways, such as excessive
    syslogging (the log files are owned by root, so the space never
    counts against the user's quota).
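
    As a hypothetical illustration of that particular bypass (the exact
    behavior depends on how syslogd is configured), any user can pump
    data through syslog(3) and have it land in root-owned files under
    /var/log:

    #include <syslog.h>

    int main(void)
    {
        int i;

        /* Each message is written by syslogd into root-owned log files,
         * so the disk it consumes is charged to root, not to the user. */
        openlog("quota-demo", LOG_PID, LOG_USER);
        for (i = 0; i < 100000; i++)
            syslog(LOG_NOTICE, "filler %d: %s", i,
                   "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx");
        closelog();
        return 0;
    }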

    Resource limits are useful for protecting against a purposeful DOS
    attack insofar as it is generally the script kiddies who don't know
    their asses from their fingers who are trying to crash the machines.
    There is no way to protect against someone who knows what they are
    doing.

    So, in conclusion, resource limits are absolutely necessary to keep
    a multi-user machine running smoothly, but that necessity has
    virtually nothing to do with preventing purposeful attacks.

                                                -Matt
