Re: Endless overcommit memory thread.

From: Grendel (grendel@vip.net.pl)
Date: Mon Mar 27 2000 - 13:19:28 EST


** On Mar 27, Linda Walsh scribbled:

> > > If you allow overcommit, you can have *either* the SEGFAULT behavior
> > > *or* the no-mem/locked-out behavior (process touches everything up
> > > front). If you disallow overcommit, you reduce the possible behaviors
> > > to 1. The problem is that, suppose the process allocates its memory --
> > > with 'overcommit' you have a system where it is hard to predict *what*
> > > will fail. Will it be a system process? A daemon? It's a low-integrity
> > > system because you can't figure out the behavior of failure in advance.
> > And what if the processes are prioritized with respect to memory?
> ---
> So...scenario...suppose we have a Compartmented Mode Computer that
> enforces Mandatory Access. Users who log in don't know about each other and
> users at different clearance levels can't access the data of those above
> them. The sysadmin may administer users' data but isn't rated to read or
> know about processes owned by those with higher sensitivity levels. How can
> anyone -- even the sysadmin -- know that the President was logged in writing his
The clearance levels can be mapped onto process privilege levels - with
respect to memory usage - so that the processes of the most privileged users
are ruled out of the 'kill on OOM' pool, simply marked "non-killable".
In a word, such processes are guaranteed not to be killed in that situation.
If the compartmented mode is to be completely secure, the machine should use
separate _hardware_ for every clearance level, or guaranteed separation of
VMs - am I correct? The hypervisor model Alan has told us about before would
be quite suitable here, but is there any chance it could work on Linux in the
foreseeable future?
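
To make this concrete, here is a minimal user-space sketch of the kind of
victim selection I mean -- the "nokill" flag and the "biggest process loses"
rule are invented for illustration, they are not what the kernel actually
does:

#include <stdio.h>

/* Illustrative stand-in for a process descriptor; the `nokill'
 * flag is made up for this sketch, not a real kernel field. */
struct proc {
	const char *name;
	unsigned long rss_pages;	/* resident memory, in pages */
	int nokill;			/* 1 = exempt from the OOM killer */
};

/* On OOM, pick the biggest memory user among the processes that
 * are *not* marked non-killable; exempt ones are never candidates. */
static struct proc *select_oom_victim(struct proc *tab, int n)
{
	struct proc *victim = NULL;
	int i;

	for (i = 0; i < n; i++) {
		if (tab[i].nokill)
			continue;
		if (victim == NULL || tab[i].rss_pages > victim->rss_pages)
			victim = &tab[i];
	}
	return victim;			/* NULL if every process is exempt */
}

int main(void)
{
	struct proc tab[] = {
		{ "editor (prez)", 90000, 1 },	/* huge, but top clearance */
		{ "daemon",         2000, 0 },
		{ "simulation",    40000, 0 },
	};
	struct proc *v = select_oom_victim(tab, 3);

	printf("victim: %s\n", v ? v->name : "(none)");
	return 0;
}

The only interesting policy question then is who gets to set the flag;
tying it to the clearance levels would be natural.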

> latest top-secret whatchamajigger. Now Mr. President doesn't know about
> memory, but tries to read a very large and secret file into an edit buffer.
> The viewer allocates a lot of space to hold the document in memory -- and
> it triggers *the memory killer*. "Prez" ain't running as SYSADMIN, he
> hasn't been running long, has low CPU value...is maybe the largest process
> on the system. Bang -- real good candidate for being an evil memory hog
> targeted for termination by the auto-killer. Um...gee, sorry Mr. Prez. But
> let's say that in reading his large file, the "reader" program is smart
> enough to know..."hmmm, I got a NULL pointer from that malloc, maybe I'll
> page the rest of the file in from disk later". ...
Combining the clearance levels, appropriate capabilities (as you proposed
before) and memory access priorities could guarantee that Mr. President
won't run out of memory :)
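
And such a "smart reader" is straightforward to write, provided malloc() is
actually allowed to return NULL instead of the kernel overcommitting and
shooting the process later. A rough sketch (error handling mostly omitted;
load_file and the fallback strategy are mine, not from any real viewer):

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

/* Try to slurp the whole file into an edit buffer; if malloc()
 * honestly reports that the memory isn't there, fall back to
 * mmap() and let the kernel page the file in on demand. */
static const char *load_file(const char *path, size_t *len)
{
	struct stat st;
	char *buf;
	int fd = open(path, O_RDONLY);

	if (fd < 0)
		return NULL;
	if (fstat(fd, &st) < 0) {
		close(fd);
		return NULL;
	}
	*len = st.st_size;

	buf = malloc(st.st_size);
	if (buf != NULL) {
		if (read(fd, buf, st.st_size) != st.st_size) {
			free(buf);	/* short read: treat as failure */
			buf = NULL;
		}
		close(fd);
		return buf;
	}

	/* Plan B: no anonymous memory to be had -- map the file
	 * instead, so its pages can be dropped and re-read from
	 * disk under memory pressure. */
	buf = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
	close(fd);
	return buf == MAP_FAILED ? NULL : buf;
}

int main(int argc, char **argv)
{
	size_t len;
	const char *p = argc > 1 ? load_file(argv[1], &len) : NULL;

	if (p != NULL)
		printf("loaded %lu bytes\n", (unsigned long)len);
	return p != NULL ? 0 : 1;
}

The whole point of the thread, of course, is that under overcommit the
malloc() above almost never fails -- the failure arrives later, as a signal
or as the OOM killer.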

> The point is you don't necessarily want to hard-code a memory "policy"
> that kills processes. An optional feature? Maybe yes... But default behavior?
Of course not as the default, and it doesn't have to be hardcoded - the
scheme can be quite flexible and tunable at runtime. Either memory quotas,
or a priority system with one priority level reserved for critical processes
that must never be killed on OOM, could work, I suppose...
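For instance, the tuning interface could be as small as one file per process
under /proc. A sketch, assuming a hypothetical /proc/<pid>/oom_prio -- no
such file exists, it is only to show the shape of the interface:

#include <stdio.h>

/* Hypothetical runtime knob: write a per-process OOM priority
 * through /proc.  /proc/<pid>/oom_prio does NOT exist in any
 * kernel -- it is made up here to show how an admin could tune
 * the policy at runtime without recompiling anything. */
static int set_oom_priority(int pid, int prio)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%d/oom_prio", pid);
	f = fopen(path, "w");
	if (f == NULL)
		return -1;
	fprintf(f, "%d\n", prio);
	return fclose(f);
}

int main(void)
{
	/* e.g. mark init (pid 1) as exempt, everything else default */
	if (set_oom_priority(1, -1) != 0)
		perror("set_oom_priority");
	return 0;
}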

> Yuck. If we want Linux to become a "most trusted OS", having the kernel
> automatically take unrequested actions on user programs isn't a great idea.
Yup, it definitely isn't.

marek





