Re: fork: out of memory

Rik van Riel (H.H.vanRiel@fys.ruu.nl)
Tue, 25 Nov 1997 17:30:18 +0100 (MET)


On Tue, 25 Nov 1997, David E. Fox wrote:

> > 1) Slab allocator.
>
> OK, what is a slab? I figure it is a chunk of RAM that is used to
> serve pages from.

Yes, there are lots of places in the kernel where a 64- or 128-byte
piece of memory is needed, and having each driver allocate a full
page for a few bytes is a total waste of memory...
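
To make the idea concrete, here is a rough userspace sketch of what a
slab does: carve one page-sized chunk into equal small objects kept on
a free list, so a 64-byte allocation doesn't consume a whole page. The
names and layout here are made up for illustration; they are not the
actual kernel API.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative only: a single "slab" backed by one page-sized chunk,
 * split into fixed-size objects threaded onto a free list. */
#define PAGE_SIZE 4096

struct slab {
    unsigned char mem[PAGE_SIZE];
    void *free_list;     /* singly linked list threaded through free objects */
    size_t obj_size;
};

static void slab_init(struct slab *s, size_t obj_size)
{
    size_t n = PAGE_SIZE / obj_size;  /* e.g. 64 objects of 64 bytes */
    s->obj_size = obj_size;
    s->free_list = NULL;
    for (size_t i = 0; i < n; i++) {
        void *obj = s->mem + i * obj_size;
        *(void **)obj = s->free_list;  /* push object onto the free list */
        s->free_list = obj;
    }
}

static void *slab_alloc(struct slab *s)
{
    void *obj = s->free_list;
    if (obj)
        s->free_list = *(void **)obj;  /* pop the first free object */
    return obj;
}

static void slab_free(struct slab *s, void *obj)
{
    *(void **)obj = s->free_list;      /* push the object back */
    s->free_list = obj;
}
```

With 64-byte objects, one page serves 64 allocations instead of one.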

> > It needs 8!!! contiguous pages to allocate one files_struct in
> > files_cachep cache. Well, other 6 (or 5?) allocations will be fast
>
> Ouch. Well, one would think that would be a simple thing; even on a
> heavily(?) used system it's conceivable that there would be 8 pages
> in a row that could be used.

Of course there are; usually there are more than eight such
runs available even on a heavily used system... Now try spawning
16 or more processes :)
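
To see why lots of free pages doesn't guarantee an 8-page run, here is
a made-up userspace illustration (the bitmap and function name are
mine, not the kernel's): half of all page frames can be free, yet an
order-3 allocation of 8 contiguous pages is still impossible when the
free pages are interleaved with used ones.

```c
#include <assert.h>

/* Illustrative only: scan a used/free bitmap of page frames and
 * report the longest run of contiguous free pages. */
#define NPAGES 1024

static int longest_free_run(const unsigned char *used, int npages)
{
    int best = 0, run = 0;
    for (int i = 0; i < npages; i++) {
        run = used[i] ? 0 : run + 1;   /* reset on a used page */
        if (run > best)
            best = run;
    }
    return best;
}
```

In the worst case of every other page being used, 50% of memory is
free but the longest free run is a single page.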

> restarted that make and it ran to completion. Perhaps some "coalescing"
> is done to refragment pages, since all that compile stuff could
> easily fragment the memory.

No, it isn't. Zlatko's patch helps in that it replaces
heavy fragmentation with extra swapping...

> If not, then the cannot fork: message would
> get worse over time, which is something I have not yet observed.

After a while of heavy usage (say 15 minutes) your system
reaches a state of equilibrium where chunks are
allocated and _released_ all the time.

> > BTW, right now, there are 46 processes on my machine. None of
> > them uses more than 64 fds. Even such a bloated program as Netscape
> > uses only 26. To allocate all the needed files_structs, the kernel
> > used more than 200KB of nonpageable memory which is mostly (95%)
> > unused!!!
>
> Your ideas seem OK, but I'm not anywhere close to being a
> kernel hacker. :( But such an observation, if true, really needs to
> be fixed asap.

I think some people are already working on it.
The problem, however, is not that the 200K is unused, but
that the kernel is allocating large chunks of memory
while memory tends to get too fragmented for that
over time...

> Question, though: if files_struct is using non-
> pageable memory, it is using kernel (ring 0) memory, correct, which
> is contiguous by definition?

Worse, kernel memory _needs_ to be physically contiguous, even when
there aren't any contiguous areas available for allocation... This is
the source of a lot of problems.

Rik.

----------
Send Linux memory-management wishes to me: I'm currently looking
for something to hack...