Re: fork: out of memory

Bill Hawes (whawes@star.net)
Mon, 24 Nov 1997 23:22:51 -0500


Zlatko Calusic wrote:
> Proposition:
>
> struct files_struct is currently defined like:
>
> struct files_struct {
>         int count;
>         fd_set close_on_exec;
>         fd_set open_fds;
>         struct file * fd[NR_OPEN];
> };
>
> Maybe it would be a wise idea to use a few pointers instead of
> fd[NR_OPEN]. Each pointer would point to a smaller table of, say,
> 64 file descriptors and would be allocated as needed. The first such
> table would be in files_struct itself.
>
> E.g.
>
> struct files_struct {
>         int count;
>         fd_set close_on_exec;
>         fd_set open_fds;
>         struct file * fd[64];  /* 64 instead of 1024 */
>         struct file ** fd_a;   /* allocate as necessary */
>         struct file ** fd_b;
>         struct file ** fd_c;
>         struct file ** fd_d;
>         ...
> };
>
> This could be modified so that fd_a, fd_b, ... point to progressively
> bigger tables, so fewer allocations are needed (something like 64,
> 128, 256, 512, all summing to NR_OPEN).
>
> I would like to hear a second opinion on this!!!
>

This sounds like a good idea to me, if there's no fundamental reason why
it won't work. I would suggest, though, that rather than having multiple
pointers, you just have one pointer to an allocated array and dynamically
expand it when needed (i.e. kmalloc a new array of twice the size; if
that succeeds, copy the pointers from the old array to the new one and
free the old). With this approach, all of the pointers would always be in
the same array, so accessing the files would be similar to the current code.
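A rough userspace sketch of what I mean (using malloc/free in place of
kmalloc/kfree; the field and function names here are illustrative, not
actual kernel code):

```c
#include <stdlib.h>
#include <string.h>

struct file;                      /* opaque for this sketch */

struct files_struct {
        int count;
        int max_fds;              /* current capacity of fd[] */
        struct file **fd;         /* dynamically sized pointer array */
};

/* Grow fd[] by doubling until it can hold index nr.  Returns 0 on
 * success, -1 if allocation fails (the old array is left intact). */
static int expand_fd_array(struct files_struct *files, int nr)
{
        int new_size = files->max_fds ? files->max_fds : 64;
        struct file **new_fd;

        while (new_size <= nr)
                new_size *= 2;

        new_fd = malloc(new_size * sizeof(*new_fd));
        if (!new_fd)
                return -1;

        /* Copy the old pointers across, zero the new tail, free old. */
        if (files->fd)
                memcpy(new_fd, files->fd,
                       files->max_fds * sizeof(*new_fd));
        memset(new_fd + files->max_fds, 0,
               (new_size - files->max_fds) * sizeof(*new_fd));
        free(files->fd);
        files->fd = new_fd;
        files->max_fds = new_size;
        return 0;
}
```

Failure leaves the old array valid, so an over-limit open can just return
an error without disturbing the task's existing descriptors.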

From your analysis it seems like the initial array would seldom need to
be expanded.

Regards,
Bill