Re: How to increse [sic.] max open files?
Fri, 3 Jan 1997 13:56:25 -0600 (CST)

While I agree that many programs do not need to have this many fd's open,
a few do. Test software such as that mentioned before would benefit
greatly from having more fd's available.

I don't think we need to pick the "right" number; I think we need to pick
a more usable one. Whether it is good design or not, some software wants
this many fd's, and will not function without them. I won't go into how it
could be redesigned to use fewer, or to use multiple processes; for some
people, it isn't practical to rewrite software that works. This limit may
also present problems for commercial developers. On some platforms, they
simply port their code that uses many fd's. When they come to Linux and
see the restriction, they may not feel it is worth their time and money to
redesign their software around it.

What can we do? Well, I think that in the next production release of the
kernel and libc there should be some way of adjusting this. Whether it is
a /proc interface or a define in the kernel source, there should be some
way to change this limit.

If Linux is to be viable in a commercial environment, it must be able to
do what the other major unices do. One thing many of them can do is
adjust their number of fd's, or they ship with a default high enough for
their applications.
I think that this is a long-term goal, not something to hack out once and
hope it works. It will require coordination between kernel developers,
libc developers, and application developers. It's a big project, but one
that should be done.

I would also like to thank Linus and everybody here for their great work
so far. I use Linux exclusively at home, and run many machines on it at
work. I'm now sitting at a Sparc IPX running Linux and X. This machine
was unusable with Solaris, as it was so slow; with Linux, it runs great.
Thank you all for your time and hard work.

Paul Frieden