Re: Number of open files.

David Luyer (luyer@ucs.uwa.edu.au)
Tue, 16 Jun 1998 17:05:27 +0800


> Hi,
>
>
> On HP-UX I can set the number of open files and the number of file locks
> (nflocks) to any arbitrary size (8000 & 4000).
>
> How can I do it on a linux-box?

Tune the files in /proc/sys/kernel to set file-max and inode-max. The system
I run here gets to around 200 users and is a PPro-200 w/512M RAM.
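For example, on a 2.0.x kernel you can check and raise these at runtime with
something like the following (the values are the ones I suggest further down;
put the echos in an rc script if you want them to survive a reboot):

  # check the current limits
  cat /proc/sys/kernel/file-max /proc/sys/kernel/inode-max
  # raise them
  echo 5120  > /proc/sys/kernel/file-max
  echo 32768 > /proc/sys/kernel/inode-max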

> How does a 'standard' linux-box perform with, say, 200 users? Feedback
> very welcome ...

I'm assuming 200 simultaneous users, and say 7,000 or so total users. If
you meant 200 total users then, hey, that's nothing; it shouldn't need
half the tuning I describe here and would probably run fine out of the box.

A 'standard' Debian installation didn't do very well when I first installed it.
If you're going to have heaps of users on the system, RedHat probably isn't
a good choice either (too much SUID stuff tends to sneak in instead of using
groups to control access to devices). But if you profile it a little and
optimize the things which are taking up CPU, it performs quite well.
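Two quick things worth doing on a fresh install - see what SUID binaries the
distribution snuck in, and see what's actually chewing CPU once users are on.
Just a sketch; neither is sophisticated profiling:

  # list every set-uid binary (candidates for chmod'ing down to group access)
  find / -type f -perm -4000 -ls
  # see which processes are eating CPU right now (%CPU is column 3 of ps aux)
  ps aux | sort -nrk 3 | head -20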

Important things include making sure your version of init doesn't sync() in
its main loop and using a password DBM rather than a flat file (especially
important for finger). A flat 512k file (8 bytes per uid) mapping uids to
usernames (seek to uid*8, read 8 bytes, null-terminate) can also be a major
plus if you hack things like "ls" to take notice of it (and your
adduser/vipw/... tools to maintain it); a sketch of the lookup is below.

The kernel, as distributed, needs something like the following to run on a
large system:
inode-max = 32768
file-max = 5120
NR_TASKS = 1024 (linux/tasks.h)
MAX_TASKS_PER_USER = 128 (linux/tasks.h)

Oh, and if you're running 2.0.34, SMP was a MAJOR slowdown for a machine with
200 interactive users; the system I run here used to be 2 * PPro 200 but is
far faster as 1 * PPro 200. Buy a second machine, put in a dedicated 100M
inter-machine link and use NFS; don't bother trying to use SMP, since most
heavily multiuser systems are highly I/O-bound.
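If you go the two-machine route it's nothing exotic - home directories live on
one box and are NFS-mounted on the box users log in to, over the dedicated
link. A sketch (hostnames and paths made up):

  # on the file server, in /etc/exports:
  #   /usr/users   shellbox(rw,no_root_squash)
  # on the login box:
  mount -t nfs -o rw,hard,intr fileserver:/usr/users /usr/users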

Another important thing is to make sure you have at least 256 ptys/ttys; most
distributions don't create beyond 64. If you're not using Debian and you use
screen, you will probably find you have to recompile it to recognize the extra
ttys (I got the Debian screen maintainer to hardcode it to search for 256 even
if only 64 are found on the machine it's compiled on).
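Creating the extra ptys is just a pile of mknods - on 2.0.x the BSD-style
master is char major 2 and the slave major 3, and banks p-z plus a-e give
16*16 = 256 pairs. A sketch (your distribution's MAKEDEV script may do the
same job):

  minor=0
  for bank in p q r s t u v w x y z a b c d e; do
      for i in 0 1 2 3 4 5 6 7 8 9 a b c d e f; do
          mknod /dev/pty$bank$i c 2 $minor 2>/dev/null  # "File exists" errors
          mknod /dev/tty$bank$i c 3 $minor 2>/dev/null  # for the first 64 are harmless
          minor=`expr $minor + 1`
      done
  done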

There's a heap of other stuff I've done to my local system and kernel, like
ripping out all of the /dev/random-related stuff, the immutable/append-only
support, and other things (just generally cutting unused code out of
frequently used code paths). It gives you some speedup, but unless you're
really hitting the machine it's not worth it.

(If you use multiple BusLogics on different interrupts you may want to
modify the driver to map the interrupt to a device - I never saw much gain
from this though.)

Also, in 2.0.x you just have to tier everything (split big directories into
subdirectories, e.g. by the first letter of the name), or the filesystem
can't handle it - directory lookups are linear, so huge flat directories get
very slow. It's just like having to tier /news/control/cancel on a news
system, or the system dies horribly. For example:

/var/spool/mail/l/luser
/var/spool/TxT/l/luser
/usr/users/l/luser

I don't do the first of these on the system I've got set up here right now,
and it's a pain. The next generation of the system will.
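Setting the tiers up is mundane - make the one-letter directories, move the
entries in, and point everything (passwd home dirs, the MTA's idea of the
mail spool, ...) at the new paths. A sketch for home directories (paths as
above):

  cd /usr/users
  for l in a b c d e f g h i j k l m n o p q r s t u v w x y z; do
      mkdir -p $l
  done
  # then e.g.  mv luser l/luser  and fix the home dir field in /etc/passwd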

The TxT stuff is a good idea - if people only use pine and IMAP to access
their mail, they can use a TxT-format inbox, which saves memory and CPU since
it can be handled a lot better than 'BSD format' mailboxes can.

There should be a FAQ covering some of this stuff on the web somewhere.
Probably dozens of them. If anyone maintains one of them and wants some
detail or reasons behind some of my comments, email me.

David.

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu