Re: slow open() calls and o_nonblock

From: Aaron Wiebe
Date: Mon Jun 04 2007 - 10:39:31 EST


Sorry for the unthreaded responses; I wasn't cc'd here, so I'm
replying based on the mailing list archives.

Al Viro wrote:

> BTW, why close these suckers all the time? It's not that kernel would
> be unable to hold thousands of open descriptors for your process...
> Hash descriptors by pathname and be done with that; don't bother with
> close unless you decide that you've got too many of them (e.g. when you
> get a hash conflict).
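
For illustration, a minimal sketch of that pathname-hash idea might look
like the following; the table size, hash function, and names here are
arbitrary placeholders, not anyone's actual code:

/* Pathname-keyed descriptor cache: hash the path, reuse an already-open
 * fd, and only close when a slot collides. Table size and hash function
 * are arbitrary choices for illustration. */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#define FD_CACHE_SLOTS 131072   /* power of two, > expected file count */

struct fd_slot {
	char *path;     /* strdup()ed pathname, NULL if slot unused */
	int   fd;
};

static struct fd_slot fd_cache[FD_CACHE_SLOTS];

static unsigned long hash_path(const char *p)
{
	unsigned long h = 5381;            /* djb2 string hash */
	while (*p)
		h = h * 33 + (unsigned char)*p++;
	return h & (FD_CACHE_SLOTS - 1);
}

/* Return a cached fd for path, opening (and caching) it if needed.
 * On a hash collision the old entry is closed and evicted. */
int cached_open(const char *path, int flags, mode_t mode)
{
	struct fd_slot *s = &fd_cache[hash_path(path)];

	if (s->path && strcmp(s->path, path) == 0)
		return s->fd;              /* already open, reuse it */

	if (s->path) {                     /* collision: evict old entry */
		close(s->fd);
		free(s->path);
		s->path = NULL;
	}

	int fd = open(path, flags, mode);
	if (fd < 0)
		return -1;

	s->path = strdup(path);
	s->fd = fd;
	return fd;
}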

A valid point - I currently keep a pool of 4000 descriptors open and
cycle them out based on inactivity. I hadn't seriously considered
just keeping them all open, because I simply wasn't sure how well
things would go with 100,000 files open. Would my backend storage
keep up, and would the kernel mind maintaining 100,000 open files
over NFS?
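
Roughly, the kind of inactivity-based pool described above could be
sketched like this (purely illustrative; the names, pool size, and the
linear scan are just to keep the example short, not the actual code):

/* Fixed pool of open descriptors, evicting the least recently used
 * entry when the pool is full. A slot with an empty path is free. */
#include <fcntl.h>
#include <limits.h>
#include <string.h>
#include <sys/types.h>
#include <time.h>
#include <unistd.h>

#define POOL_SIZE 4000

struct pool_entry {
	char   path[PATH_MAX];
	int    fd;
	time_t last_used;
};

static struct pool_entry pool[POOL_SIZE];

int pool_open(const char *path, int flags, mode_t mode)
{
	int free_slot = -1, oldest = 0;

	for (int i = 0; i < POOL_SIZE; i++) {
		if (pool[i].path[0] == '\0') {     /* free slot */
			if (free_slot < 0)
				free_slot = i;
			continue;
		}
		if (strcmp(pool[i].path, path) == 0) {
			pool[i].last_used = time(NULL);  /* hit: reuse fd */
			return pool[i].fd;
		}
		if (pool[i].last_used < pool[oldest].last_used)
			oldest = i;
	}

	if (free_slot < 0) {               /* pool full: evict the idlest */
		close(pool[oldest].fd);
		pool[oldest].path[0] = '\0';
		free_slot = oldest;
	}

	int fd = open(path, flags, mode);
	if (fd < 0)
		return -1;

	strncpy(pool[free_slot].path, path, PATH_MAX - 1);
	pool[free_slot].path[PATH_MAX - 1] = '\0';
	pool[free_slot].fd = fd;
	pool[free_slot].last_used = time(NULL);
	return fd;
}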

The majority of the files would simply be idle - I would be keeping
file handles open for no reason. Pooling lets me substantially drop
the number of opens I need, but I'm hesitant to push the pool size
much higher. Can anyone shed light on issues that might come up with
a massive pool size, such as 128k?
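
One obvious issue is the per-process descriptor limit: a 128k pool
needs the soft RLIMIT_NOFILE raised at least that high (the hard limit
may need privileges to raise, and the system-wide fs.file-max sysctl
has to cover the total). A small, assumption-laden check, using an
illustrative target of 131072:

/* Verify the process can hold ~128k descriptors by raising the soft
 * RLIMIT_NOFILE toward the requested value (capped at the hard limit). */
#include <stdio.h>
#include <sys/resource.h>

static int ensure_fd_limit(rlim_t wanted)
{
	struct rlimit rl;

	if (getrlimit(RLIMIT_NOFILE, &rl) < 0)
		return -1;

	if (rl.rlim_cur >= wanted)
		return 0;                  /* already high enough */

	rl.rlim_cur = wanted < rl.rlim_max ? wanted : rl.rlim_max;
	if (setrlimit(RLIMIT_NOFILE, &rl) < 0)
		return -1;

	return rl.rlim_cur >= wanted ? 0 : -1;
}

int main(void)
{
	if (ensure_fd_limit(131072) < 0)
		perror("could not raise RLIMIT_NOFILE to 128k");
	else
		printf("descriptor limit is sufficient for a 128k pool\n");
	return 0;
}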

-Aaron