I get the feeling it gets swapped _out_ but not back in again within a reasonable time period. I am no seasoned kernel hacker and not very knowledgeable in the vmm stuff, but to me the concept of wanting to have free memory available for a program starting up is good, but what happens here is that _far_too_much_ code/data is being swapped out. If I sit and write a document in xemacs for an hour and then switch workspace to an xdvi, it shouldn't take ten seconds before xdvi is alive because it has to swap everything back in. Sure, the program is "idle" and therefore a candidate for swapping out, but unless there is a good reason for doing so I think it should be left in memory. Having 50 MB of free RAM just in case I would want to start a big program seems senseless.
It hurts the responsiveness of the system enormously, completely out of proportion to the good it does.
One thing that intuitively strikes me (that might well be wrong) is that there should be some limit, as a percentage of total RAM, on how much gets swapped out for this purpose. Or a _lower_ such limit. I think 75% free must be unnecessary on most systems. Perhaps a file somewhere in /proc could be used to adjust the behavior?
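For what it's worth, later kernels did grow exactly this kind of knob: the vm.swappiness sysctl, a 0-100 value controlling how aggressively the VM swaps out idle pages in favor of cache/free memory. No such tunable existed when this was written, so the following is only a sketch of what the suggested /proc interface looks like on a modern kernel:

```shell
#!/bin/sh
# Read the current swap-aggressiveness tunable (0 = avoid swapping
# application pages, 100 = swap aggressively to keep memory free).
# Note: vm.swappiness is a later kernel's knob, shown here only as an
# example of the /proc-adjustable behavior proposed above.
cat /proc/sys/vm/swappiness

# To make the VM prefer keeping idle applications (like that xemacs)
# resident, a lower value could be written -- requires root, and the
# value 20 is just an illustrative choice:
#   echo 20 > /proc/sys/vm/swappiness
```

The same setting can be made persistent via /etc/sysctl.conf (`vm.swappiness = 20`) on systems that support it.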
--
Jakob Borg <jb@k2.lund.se>  Finger for PGP key or info
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel"
in the body of a message to majordomo@vger.rutgers.edu