Re: ~500 megs cached yet 2.6.5 goes into swap hell

From: Paul Jackson
Date: Thu Apr 29 2004 - 16:24:15 EST


> How on earth is the kernel supposed to know that for this one particular
> job you don't care if it takes 3 hours instead of 10 minutes,

I'd pay ten bucks (yeah, I'm a cheapskate) for an option I could
twiddle that would mark my nightly updatedb and backup jobs as ones to
use a reduced memory footprint (both for file caching and for backing
user virtual address space), even if they took much longer.
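
For the file-caching half of that, one partial knob already exists in
2.6: posix_fadvise(). A backup-style reader can tell the kernel it
won't want a file's pages again after reading them (only really helps
for pages that are already clean, which backup reads are). A rough
sketch - filename and buffer size are made up, error handling mostly
skipped:

    /* Read a file for backup without leaving it parked in the page
     * cache.  Sketch only.
     */
    #define _XOPEN_SOURCE 600       /* for posix_fadvise() */
    #include <fcntl.h>
    #include <unistd.h>

    static void backup_read(const char *path)
    {
            char buf[64 * 1024];
            int fd = open(path, O_RDONLY);

            if (fd < 0)
                    return;
            while (read(fd, buf, sizeof(buf)) > 0)
                    ;       /* ... feed buf to the backup stream ... */

            /* Tell the kernel we won't need these pages again, so it
             * can reclaim them before anyone else's working set. */
            posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
            close(fd);
    }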

So, rather than protest in mock outrage that it's impossible for the
kernel to know this, answer the question as stated in all
seriousness ... well ... how _could_ the kernel know, and what _could_
the kernel do if it knew? What mechanism(s) would be needed so that
the kernel could restrict a job's memory usage?
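
The closest existing mechanism I can think of is setrlimit():
RLIMIT_AS caps a process's address space (RLIMIT_RSS is accepted but,
as far as I know, not actually enforced by 2.6), and neither says
anything about the page cache the job drags in. A sketch, with the
64 MB figure pulled out of thin air:

    /* Cap this process (and whatever it execs) at 64 MB of address
     * space.  Sketch only: RLIMIT_AS limits mappings, not the page
     * cache the job churns through.
     */
    #include <sys/resource.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
            struct rlimit rl = {
                    .rlim_cur = 64UL << 20,
                    .rlim_max = 64UL << 20,
            };

            if (argc < 2)
                    return 1;
            if (setrlimit(RLIMIT_AS, &rl) != 0) {
                    perror("setrlimit");
                    return 1;
            }
            execvp(argv[1], &argv[1]);      /* run the updatedb/backup job */
            perror("execvp");
            return 1;
    }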

Heh - indeed perhaps the answer is closer than I realize. For SGI's big
NUMA boxes, managing memory placement is sufficiently critical that we
are inventing or encouraging ways (such as Andi Kleen's numa stuff) to
control memory placement per node per job. Perhaps this needs to be
extended to portions of a node (this job can only use 1 GB of the memory
on that 2 GB node) and to other memory uses (file cache, not just user
space memory).
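
For the placement half, the mempolicy syscalls from Andi's patchset
already let a job pin its anonymous memory to a node; it's the "only
this much of that node" and "file cache too" parts that are missing.
Assuming libnuma's <numaif.h> is installed (link with -lnuma), a sketch
of binding the current job to node 1:

    /* Bind this job's future memory allocations to node 1 using the
     * mempolicy syscall from the NUMA API.  Sketch only: says nothing
     * about how much of the node may be used, or about the page cache.
     */
    #include <numaif.h>
    #include <stdio.h>

    int main(void)
    {
            unsigned long nodemask = 1UL << 1;      /* node 1 only */

            /* MPOL_BIND: allocate strictly from nodes in the mask */
            if (set_mempolicy(MPOL_BIND, &nodemask,
                              sizeof(nodemask) * 8) != 0) {
                    perror("set_mempolicy");
                    return 1;
            }
            /* ... run the memory-hungry part of the job here ... */
            return 0;
    }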

--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj@xxxxxxx> 1.650.933.1373