> Does it always just go right now and free it, or is there any real
> scanning that has to go on?

It usually removes one page.

If mmu_shrink() has already done a significant amount of
scanning, the use of 'nr_to_scan' inside shrink_kvm_mmu()
will also ensure that we do not over-reclaim when we have
already done a lot of work in this call.

In the end, this patch defines a "scan" as:
1. An attempt to acquire a refcount on a 'struct kvm'
2. Freeing a kvm mmu page

It would probably be ideal if we could also expose some of
the work done by kvm_mmu_remove_some_alloc_mmu_pages() as
counting toward scanning, but I think we have churned
enough for the moment.

> What tree are you patching?

These applied to Linus's latest as of yesterday.

diff -puN arch/x86/kvm/mmu.c~make-shrinker-more-aggressive arch/x86/kvm/mmu.c
--- linux-2.6.git/arch/x86/kvm/mmu.c~make-shrinker-more-aggressive 2010-06-14 11:30:44.000000000 -0700
+++ linux-2.6.git-dave/arch/x86/kvm/mmu.c 2010-06-14 11:38:04.000000000 -0700
@@ -2935,8 +2935,10 @@ static int shrink_kvm_mmu(struct kvm *kv
idx = srcu_read_lock(&kvm->srcu);
spin_lock(&kvm->mmu_lock);
- if (kvm->arch.n_used_mmu_pages > 0)
- freed_pages = kvm_mmu_remove_some_alloc_mmu_pages(kvm);
+ while (nr_to_scan > 0 && kvm->arch.n_used_mmu_pages > 0) {
+ freed_pages += kvm_mmu_remove_some_alloc_mmu_pages(kvm);
+ nr_to_scan--;
+ }