Re: [kvm-devel] [PATCH 6/13] KVM: memory slot management

From: Anthony Liguori
Date: Fri Oct 27 2006 - 11:47:54 EST


Avi Kivity wrote:
>>> 2. The next mmu implementation, which caches guest translations.
>>>
>>> The potential problem above now becomes acute. The guest will have kernel mappings for every page, and after a short while they'll all be faulted in and locked. This defeats the swap integration, which is IMO a very strong point.
>>>
>>> We can work around that by periodically forcing out translations (some kind of clock algorithm) at some rate so the host VM can have a go at them. That can turn out to be expensive, as we'll need to interrupt all running vcpus to flush (real) TLB entries.
>>
>> Don't understand. Can't one CPU cause a TLB entry to be flushed on all CPUs?


> It's not about TLB entries. The shadow page tables collapse a GV -> HV -> HP double translation into a GV -> HP page table. When the Linux VM goes around evicting pages, it invalidates those mappings.
>
> There are two possible solutions: lock the pages which participate in these translations (and their number can be large), or modify the Linux VM to consult a reverse mapping and remove the translations (in which case TLB entries need to be removed as well).
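
For the second option, I imagine something like the toy sketch below
(plain userspace C, all names hypothetical, nothing resembling actual
KVM code): each host page keeps a chain of back-pointers to the shadow
PTEs that map it, and eviction walks the chain and zaps the now-stale
GV -> HP entries.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define SPTE_PRESENT 0x1ULL

/* One reverse-mapping entry: which shadow PTE maps this host page. */
struct rmap_entry {
        uint64_t *sptep;                /* points into a shadow page table */
        struct rmap_entry *next;
};

/* Toy host page descriptor carrying its reverse-mapping chain. */
struct host_page {
        struct rmap_entry *rmap;
};

/* Record that *sptep now translates some GV directly to this page. */
static void rmap_add(struct host_page *hp, uint64_t *sptep)
{
        struct rmap_entry *e = malloc(sizeof(*e));

        e->sptep = sptep;
        e->next = hp->rmap;
        hp->rmap = e;
}

/* Host VM wants to evict the page: tear down every shadow translation
 * pointing at it.  On real hardware each zapped entry would also need
 * a (possibly remote) TLB flush on all vcpus. */
static void rmap_remove_all(struct host_page *hp)
{
        while (hp->rmap) {
                struct rmap_entry *e = hp->rmap;

                *e->sptep &= ~SPTE_PRESENT;     /* zap GV -> HP entry */
                hp->rmap = e->next;
                free(e);
                /* flush_remote_tlbs() would go here for real */
        }
}

int main(void)
{
        struct host_page hp = { .rmap = NULL };
        uint64_t spte1 = 0x1000 | SPTE_PRESENT;
        uint64_t spte2 = 0x1000 | SPTE_PRESENT;

        rmap_add(&hp, &spte1);
        rmap_add(&hp, &spte2);
        rmap_remove_all(&hp);
        printf("spte1=%#llx spte2=%#llx\n",
               (unsigned long long)spte1, (unsigned long long)spte2);
        return 0;
}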

If you locked pages that have active shadow mappings, you could then use a secondary mechanism to invalidate existing mappings when necessary.

You could even base this on a user-configurable heuristic (for instance, give this VM 1G of memory, with 512MB of dedicated memory).
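
To sketch how that budget might combine with the clock-style scan Avi
mentions (toy userspace C again, every name made up): pin pages as
shadow translations are created, and once the dedicated budget is
exceeded, a clock hand unpins cold translations so the host VM can
reclaim them.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define NPAGES          8
#define DEDICATED_PAGES 4       /* the "512MB dedicated" knob, in pages */

/* Toy descriptor for a host page backing a shadow translation. */
struct shadow_page {
        bool pinned;            /* held against host eviction */
        bool accessed;          /* guest touched this translation */
};

static struct shadow_page pages[NPAGES];
static size_t npinned;
static size_t clock_hand;

/* Clock scan: recently used translations get a second chance; the
 * first cold one found is unpinned so the host can reclaim its page. */
static void unpin_one(void)
{
        for (;;) {
                struct shadow_page *sp = &pages[clock_hand];

                clock_hand = (clock_hand + 1) % NPAGES;
                if (!sp->pinned)
                        continue;
                if (sp->accessed) {             /* second chance */
                        sp->accessed = false;
                        continue;
                }
                sp->pinned = false;     /* would drop refcount + zap spte */
                npinned--;
                return;
        }
}

/* Pin a page for a new shadow translation, staying under the budget. */
static void pin(struct shadow_page *sp)
{
        if (npinned >= DEDICATED_PAGES)
                unpin_one();
        sp->pinned = true;
        sp->accessed = true;
        npinned++;
}

int main(void)
{
        for (size_t i = 0; i < NPAGES; i++)
                pin(&pages[i]);         /* later pins evict cold ones */
        printf("pinned after faulting all pages: %zu\n", npinned);
        return 0;
}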

I seem to recall some discussion about having a memory pressure notification mechanism. If such a thing existed, it could be used to reduce the guest's actual memory footprint. I'm woefully ignorant, though, of any recent developments in this area...
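
Just to pin the idea down, a handler for such a notification might look
vaguely like this (the interface is entirely hypothetical): unpin cold
shadow-mapped pages first, and only then fall back to asking the guest
itself to give memory up.

#include <stdio.h>

static unsigned long pinned_pages = 1024;  /* hypothetical pin count */

/* Hypothetical hook: the host VM reports pressure and asks for pages
 * back.  Cheap response: unpin shadow translations (see above).  A
 * slower fallback could ask the guest to release memory outright. */
static unsigned long on_memory_pressure(unsigned long wanted)
{
        unsigned long freed = wanted < pinned_pages ? wanted : pinned_pages;

        pinned_pages -= freed;  /* would unpin + zap the matching sptes */
        return freed;
}

int main(void)
{
        printf("freed %lu pages\n", on_memory_pressure(128));
        return 0;
}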

Regards,

Anthony Liguori
