Re: [PATCH v3 14/21] KVM: Don't allocate dirty bitmap if dirty ring is enabled

From: Peter Xu
Date: Thu Jan 09 2020 - 11:41:14 EST


On Thu, Jan 09, 2020 at 09:57:22AM -0500, Peter Xu wrote:
> Because the kvm dirty ring and the kvm dirty log are used in an
> exclusive way, let's avoid creating the dirty_bitmap when the kvm
> dirty ring is enabled. Meanwhile, since the dirty_bitmap is now only
> conditionally created, we can no longer use it as a sign of "whether
> this memory slot has dirty tracking enabled". Convert such users to
> check the kvm memory slot flags instead.
>
> Note that a kvm memory slot can still end up with a dirty_bitmap
> allocated: _if_ memory slots are created before the dirty ring is
> enabled, and with the dirty tracking flag set at that point, they
> will still carry a dirty_bitmap. However, this should not hurt much
> (e.g., the bitmaps will always be freed if they are there), and real
> users normally won't trigger it, because the dirty tracking flag is
> in most cases only applied to kvm slots right before migration
> starts, which is far later than kvm initialization (VM start).
>
> Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>
> ---
> include/linux/kvm_host.h | 5 +++++
> virt/kvm/kvm_main.c | 5 +++--
> 2 files changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index c96161c6a0c9..ab2a169b1264 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -353,6 +353,11 @@ struct kvm_memory_slot {
>  	u8 as_id;
>  };
>  
> +static inline bool kvm_slot_dirty_track_enabled(struct kvm_memory_slot *slot)
> +{
> +	return slot->flags & KVM_MEM_LOG_DIRTY_PAGES;
> +}
> +
>  static inline unsigned long kvm_dirty_bitmap_bytes(struct kvm_memory_slot *memslot)
>  {
>  	return ALIGN(memslot->npages, BITS_PER_LONG) / 8;
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index f0f766183cb2..46da3169944f 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -1120,7 +1120,8 @@ int __kvm_set_memory_region(struct kvm *kvm,
>  	}
>  
>  	/* Allocate page dirty bitmap if needed */
> -	if ((new.flags & KVM_MEM_LOG_DIRTY_PAGES) && !new.dirty_bitmap) {
> +	if ((new.flags & KVM_MEM_LOG_DIRTY_PAGES) && !new.dirty_bitmap &&
> +	    !kvm->dirty_ring_size) {
>  		if (kvm_create_dirty_bitmap(&new) < 0)
>  			goto out_free;
>  	}
> @@ -2309,7 +2310,7 @@ static void mark_page_dirty_in_slot(struct kvm *kvm,
>  				    struct kvm_memory_slot *memslot,
>  				    gfn_t gfn)
>  {
> -	if (memslot && memslot->dirty_bitmap) {
> +	if (memslot && kvm_slot_dirty_track_enabled(memslot)) {
>  		unsigned long rel_gfn = gfn - memslot->base_gfn;
>  		u32 slot = (memslot->as_id << 16) | memslot->id;
>  
> --
> 2.24.1
>
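
Just to spell out why the mark_page_dirty_in_slot() hunk above has to
test the memslot flag rather than the dirty_bitmap pointer once the
bitmap becomes conditional, here is a tiny userspace mock (illustrative
only, not kernel code; the struct and flag below only mimic the relevant
pieces):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define KVM_MEM_LOG_DIRTY_PAGES	(1UL << 0)	/* redefined here for the mock */

/* Mock of the two struct kvm_memory_slot fields we care about. */
struct mock_memslot {
	uint32_t flags;
	unsigned long *dirty_bitmap;	/* stays NULL when the dirty ring is used */
};

/* Same logic as the new kvm_slot_dirty_track_enabled() helper. */
static bool slot_dirty_track_enabled(struct mock_memslot *slot)
{
	return slot->flags & KVM_MEM_LOG_DIRTY_PAGES;
}

int main(void)
{
	/* Dirty ring case: tracking is enabled but no bitmap was allocated. */
	struct mock_memslot slot = {
		.flags = KVM_MEM_LOG_DIRTY_PAGES,
		.dirty_bitmap = NULL,
	};

	printf("bitmap check: %d\n", slot.dirty_bitmap != NULL);	/* old test */
	printf("flag check  : %d\n", slot_dirty_track_enabled(&slot));	/* new test */
	return 0;
}

With the dirty ring enabled the pointer test reads 0 while the flag test
reads 1, i.e. the pointer can no longer stand in for "dirty tracking
enabled".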

I think the change below should be squashed into this patch as well:

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 621b842a9b7b..0806bd12d8ee 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1308,7 +1308,7 @@ static inline bool memslot_valid_for_gpte(struct kvm_memory_slot *slot,
 {
 	if (!slot || slot->flags & KVM_MEMSLOT_INVALID)
 		return false;
-	if (no_dirty_log && slot->dirty_bitmap)
+	if (no_dirty_log && kvm_slot_dirty_track_enabled(slot))
 		return false;
 
 	return true;
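
To make the effect of this hunk concrete (again a userspace mock with
illustrative definitions, not kernel code): for a slot that is
dirty-tracked through the ring, i.e. KVM_MEM_LOG_DIRTY_PAGES set but
dirty_bitmap left NULL, the old and new no_dirty_log checks disagree:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define KVM_MEM_LOG_DIRTY_PAGES	(1UL << 0)	/* redefined here for the mock */

struct mock_memslot {
	uint32_t flags;
	unsigned long *dirty_bitmap;
};

int main(void)
{
	/* Dirty ring in use: the slot is dirty-tracked but carries no bitmap. */
	struct mock_memslot slot = {
		.flags = KVM_MEM_LOG_DIRTY_PAGES,
		.dirty_bitmap = NULL,
	};
	bool no_dirty_log = true;

	/* Old check: does not reject the slot, although dirty logging is on. */
	bool reject_old = no_dirty_log && slot.dirty_bitmap;
	/* New check: rejects it based on the memslot flag. */
	bool reject_new = no_dirty_log && (slot.flags & KVM_MEM_LOG_DIRTY_PAGES);

	printf("old check rejects slot: %d\n", reject_old);
	printf("new check rejects slot: %d\n", reject_new);
	return 0;
}

The old pointer-based check would keep treating such a slot as usable
for the no_dirty_log callers even though dirty tracking is enabled,
which is exactly what the flag-based check avoids.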

Thanks,

--
Peter Xu