Re: [PATCH 4/6] kvm: nVMX: support EPT accessed/dirty bits

From: Bandan Das
Date: Wed Apr 12 2017 - 19:03:05 EST


Paolo Bonzini <pbonzini@xxxxxxxxxx> writes:

> ----- Original Message -----
>> From: "Bandan Das" <bsd@xxxxxxxxxx>
>> To: "Paolo Bonzini" <pbonzini@xxxxxxxxxx>
>> Cc: linux-kernel@xxxxxxxxxxxxxxx, kvm@xxxxxxxxxxxxxxx, david@xxxxxxxxxx
>> Sent: Wednesday, April 12, 2017 7:35:16 AM
>> Subject: Re: [PATCH 4/6] kvm: nVMX: support EPT accessed/dirty bits
>>
>> Paolo Bonzini <pbonzini@xxxxxxxxxx> writes:
>> ...
>> > accessed_dirty = have_ad ? PT_GUEST_ACCESSED_MASK : 0;
>> > +
>> > + /*
>> > + * FIXME: on Intel processors, loads of the PDPTE registers for PAE paging
>> > + * by the MOV to CR instruction are treated as reads and do not cause the
>> > + * processor to set the dirty flag in tany EPT paging-structure entry.
>> > + */
>>
>> Minor typo: "in any EPT paging-structure entry".
>>
>> > + nested_access = (have_ad ? PFERR_WRITE_MASK : 0) | PFERR_USER_MASK;
>> > +
>> > pt_access = pte_access = ACC_ALL;
>> > ++walker->level;
>> >
>> > @@ -338,7 +337,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
>> > walker->pte_gpa[walker->level - 1] = pte_gpa;
>> >
>> > real_gfn = mmu->translate_gpa(vcpu, gfn_to_gpa(table_gfn),
>> > - PFERR_USER_MASK|PFERR_WRITE_MASK,
>> > + nested_access,
>> > &walker->fault);
>>
>> I can't quite understand the significance of this change (or, for that
>> matter, what was there before this change).
>>
>> mmu->translate_gpa() just returns gfn_to_gpa(table_gfn), right ?
>
> For EPT it is, you're right it's fishy. The "nested_access" should be
> computed in translate_nested_gpa, which is where kvm->arch.nested_mmu
> (non-EPT) requests to access kvm->arch.mmu (EPT).

Thanks for the clarification. Is that the case when L1 runs L2 without
EPT? I can't figure out when translate_nested_gpa() actually gets
called. FNAME(walk_addr_nested) calls walk_addr_generic() with
&vcpu->arch.nested_mmu, and init_kvm_nested_mmu() points gva_to_gpa()
at the appropriate "_nested" functions, but those gva_to_gpa() pointers
don't seem to get invoked at all in the nested case.

BTW, I just noticed that setting PFERR_USER_MASK in nested_access is
redundant, since translate_nested_gpa() ORs it in as well.
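
At least that is how I read translate_nested_gpa() in
arch/x86/kvm/x86.c (quoting from memory, so the exact body may be
slightly off):

        static gpa_t translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa,
                                          u32 access,
                                          struct x86_exception *exception)
        {
                gpa_t t_gpa;

                BUG_ON(!mmu_is_nested(vcpu));

                /* NPT walks are always user-walks */
                access |= PFERR_USER_MASK;
                t_gpa = vcpu->arch.mmu.gva_to_gpa(vcpu, gpa, access,
                                                  exception);

                return t_gpa;
        }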

Bandan

> In practice we need to define a new function
> vcpu->arch.mmu.gva_to_gpa_nested that computes the nested_access
> and calls vcpu->arch.mmu.gva_to_gpa.
>
> Thanks,
>
> Paolo