Re: [PATCH 1/5] mmu: mark spte present if the x bit is set

From: Bandan Das
Date: Tue Jun 28 2016 - 13:34:07 EST


Paolo Bonzini <pbonzini@xxxxxxxxxx> writes:

> On 28/06/2016 06:32, Bandan Das wrote:
>> This is safe because is_shadow_present_pte() is called
>> on host-controlled page tables and we know the spte is
>> valid
>>
>> Signed-off-by: Bandan Das <bsd@xxxxxxxxxx>
>> ---
>> arch/x86/kvm/mmu.c | 3 ++-
>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index def97b3..a50af79 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -304,7 +304,8 @@ static int is_nx(struct kvm_vcpu *vcpu)
>>
>> static int is_shadow_present_pte(u64 pte)
>> {
>> - return pte & PT_PRESENT_MASK && !is_mmio_spte(pte);
>> + return pte & (PT_PRESENT_MASK | shadow_x_mask) &&
>> + !is_mmio_spte(pte);
>
> This should really be pte & 7 when using EPT. But this is okay as an
> alternative to a new shadow_present_mask.

I could probably revive shadow_xonly_valid... Anyway, for now I'll
add a TODO comment here.

> Paolo
>
>> }
>>
>> static int is_large_pte(u64 pte)
>>