Re: [PATCH v1 00/15] Add support for Nitro Enclaves

From: Paraschiv, Andra-Irina
Date: Mon Apr 27 2020 - 14:40:01 EST




On 26/04/2020 04:55, Longpeng (Mike, Cloud Infrastructure Service Product Dept.) wrote:

On 2020/4/24 17:54, Paraschiv, Andra-Irina wrote:

On 24/04/2020 11:19, Paraschiv, Andra-Irina wrote:

On 24/04/2020 06:04, Longpeng (Mike, Cloud Infrastructure Service Product Dept.) wrote:
On 2020/4/23 21:19, Paraschiv, Andra-Irina wrote:
On 22/04/2020 00:46, Paolo Bonzini wrote:
On 21/04/20 20:41, Andra Paraschiv wrote:
An enclave communicates with the primary VM via a local communication
channel, using virtio-vsock [2]. An enclave does not have a disk or a
network device attached.
Is it possible to have a sample of this in the samples/ directory?
I can add in v2 a sample file including the basic flow of how to use the ioctl
interface to create / terminate an enclave.

Then we can update / build on top of it based on the ongoing discussions on
the patch series and the received feedback.
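
In the meantime, here is a minimal sketch of the flow such a sample could
cover. Note that the device node name and the NE_ENCLAVE_START argument
layout below are placeholders / assumptions until the sample and the device
spec are out; only the reuse of the KVM ioctl numbers and the extra start
ioctl come from the series. Error handling is trimmed for brevity.

/*
 * Hypothetical sample flow, not the actual sample from the series.
 * /dev/nitro_enclaves and the NE_ENCLAVE_START argument are assumptions.
 */
#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/kvm.h>

int main(void)
{
	/* Assumed device node exposed by the Nitro Enclaves driver. */
	int dev_fd = open("/dev/nitro_enclaves", O_RDWR | O_CLOEXEC);

	/* Create the enclave VM, modeled on KVM_CREATE_VM. */
	int enclave_fd = ioctl(dev_fd, KVM_CREATE_VM, 0);

	/* Carve a 2 MiB memory region out of the primary VM. */
	size_t size = 2UL << 20;
	void *mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	struct kvm_userspace_memory_region region = {
		.slot = 0,
		.guest_phys_addr = 0,
		.memory_size = size,
		.userspace_addr = (uint64_t)mem,
	};
	ioctl(enclave_fd, KVM_SET_USER_MEMORY_REGION, &region);

	/*
	 * Start the enclave. NE_ENCLAVE_START is the new ioctl from the
	 * series; its argument (vsock addressing + enclave unique id) is
	 * not spelled out in this thread, hence left out here:
	 *
	 * ioctl(enclave_fd, NE_ENCLAVE_START, &start_args);
	 */

	/* Closing the fds terminates the enclave VM as well. */
	close(enclave_fd);
	close(dev_fd);
	return 0;
}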

I am interested especially in:

- the initial CPU state: CPL0 vs. CPL3, initial program counter, etc.

- the communication channel; does the enclave see the usual local APIC
and IOAPIC interfaces in order to get interrupts from virtio-vsock, and
where is the virtio-vsock device (virtio-mmio I suppose) placed in memory?

- what the enclave is allowed to do: can it change privilege levels,
what happens if the enclave performs an access to nonexistent memory, etc.

- whether there are special hypercall interfaces for the enclave

An enclave is a VM running on the same host as the primary VM that launched
it. They are siblings.

Here we need to think of two components:

1. An enclave abstraction process - a process running in the primary VM
guest that uses the provided ioctl interface of the Nitro Enclaves kernel
driver to spawn an enclave VM (that's 2 below).

How does this all get to an enclave VM running on the host?

There is a Nitro Enclaves emulated PCI device exposed to the primary VM. The
driver for this new PCI device is included in the current patch series.

Hi Paraschiv,

The new PCI device is emulated in QEMU? If so, is there any plan to send
the QEMU code?
Hi,

Nope, not that I know of so far.
And just to be a bit more clear, the reply above takes into consideration that
it's not emulated in QEMU.

Thanks.

Guys in this thread are much more interested in the design of the enclave
VM and the new device, but there's no document about this device yet, so I
think the emulation code would be a good alternative. However, Alex said
the device spec will be published later, so I'll wait for it.

True, that was mentioned wrt the device spec. The device interface could
also be updated based on the ongoing discussions on the patch series. Refs
to the device spec should be included e.g. in the .h file of the PCI
device, once it's available.

Thanks,
Andra


Thanks,
Andra

The ioctl logic is mapped to PCI device commands, e.g. the NE_ENCLAVE_START
ioctl maps to an enclave start PCI command and KVM_SET_USER_MEMORY_REGION
maps to an add memory PCI command. The PCI device commands are then
translated into actions taken on the hypervisor side; that's the Nitro
hypervisor running on the host where the primary VM is running.
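
For illustration, a rough driver-side sketch of that translation. The
NE_PCI_CMD_* values, the ne_submit_pci_cmd() helper and the
NE_ENCLAVE_START number are made up for this example; only the reuse of the
KVM ioctl numbers is from the series.

/* Rough sketch of the ioctl-to-PCI-command dispatch described above. */
#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/kvm.h>

#define NE_ENCLAVE_START _IO(0xAE, 0x20) /* placeholder number */

enum ne_pci_cmd {
	NE_PCI_CMD_ADD_MEMORY    = 1, /* hypothetical */
	NE_PCI_CMD_ENCLAVE_START = 2, /* hypothetical */
};

/* Hypothetical helper that submits a command to the PCI device. */
static long ne_submit_pci_cmd(enum ne_pci_cmd cmd, unsigned long arg);

static long ne_enclave_ioctl(struct file *file, unsigned int cmd,
			     unsigned long arg)
{
	switch (cmd) {
	case KVM_SET_USER_MEMORY_REGION:
		/* Translated into an add memory PCI command. */
		return ne_submit_pci_cmd(NE_PCI_CMD_ADD_MEMORY, arg);
	case NE_ENCLAVE_START:
		/* Translated into an enclave start PCI command. */
		return ne_submit_pci_cmd(NE_PCI_CMD_ENCLAVE_START, arg);
	default:
		return -ENOTTY;
	}
}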

2. The enclave itself - a VM running on the same host as the primary VM that
spawned it.

The enclave VM has no persistent storage or network interface attached; it
uses its own memory and CPUs plus its emulated virtio-vsock device for
communication with the primary VM.

The memory and CPUs are carved out of the primary VM; they are dedicated
to the enclave. The Nitro hypervisor running on the host ensures memory
and CPU isolation between the primary VM and the enclave VM.
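
Purely as a hypothetical continuation of the sketch earlier in this reply
(only the memory region ioctl is spelled out in this thread): if vCPU
creation is also modeled on the KVM ioctls, dedicating a CPU to the enclave
could look like this, with enclave_fd being the fd from the create step.

/* Hypothetical: vcpu id 0 is handed over to the enclave and is no
 * longer available to the primary VM; enclave_fd comes from the
 * create ioctl in the earlier sketch. */
int vcpu_fd = ioctl(enclave_fd, KVM_CREATE_VCPU, 0);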


These two components need to reflect the same state, e.g. when the enclave
abstraction process (1) is terminated, the enclave VM (2) is terminated as
well.

With regard to the communication channel, the primary VM has its own
emulated virtio-vsock PCI device. The enclave VM has its own emulated
virtio-vsock device as well. This channel is used, for example, to fetch
data into the enclave and then process it. An application that sets up the
vsock socket and connects or listens, depending on the use case, is then
developed to use this channel; this happens on both ends - primary VM and
enclave VM.
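
To make the channel concrete, here's a minimal AF_VSOCK client sketch of
the kind of application described above; the CID and port are placeholder
values agreed on by the applications on both ends, not something defined by
the driver.

/* Minimal AF_VSOCK client sketch with placeholder CID / port values. */
#include <sys/socket.h>
#include <unistd.h>
#include <linux/vm_sockets.h>

int main(void)
{
	struct sockaddr_vm addr = {
		.svm_family = AF_VSOCK,
		.svm_cid = 16,    /* example enclave CID */
		.svm_port = 9000, /* example port */
	};
	int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
	const char msg[] = "data for the enclave";

	if (fd < 0)
		return 1;
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
		(void)write(fd, msg, sizeof(msg));

	close(fd);
	return 0;
}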

Let me know if further clarifications are needed.

The proposed solution follows the KVM model and uses the KVM API to be able
to create and set resources for enclaves. An additional ioctl command,
besides the ones provided by KVM, is used to start an enclave and set up
the addressing for the communication channel and an enclave unique id.

Reusing some KVM ioctls is definitely a good idea, but I wouldn't really
say it's the KVM API since the VCPU file descriptor is basically
non-functional (without KVM_RUN and mmap it's not really the KVM API).
It uses part of the KVM API or a set of KVM ioctls to model the way a VM is
created / terminated. That's true, KVM_RUN and mmap-ing the vcpu fd are not
included.

Thanks for the feedback regarding the reuse of KVM ioctls.

Andra







---
Regards,
Longpeng(Mike)




Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005.