Re: [PATCH v1 00/15] Add support for Nitro Enclaves

From: Paraschiv, Andra-Irina
Date: Mon Apr 27 2020 - 15:05:52 EST




On 26/04/2020 11:16, Tian, Kevin wrote:
> From: Paraschiv, Andra-Irina <andraprs@xxxxxxxxxx>
> Sent: Friday, April 24, 2020 9:59 PM
>
>> On 24/04/2020 12:59, Tian, Kevin wrote:
>>> From: Paraschiv, Andra-Irina
>>> Sent: Thursday, April 23, 2020 9:20 PM
>>>
>>>> On 22/04/2020 00:46, Paolo Bonzini wrote:
>>>>> On 21/04/20 20:41, Andra Paraschiv wrote:
>>>>>> An enclave communicates with the primary VM via a local
>>>>>> communication channel, using virtio-vsock [2]. An enclave does not
>>>>>> have a disk or a network device attached.
>>>>> Is it possible to have a sample of this in the samples/ directory?
>>>> I can add in v2 a sample file including the basic flow of how to use
>>>> the ioctl interface to create / terminate an enclave.
>>>>
>>>> Then we can update / build on top of it based on the ongoing
>>>> discussions on the patch series and the received feedback.
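
To give a rough idea already, the basic flow in that sample could look
along the lines below. NE_ENCLAVE_START and KVM_SET_USER_MEMORY_REGION
are the commands mentioned in this series; the device path, the
NE_CREATE_VM ioctl name and the uapi header are placeholders that may
well change in v2:

#include <fcntl.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/kvm.h>
#include <linux/nitro_enclaves.h> /* placeholder uapi header name */

int main(void)
{
        struct kvm_userspace_memory_region mem_region = {};
        int dev_fd, enclave_fd;
        void *mem;

        /* Placeholder device node; the actual path may differ. */
        dev_fd = open("/dev/nitro_enclaves", O_RDWR);
        if (dev_fd < 0)
                return EXIT_FAILURE;

        /* Create the enclave VM; a new fd for the enclave is returned. */
        enclave_fd = ioctl(dev_fd, NE_CREATE_VM, 0); /* name assumed */
        if (enclave_fd < 0) {
                close(dev_fd);
                return EXIT_FAILURE;
        }

        /* Carve memory out of the primary VM for the enclave; this maps
         * to an add memory PCI device command.
         */
        mem = mmap(NULL, 2UL << 20, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        mem_region.userspace_addr = (unsigned long)mem;
        mem_region.memory_size = 2UL << 20;
        ioctl(enclave_fd, KVM_SET_USER_MEMORY_REGION, &mem_region);

        /* Boot the enclave; this maps to an enclave start PCI device
         * command.
         */
        ioctl(enclave_fd, NE_ENCLAVE_START, 0);

        /* ... exchange data over the vsock channel; closing the fd
         * terminates the enclave.
         */
        close(enclave_fd);
        close(dev_fd);

        return EXIT_SUCCESS;
}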

>>>>> I am interested especially in:
>>>>>
>>>>> - the initial CPU state: CPL0 vs. CPL3, initial program counter, etc.
>>>>>
>>>>> - the communication channel; does the enclave see the usual local APIC
>>>>> and IOAPIC interfaces in order to get interrupts from virtio-vsock, and
>>>>> where is the virtio-vsock device (virtio-mmio I suppose) placed in
>>>>> memory?
>>>>>
>>>>> - what the enclave is allowed to do: can it change privilege levels,
>>>>> what happens if the enclave performs an access to nonexistent memory,
>>>>> etc.
>>>>>
>>>>> - whether there are special hypercall interfaces for the enclave
>>>> An enclave is a VM running on the same host as the primary VM that
>>>> launched it. They are siblings.
>>>>
>>>> Here we need to think of two components:
>>>>
>>>> 1. An enclave abstraction process - a process running in the primary
>>>> VM guest that uses the provided ioctl interface of the Nitro Enclaves
>>>> kernel driver to spawn an enclave VM (that's 2 below).

>>> How does all this get to an enclave VM running on the host?

>> There is a Nitro Enclaves emulated PCI device exposed to the primary
>> VM. The driver for this new PCI device is included in the current
>> patch series.
>>
>> The ioctl logic is mapped to PCI device commands, e.g. the
>> NE_ENCLAVE_START ioctl maps to an enclave start PCI command and
>> KVM_SET_USER_MEMORY_REGION maps to an add memory PCI command. The PCI
>> device commands are then translated into actions taken on the
>> hypervisor side; that's the Nitro hypervisor running on the host where
>> the primary VM is running.
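
To make the mapping more concrete, conceptually the dispatch in the
driver looks something like the following heavily simplified sketch;
the PCI command IDs and the ne_pci_dev_cmd() helper are illustrative
names here, not the actual ones from the patches:

static long ne_enclave_ioctl(struct file *file, unsigned int cmd,
                             unsigned long arg)
{
        struct ne_enclave *ne_enclave = file->private_data;

        switch (cmd) {
        case KVM_SET_USER_MEMORY_REGION:
                /* Translated into an add memory PCI device command. */
                return ne_pci_dev_cmd(ne_enclave, NE_ADD_MEMORY, arg);
        case NE_ENCLAVE_START:
                /* Translated into an enclave start PCI device command. */
                return ne_pci_dev_cmd(ne_enclave, NE_START_ENCLAVE, arg);
        default:
                return -ENOTTY;
        }
}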

>>>> 2. The enclave itself - a VM running on the same host as the primary
>>>> VM that spawned it.
>>>>
>>>> The enclave VM has no persistent storage or network interface
>>>> attached; it uses its own memory and CPUs + its virtio-vsock emulated
>>>> device for communication with the primary VM.
>>> Sounds like a Firecracker VM?
>> It's a VM crafted for enclave needs.

>>>> The memory and CPUs are carved out of the primary VM; they are
>>>> dedicated for the enclave. The Nitro hypervisor running on the host
>>>> ensures memory and CPU isolation between the primary VM and the
>>>> enclave VM.
>>> In the last paragraph, you said that the enclave VM uses its own
>>> memory and CPUs. Then here, you said the memory/CPUs are carved out of
>>> the primary VM and dedicated to it. Can you elaborate on which one is
>>> accurate? Or is it a mixed model?
>> Memory and CPUs are carved out of the primary VM and are dedicated to
>> the enclave VM. I said "its own" above in the sense that the primary
>> VM doesn't use these carved-out resources while the enclave is
>> running, as they are dedicated to the enclave.
>>
>> Hope that it's clearer now.
> Yes, it's clearer.

Good, glad to hear that.


>>>> These two components need to reflect the same state, e.g. when the
>>>> enclave abstraction process (1) is terminated, the enclave VM (2) is
>>>> terminated as well.
>>>>
>>>> With regard to the communication channel, the primary VM has its own
>>>> emulated virtio-vsock PCI device. The enclave VM has its own emulated
>>>> virtio-vsock device as well. This channel is used, for example, to
>>>> fetch data in the enclave and then process it. An application that
>>>> sets up the vsock socket and connects or listens, depending on the
>>>> use case, is then developed to use this channel; this happens on both
>>>> ends - the primary VM and the enclave VM.
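
Since the channel is plain vsock, the application code on the primary
VM side uses the standard AF_VSOCK socket API; e.g. a connect along
these lines. The enclave CID and the port below are placeholder values,
they depend on the actual setup:

#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>
#include <linux/vm_sockets.h>

int main(void)
{
        struct sockaddr_vm addr = {
                .svm_family = AF_VSOCK,
                .svm_cid = 16,    /* placeholder enclave CID */
                .svm_port = 9000, /* placeholder port */
        };
        int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

        if (fd < 0)
                return EXIT_FAILURE;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                close(fd);
                return EXIT_FAILURE;
        }

        /* Send data in to be processed by the app in the enclave. */
        write(fd, "data", 4);

        close(fd);
        return EXIT_SUCCESS;
}
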
>>> How does the application in the primary VM assign tasks to be
>>> executed in the enclave VM? I didn't see such a command in this
>>> series, so I suppose it is also communicated through virtio-vsock?
>> The application that runs in the enclave needs to be packaged in an
>> enclave image together with the OS (e.g. kernel, ramdisk, init) that
>> will run in the enclave VM.
>>
>> Then the enclave image is loaded into memory. After booting is
>> finished, the application starts. Now, depending on the app
>> implementation and use case, one example can be that the app in the
>> enclave waits for data to be fetched in via the vsock channel.
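
The mirror image on the enclave side, for that example of an app
waiting for data, would be a vsock listener (again a sketch, with a
placeholder port):

#include <stdlib.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <linux/vm_sockets.h>

int main(void)
{
        struct sockaddr_vm addr = {
                .svm_family = AF_VSOCK,
                .svm_cid = VMADDR_CID_ANY, /* any local CID */
                .svm_port = 9000,          /* placeholder port */
        };
        char buf[4096];
        int fd, conn;
        ssize_t n;

        fd = socket(AF_VSOCK, SOCK_STREAM, 0);
        if (fd < 0)
                return EXIT_FAILURE;
        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            listen(fd, 1) < 0) {
                close(fd);
                return EXIT_FAILURE;
        }

        /* Wait for the primary VM to fetch data in, then process it. */
        conn = accept(fd, NULL, NULL);
        while ((n = read(conn, buf, sizeof(buf))) > 0)
                ; /* process the received chunk here */

        close(conn);
        close(fd);
        return EXIT_SUCCESS;
}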

> OK, I thought the code/data was dynamically injected from the primary
> VM and then run in the enclave. From your description it sounds like a
> servicing model where an auto-running application waits for and
> responds to service requests from the application in the primary VM.

That was an example with a possible use case; in that one example, data
can be dynamically injected, e.g. fetch into the enclave VM a bunch of
data, get back the results after processing, then fetch in another set
of data, and so on.

The architecture of the solution depends on how the tasks are split
between the primary VM and the enclave VM and on what is sent via the
vsock channel. The primary VM, the enclave VM and the communication
between them are part of the foundational technology we provide. What
runs inside each of them can vary based on the customer use case, with
applications updated to fit this infrastructure, so that some of the
tasks are split off and run in the enclave VM.

Thanks,
Andra


