Re: [PATCH v2 1/3] x86/tdx: Add TDX Guest event notify interrupt support

From: Sathyanarayanan Kuppuswamy
Date: Tue Apr 25 2023 - 19:47:35 EST


Hi Kai,

On 4/14/23 6:34 AM, Huang, Kai wrote:
> On Wed, 2023-04-12 at 20:41 -0700, Kuppuswamy Sathyanarayanan wrote:
>> Host-guest event notification via configured interrupt vector is useful
>> in cases where a guest makes an asynchronous request and needs a
>> callback from the host to indicate the completion or to let the host
>> notify the guest about events like device removal. One usage example is,
>> callback requirement of GetQuote asynchronous hypercall.
>>
>> In TDX guest, SetupEventNotifyInterrupt hypercall can be used by the
>> guest to specify which interrupt vector to use as an event-notify
>> vector from the VMM. Details about the SetupEventNotifyInterrupt
>> hypercall can be found in TDX Guest-Host Communication Interface
>> (GHCI) Specification, section "VP.VMCALL<SetupEventNotifyInterrupt>".
>>
>> As per design, VMM will post the event completion IRQ using the same
>> CPU on which SetupEventNotifyInterrupt hypercall request is received.
>> So allocate an IRQ vector from "x86_vector_domain", and set the CPU
>> affinity of the IRQ vector to the CPU on which
>> SetupEventNotifyInterrupt hypercall is made.
>>
>> Add tdx_register_event_irq_cb()/tdx_unregister_event_irq_cb()
>> interfaces to allow drivers register/unregister event noficiation
> ^
> to register/unregister
>> handlers.
>>
>>
>
> [...]
>

With the suggested changes, the final version looks like below.

+/**
+ * tdx_event_irq_init() - Register IRQ for event notification from the VMM to
+ * the TDX Guest.
+ *
+ * Use the SetupEventNotifyInterrupt TDVMCALL to register the event
+ * notification IRQ with the VMM. The VMM uses this IRQ to notify the TDX
+ * guest when needed, for instance, when it finishes a GetQuote request from
+ * the TDX guest. The VMM always notifies the TDX guest via the same CPU on
+ * which the SetupEventNotifyInterrupt TDVMCALL is called. For simplicity,
+ * just allocate an IRQ (and a vector) directly from x86_vector_domain for
+ * the notification and pin the IRQ to the same CPU on which the TDVMCALL is called.
+ *
+ * Since tdx_event_irq_init() is triggered via early_initcall(), it runs on
+ * the boot CPU before secondary CPUs are brought up, so no special logic is
+ * needed to ensure that the SetupEventNotifyInterrupt TDVMCALL and the IRQ
+ * allocation happen on the same CPU.
+ */
+static int __init tdx_event_irq_init(void)
+{
+	struct irq_affinity_desc desc = { };
+	struct irq_alloc_info info;
+	struct irq_cfg *cfg;
+	int irq;
+
+	if (!cpu_feature_enabled(X86_FEATURE_TDX_GUEST))
+		return 0;
+
+	init_irq_alloc_info(&info, NULL);
+
+	cpumask_set_cpu(smp_processor_id(), &desc.mask);
+
+	irq = __irq_domain_alloc_irqs(x86_vector_domain, -1, 1, cpu_to_node(0),
+				      &info, false, &desc);
+	if (irq <= 0) {
+		pr_err("Event notification IRQ allocation failed %d\n", irq);
+		return -EIO;
+	}
+
+	irq_set_handler(irq, handle_edge_irq);
+
+	/*
+	 * The IRQ cannot be migrated because the VMM always notifies the TDX
+	 * guest on the same CPU on which the SetupEventNotifyInterrupt
+	 * TDVMCALL is called. Set the IRQ with IRQF_NOBALANCING to prevent
+	 * its affinity from being changed.
+	 */
+	if (request_irq(irq, tdx_event_irq_handler, IRQF_NOBALANCING,
+			"tdx_event_irq", NULL)) {
+		pr_err("Event notification IRQ request failed\n");
+		goto err_free_domain_irqs;
+	}
+
+	cfg = irq_cfg(irq);
+
+	if (_tdx_hypercall(TDVMCALL_SETUP_NOTIFY_INTR, cfg->vector, 0, 0, 0)) {
+		pr_err("Event notification hypercall failed\n");
+		goto err_free_irqs;
+	}
+
+	tdx_event_irq = irq;
+
+	return 0;
+
+err_free_irqs:
+	free_irq(irq, NULL);
+err_free_domain_irqs:
+	irq_domain_free_irqs(irq, 1);
+
+	return -EIO;
+}
+early_initcall(tdx_event_irq_init);
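
For readers who only have this excerpt: tdx_event_irq_handler() and the
tdx_register_event_irq_cb()/tdx_unregister_event_irq_cb() interfaces mentioned
in the commit message are defined elsewhere in the patch and are not part of
the hunk above. A rough sketch of how such callback plumbing could look is
below; the typedef, struct, list, and lock names are illustrative assumptions
rather than the patch's actual code, and tdx_unregister_event_irq_cb() would
simply walk the list, unlink the matching entry, and free it.

/* Sketch only -- illustrative names, not the exact code from this patch. */
#include <linux/interrupt.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

typedef void (*tdx_event_irq_cb_t)(void *data);

struct event_irq_entry {
	tdx_event_irq_cb_t handler;	/* driver-supplied callback */
	void *data;			/* opaque cookie passed back to the handler */
	struct list_head head;
};

static DEFINE_SPINLOCK(event_irq_cb_lock);
static LIST_HEAD(event_irq_cb_list);

/* Hard IRQ context: invoked when the VMM injects the notification vector. */
static irqreturn_t tdx_event_irq_handler(int irq, void *dev_id)
{
	struct event_irq_entry *entry;

	spin_lock(&event_irq_cb_lock);
	list_for_each_entry(entry, &event_irq_cb_list, head)
		entry->handler(entry->data);
	spin_unlock(&event_irq_cb_lock);

	return IRQ_HANDLED;
}

int tdx_register_event_irq_cb(tdx_event_irq_cb_t handler, void *data)
{
	struct event_irq_entry *entry;
	unsigned long flags;

	/* Fail if the event IRQ was never set up (tdx_event_irq_init() failed). */
	if (!tdx_event_irq)
		return -EIO;

	entry = kzalloc(sizeof(*entry), GFP_KERNEL);
	if (!entry)
		return -ENOMEM;

	entry->handler = handler;
	entry->data = data;

	spin_lock_irqsave(&event_irq_cb_lock, flags);
	list_add_tail(&entry->head, &event_irq_cb_list);
	spin_unlock_irqrestore(&event_irq_cb_lock, flags);

	return 0;
}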

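And, purely as an illustration of how a driver (for example, the
GetQuote/attestation user mentioned in the commit message) might consume these
interfaces: the completion-based flow, the function names, and the assumed
tdx_unregister_event_irq_cb() signature below are not taken from this series,
and the register/unregister declarations would come from wherever the series
exports them.

/* Hypothetical driver-side usage sketch; not code from this patch series. */
#include <linux/completion.h>

static DECLARE_COMPLETION(req_compl);

/* Runs from the TDX event IRQ when the VMM signals completion. */
static void quote_cb_handler(void *data)
{
	complete((struct completion *)data);
}

static int example_wait_for_vmm_event(void)
{
	int ret;

	ret = tdx_register_event_irq_cb(quote_cb_handler, &req_compl);
	if (ret)
		return ret;

	/* ... issue the asynchronous request (e.g. GetQuote) to the VMM ... */

	/* Block until quote_cb_handler() is called from the event IRQ. */
	wait_for_completion(&req_compl);

	tdx_unregister_event_irq_cb(quote_cb_handler, &req_compl);

	return 0;
}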



--
Sathyanarayanan Kuppuswamy
Linux Kernel Developer