Re: [PATCH v3 00/10] x86/sev: KEXEC/KDUMP support for SEV-ES guests

From: Tao Liu
Date: Fri Jul 29 2022 - 06:28:32 EST


Hi Tom,

On Fri, Apr 29, 2022 at 08:08:28AM -0500, Tom Lendacky wrote:
> On 4/29/22 04:06, Tao Liu wrote:
> > On Thu, Jan 27, 2022 at 11:10:34AM +0100, Joerg Roedel wrote:
>
> >
> > Hi Joerg,
> >
> > I tried the patch set with 5.17.0-rc1 kernel, and I have a few questions:
> >
> > 1) Is this a bug, or does qemu-kvm 6.2.0 need a specific patch? I found that it
> > exits with 0 when I try to reboot the VM with sev-es enabled. However, with only
> > sev enabled, the VM can reboot with no problem:
>
> Qemu was specifically patched to exit on reboot with SEV-ES guests. Qemu
> performs a reboot by resetting the vCPU state, which can't be done with an
> SEV-ES guest because the vCPU state is encrypted.
>

Sorry for the late response, and thank you for the explanation!
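
For anyone else who hits this: the exit appears to come from Qemu's reset path,
which refuses to reset vCPUs whose state it cannot rewrite and turns the guest's
reboot request into a shutdown instead. A rough sketch of the logic, paraphrased
from Qemu 6.2's softmmu/runstate.c (simplified, not verbatim; exact code differs
between versions):

/*
 * Paraphrased sketch of Qemu's reset request handling, not verbatim
 * code.  cpus_are_resettable() is false for SEV-ES guests because
 * the encrypted vCPU state cannot be reset, so a guest reboot
 * request is downgraded to a shutdown and Qemu exits with status 0.
 */
void qemu_system_reset_request(ShutdownCause reason)
{
    if (!cpus_are_resettable()) {
        error_report("cpus are not resettable, terminating");
        shutdown_requested = reason;    /* reboot -> clean exit */
    } else {
        reset_requested = reason;       /* normal reset path */
    }
    cpu_stop_current();
    qemu_notify_event();
}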

> >
> > [root@dell-per7525-03 ~]# virsh start TW-SEV-ES --console
> > ....
> > Fedora Linux 35 (Server Edition)
> > Kernel 5.17.0-rc1 on an x86_64 (ttyS0)
> > ....
> > [root@fedora ~]# reboot
> > .....
> > [ 48.077682] reboot: Restarting system
> > [ 48.078109] reboot: machine restart
> > ^^^^^^^^^^^^^^^ guest vm reached restart
> > [root@dell-per7525-03 ~]# echo $?
> > 0
> > ^^^ qemu-kvm exit with 0, no reboot back to normal VM kernel
> > [root@dell-per7525-03 ~]#
> >
> > 2) With sev-es enabled and the two patch sets applied, A) [PATCH v3 00/10] x86/sev:
> > KEXEC/KDUMP support for SEV-ES guests and B) [PATCH v6 0/7] KVM: SVM: Add initial
> > GHCB protocol version 2 support, I can enable kdump and have a vmcore generated:
> >
> > [root@fedora ~]# dmesg|grep -i sev
> > [ 0.030600] SEV: Hypervisor GHCB protocol version support: min=1 max=2
> > [ 0.030602] SEV: Using GHCB protocol version 2
> > [ 0.296144] AMD Memory Encryption Features active: SEV SEV-ES
> > [ 0.450991] SEV: AP jump table Blob successfully set up
> > [root@fedora ~]# kdumpctl status
> > kdump: Kdump is operational
> >
> > However, even without the two patch sets, I can enable kdump and have a vmcore generated:
> >
> > [root@fedora ~]# dmesg|grep -i sev
> > [ 0.295754] AMD Memory Encryption Features active: SEV SEV-ES
> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ patch sets A & B
> > not applied, so only this line is present.
> > [root@fedora ~]# echo c > /proc/sysrq-trigger
> > ...
> > [ 2.759403] kdump[549]: saving vmcore-dmesg.txt to /sysroot/var/crash/127.0.0.1-2022-04-18-05:58:50/
> > [ 2.804355] kdump[555]: saving vmcore-dmesg.txt complete
> > [ 2.806915] kdump[557]: saving vmcore
> > ^^^^^^^^^^^^^ vmcore can still be generated
> > ...
> > [ 7.068981] reboot: Restarting system
> > [ 7.069340] reboot: machine restart
> >
> > [root@dell-per7525-03 ~]# echo $?
> > 0
> > ^^^ same exit issue as question 1.
> >
> > I don't have a complete technical understanding of the patch set, but isn't
> > this the issue the patch set is trying to solve? Or did I miss something?
>
> The main goal of this patch set is really to provide the ability to perform
> a kexec. I would expect kdump to work, since kdump shuts down all but the
> executing vCPU and performs its operations before "rebooting" (which will
> exit Qemu as I mentioned above). But kexec requires restarting the
> APs from within the guest after they have been stopped. That requires
> specific support and actions on the part of the guest kernel in how the APs
> are stopped and restarted.
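
For context, the "specific support" Tom mentions includes the GHCB AP jump table
protocol: the guest registers a real-mode trampoline address with the hypervisor,
and parked APs resume from that address when they are restarted (hence the
"AP jump table Blob successfully set up" line in the dmesg above). A simplified
sketch of the set side, modeled on the SVM_VMGEXIT_AP_JUMP_TABLE event from the
GHCB spec; helper names follow arch/x86/kernel/sev.c but are abbreviated here:

/*
 * Sketch: register the AP jump table address with the hypervisor so
 * that parked APs can be restarted at a known real-mode entry point.
 * Modeled on the GHCB spec's AP Jump Table VMGEXIT; GHCB locking and
 * error handling are omitted.
 */
static void sev_es_set_ap_jump_table(unsigned long jump_table_pa)
{
	struct ghcb *ghcb = sev_es_get_ghcb();	/* per-CPU GHCB page */

	vc_ghcb_invalidate(ghcb);
	ghcb_set_sw_exit_code(ghcb, SVM_VMGEXIT_AP_JUMP_TABLE);
	ghcb_set_sw_exit_info_1(ghcb, 0);	/* 0 == set the table */
	ghcb_set_sw_exit_info_2(ghcb, jump_table_pa);

	sev_es_wr_ghcb_msr(__pa(ghcb));
	VMGEXIT();				/* exit to the hypervisor */
}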

Recently I borrowed an SEV-ES capable machine and retested the patch set, and
kexec works fine with SEV-ES enabled. With the patch set applied to 5.17.0-rc1,
the kexec'ed kernel can bring up all APs with no problem.

However, I found one issue with kdump. Although the kdump kernel works fine on a
single CPU, multiple CPUs can be enabled by removing the "nr_cpus=1" kernel
parameter from the kdump sysconfig.

[ 0.000000] Command line: elfcorehdr=0x5b000000 BOOT_IMAGE=(hd0,gpt2)/vmlinuz-5.17.0-rc1+ ro resume=/dev/mapper/rhel-swap biosdevname=0 net.ifnames=0 console=ttyS0 irqpoll reset_devices cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 rootflags=nofail acpi_no_memhotplug transparent_hugepage=never nokaslr novmcoredd hest_disable disable_cpu_apicid=0 iTCO_wdt.pretimeout=0
...
[ 0.376663] smp: Bringing up secondary CPUs ...
[ 0.377599] x86: Booting SMP configuration:
[ 0.378342] .... node #0, CPUs: #1
[ 10.377698] smpboot: do_boot_cpu failed(-1) to wakeup CPU#1
[ 10.379882] #2
[ 20.379645] smpboot: do_boot_cpu failed(-1) to wakeup CPU#2
[ 20.380648] smp: Brought up 1 node, 1 CPU
[ 20.381600] smpboot: Max logical packages: 4
[ 20.382597] smpboot: Total of 1 processors activated (4192.00 BogoMIPS)

It turns out that for kdump the APs were not stopped properly, so I modified the
following code:

--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -26,6 +26,7 @@
#include <asm/cpu.h>
#include <asm/nmi.h>
#include <asm/smp.h>
+#include <asm/sev.h>

#include <linux/ctype.h>
#include <linux/mc146818rtc.h>
@@ -821,6 +822,7 @@ static int crash_nmi_callback(unsigned int val, struct pt_regs *regs)

atomic_dec(&waiting_for_crash_ipi);
/* Assume hlt works */
+ sev_es_stop_this_cpu();
halt();
for (;;)
cpu_relax();

[ 0.000000] Command line: elfcorehdr=0x5b000000 BOOT_IMAGE=(hd0,gpt2)/vmlinuz-5.17.0-rc1-hack+ ro resume=/dev/mapper/rhel-swap biosdevname=0 net.ifnames=0 console=ttyS0 irqpoll reset_devices cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 rootflags=nofail acpi_no_memhotplug transparent_hugepage=never nokaslr novmcoredd hest_disable disable_cpu_apicid=0 iTCO_wdt.pretimeout=0
...
[ 0.402618] smp: Bringing up secondary CPUs ...
[ 0.403308] x86: Booting SMP configuration:
[ 0.404171] .... node #0, CPUs: #1 #2 #3
[ 0.407362] smp: Brought up 1 node, 4 CPUs
[ 0.408907] smpboot: Max logical packages: 4
[ 0.409172] smpboot: Total of 4 processors activated (16768.01 BogoMIPS)

Now all APs come up and work in the kdump kernel.
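
For context, sev_es_stop_this_cpu() comes from patch set A. As I understand it,
instead of a plain hlt it parks the vCPU via the GHCB AP HLT Loop VMGEXIT, so
the hypervisor knows the AP is offline and can later restart it through the AP
jump table. A simplified sketch of that idea, modeled on the AP HLT Loop event
from the GHCB spec (GHCB locking and error handling omitted):

/*
 * Sketch: park this AP in the hypervisor so it can be restarted
 * later via the AP jump table.  Modeled on the GHCB spec's AP HLT
 * Loop VMGEXIT; simplified from the shape of the code patch set A
 * adds to arch/x86/kernel/sev.c.
 */
static void sev_es_ap_hlt_loop(void)
{
	struct ghcb *ghcb = sev_es_get_ghcb();	/* per-CPU GHCB page */

	while (true) {
		vc_ghcb_invalidate(ghcb);
		ghcb_set_sw_exit_code(ghcb, SVM_VMGEXIT_AP_HLT_LOOP);
		ghcb_set_sw_exit_info_1(ghcb, 0);
		ghcb_set_sw_exit_info_2(ghcb, 0);

		sev_es_wr_ghcb_msr(__pa(ghcb));
		VMGEXIT();			/* halt in the hypervisor */

		/* A non-zero sw_exit_info_2 is the wakeup signal. */
		if (ghcb_sw_exit_info_2_is_valid(ghcb) &&
		    ghcb->save.sw_exit_info_2)
			break;
	}
}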

Thanks,
Tao Liu

>
> Thanks,
> Tom
>
> >
> > Thanks,
> > Tao Liu
> > > _______________________________________________
> > > Virtualization mailing list
> > > Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
> > > https://lists.linuxfoundation.org/mailman/listinfo/virtualization
> >
>