[PATCH RFC 00/12] Core-sched v6+: kernel protection and hotplug fixes

From: Joel Fernandes (Google)
Date: Sat Aug 15 2020 - 17:57:39 EST


Hello!

This series is a continuation of the main core-sched v6 series [1] and adds support
for syscall and IRQ isolation from usermode processes and guests. It is key to
safely entering kernel mode on one HT while the other HT is in use by a user or
guest. The series also fixes CPU hotplug issues arising because the
cpu_smt_mask changes while the next task is being picked. These hotplug fixes
are also needed for kernel protection to work correctly.

The series is based on Thomas's x86/entry tree.

[1] https://lwn.net/Articles/824918/

Background:

Core-scheduling prevents hyperthreads in usermode from attacking each
other, but it does not do anything about one of the hyperthreads
entering the kernel for any reason. This leaves the door open for MDS
and L1TF attacks with concurrent execution sequences between
hyperthreads.

This series adds support for protecting all syscall and IRQ kernel-mode entries
by tracking when any sibling in a core enters the kernel and when all the
siblings have exited it. IPIs are sent to force siblings into the kernel.

As Thomas suggested, care is taken to avoid waiting in IRQ-disabled sections,
thus avoiding stop_machine deadlocks. Every attempt is made to avoid
unnecessary IPIs.
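
To make this more concrete, here is a rough sketch of the idea. This is NOT
the code from the patches; the names (struct core_protect, unsafe_nest,
core_unsafe_enter(), core_exit_to_user(), kick_siblings_into_kernel()) are
invented for illustration, and the real series handles races, nesting and
per-arch details that are elided here:

    #include <linux/spinlock.h>
    #include <linux/compiler.h>
    #include <asm/processor.h>

    /* Hypothetical per-core state shared by all SMT siblings of a core. */
    struct core_protect {
            raw_spinlock_t lock;         /* protects unsafe_nest */
            unsigned int   unsafe_nest;  /* number of siblings in the kernel */
    };

    /* Hypothetical helper: IPI the other siblings out of user/guest mode. */
    void kick_siblings_into_kernel(struct core_protect *core);

    /* Called on syscall/IRQ/guest-exit entry, before sensitive kernel work. */
    static void core_unsafe_enter(struct core_protect *core)
    {
            unsigned long flags;

            raw_spin_lock_irqsave(&core->lock, flags);
            if (core->unsafe_nest++ == 0) {
                    /* First sibling entering the kernel: pull the others in. */
                    kick_siblings_into_kernel(core);
            }
            raw_spin_unlock_irqrestore(&core->lock, flags);
    }

    /* Called on return to usermode/guest, with interrupts enabled. */
    static void core_exit_to_user(struct core_protect *core)
    {
            unsigned long flags;

            /* This CPU is about to leave the kernel: drop its count. */
            raw_spin_lock_irqsave(&core->lock, flags);
            core->unsafe_nest--;
            raw_spin_unlock_irqrestore(&core->lock, flags);

            /*
             * Wait with IRQs enabled (never inside an IRQ-disabled
             * section, so stop_machine cannot deadlock against us) until
             * every other sibling has also left the kernel, then return
             * to user/guest.
             */
            while (READ_ONCE(core->unsafe_nest))
                    cpu_relax();
    }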

Performance tests:
sysbench was used to test the performance of the patch series. An 8-CPU/4-core
VM was used and 2 sysbench tests were run in parallel. Each sysbench test runs 4 tasks:
sysbench --test=cpu --cpu-max-prime=100000 --num-threads=4 run

The performance results for the following combinations are compared below.
The metric is 'events per second':

1. Coresched disabled
sysbench-1/sysbench-2 => 175.7/175.6

2. Coresched enabled, both sysbench tagged
sysbench-1/sysbench-2 => 168.8/165.6

3. Coresched enabled, sysbench-1 tagged and sysbench-2 untagged
sysbench-1/sysbench-2 => 96.4/176.9

4. SMT off
sysbench-1/sysbench-2 => 97.9/98.8

When both sysbench instances are tagged, there is a perf drop of ~4%. In the
tagged/untagged case, the tagged one suffers because it always gets
stalled when the sibling enters the kernel. But this is no worse than SMT off.

A modified rcutorture was also used to heavily stress the kernel to make sure
there is no crash or instability.

Joel Fernandes (Google) (5):
irq_work: Add support to detect if work is pending
entry/idle: Add a common function for activities during idle entry/exit
arch/x86: Add a new TIF flag for untrusted tasks
kernel/entry: Add support for core-wide protection of kernel-mode
entry/idle: Enter and exit kernel protection during idle entry and
exit

Vineeth Pillai (7):
entry/kvm: Protect the kernel when entering from guest
bitops: Introduce find_next_or_bit
cpumask: Introduce a new iterator for_each_cpu_wrap_or
sched/coresched: Use for_each_cpu(_wrap)_or for pick_next_task
sched/coresched: Make core_pick_seq per run-queue
sched/coresched: Check for dynamic changes in smt_mask
sched/coresched: rq->core should be set only if not previously set

arch/x86/include/asm/thread_info.h |   2 +
arch/x86/kvm/x86.c                 |   3 +
include/asm-generic/bitops/find.h  |  16 ++
include/linux/cpumask.h            |  42 +++++
include/linux/entry-common.h       |  22 +++
include/linux/entry-kvm.h          |  12 ++
include/linux/irq_work.h           |   1 +
include/linux/sched.h              |  12 ++
kernel/entry/common.c              |  88 +++++----
kernel/entry/kvm.c                 |  12 ++
kernel/irq_work.c                  |  11 ++
kernel/sched/core.c                | 281 ++++++++++++++++++++++++++---
kernel/sched/idle.c                |  17 +-
kernel/sched/sched.h               |  11 +-
lib/cpumask.c                      |  53 ++++++
lib/find_bit.c                     |  56 ++++--
16 files changed, 564 insertions(+), 75 deletions(-)

--
2.28.0.220.ged08abb693-goog