Re: [PATCH v5 08/24] sched: Introduce per memory space current virtual cpu id

From: Mathieu Desnoyers
Date: Wed Nov 09 2022 - 10:09:35 EST


On 2022-11-09 04:42, Peter Zijlstra wrote:
> On Thu, Nov 03, 2022 at 04:03:43PM -0400, Mathieu Desnoyers wrote:

> > +void sched_vcpu_exit_signals(struct task_struct *t)
> > +{
> > +	struct mm_struct *mm = t->mm;
> > +	unsigned long flags;
> > +
> > +	if (!mm)
> > +		return;
> > +	local_irq_save(flags);
> > +	mm_vcpu_put(mm, t->mm_vcpu);
> > +	t->mm_vcpu = -1;
> > +	t->mm_vcpu_active = 0;
> > +	local_irq_restore(flags);
> > +}
> > +
> > +void sched_vcpu_before_execve(struct task_struct *t)
> > +{
> > +	struct mm_struct *mm = t->mm;
> > +	unsigned long flags;
> > +
> > +	if (!mm)
> > +		return;
> > +	local_irq_save(flags);
> > +	mm_vcpu_put(mm, t->mm_vcpu);
> > +	t->mm_vcpu = -1;
> > +	t->mm_vcpu_active = 0;
> > +	local_irq_restore(flags);
> > +}
> > +
> > +void sched_vcpu_after_execve(struct task_struct *t)
> > +{
> > +	struct mm_struct *mm = t->mm;
> > +	unsigned long flags;
> > +
> > +	WARN_ON_ONCE((t->flags & PF_KTHREAD) || !t->mm);
> > +
> > +	local_irq_save(flags);
> > +	t->mm_vcpu = mm_vcpu_get(mm);
> > +	t->mm_vcpu_active = 1;
> > +	local_irq_restore(flags);
> > +	rseq_set_notify_resume(t);
> > +}

> > +static inline void mm_vcpu_put(struct mm_struct *mm, int vcpu)
> > +{
> > +	lockdep_assert_irqs_disabled();
> > +	if (vcpu < 0)
> > +		return;
> > +	spin_lock(&mm->vcpu_lock);
> > +	__cpumask_clear_cpu(vcpu, mm_vcpumask(mm));
> > +	spin_unlock(&mm->vcpu_lock);
> > +}
> > +
> > +static inline int mm_vcpu_get(struct mm_struct *mm)
> > +{
> > +	int ret;
> > +
> > +	lockdep_assert_irqs_disabled();
> > +	spin_lock(&mm->vcpu_lock);
> > +	ret = __mm_vcpu_get(mm);
> > +	spin_unlock(&mm->vcpu_lock);
> > +	return ret;
> > +}


> This:
>
> 	local_irq_disable()
> 	spin_lock()
>
> thing is a PREEMPT_RT anti-pattern.
>
> At the very very least this should then be raw_spin_lock(), not in the
> least because you're calling this from under rq->lock, which itself is a
> raw_spin_lock_t.

Very good point, will fix using raw_spinlock_t.
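Concretely, I intend something along these lines (untested sketch; assumes
mm->vcpu_lock is changed from spinlock_t to raw_spinlock_t in mm_types.h,
with the matching raw_spin_lock_init() at mm init):

	/*
	 * On PREEMPT_RT a spinlock_t is a sleeping lock, so it must not
	 * be taken with interrupts disabled nor while holding a raw
	 * spinlock such as rq->lock. A raw_spinlock_t keeps a true
	 * spinning lock in all configurations.
	 */
	static inline void mm_vcpu_put(struct mm_struct *mm, int vcpu)
	{
		lockdep_assert_irqs_disabled();
		if (vcpu < 0)
			return;
		raw_spin_lock(&mm->vcpu_lock);
		__cpumask_clear_cpu(vcpu, mm_vcpumask(mm));
		raw_spin_unlock(&mm->vcpu_lock);
	}

	static inline int mm_vcpu_get(struct mm_struct *mm)
	{
		int ret;

		lockdep_assert_irqs_disabled();
		raw_spin_lock(&mm->vcpu_lock);
		ret = __mm_vcpu_get(mm);
		raw_spin_unlock(&mm->vcpu_lock);
		return ret;
	}

The critical sections under vcpu_lock are short and bounded (cpumask
bit operations), so a raw spinlock should be acceptable here.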

Thanks,

Mathieu



--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com