Re: [RFC PATCH] sched: Add missing memory barrier in switch_mm_cid

From: Mathieu Desnoyers
Date: Tue Mar 12 2024 - 14:07:21 EST


On 2024-03-11 04:45, Yeo Reum Yun wrote:
> Hi, Mathieu. Sorry for the late answer.

>> diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
>> index 35389b2af88e..0d5e54201eb2 100644
>> --- a/arch/x86/include/asm/barrier.h
>> +++ b/arch/x86/include/asm/barrier.h
>> @@ -79,6 +79,9 @@ do { \
>>  #define __smp_mb__before_atomic() do { } while (0)
>>  #define __smp_mb__after_atomic() do { } while (0)
>>
>> +/* Writing to CR3 provides a full memory barrier in switch_mm(). */
>> +#define smp_mb__after_switch_mm() do { } while (0)
>> +
>>  #include <asm-generic/barrier.h>
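
The x86 no-op above is expected to pair with a generic fallback, so that
architectures which do not guarantee a full barrier in switch_mm() still
emit one. A minimal sketch, assuming the fallback sits in
include/asm-generic/barrier.h next to the other smp_mb__*() fallbacks:

/*
 * Sketch only (not necessarily the exact hunk from the RFC):
 * architectures that do not provide smp_mb__after_switch_mm()
 * fall back to a full smp_mb().
 */
#ifndef smp_mb__after_switch_mm
#define smp_mb__after_switch_mm() smp_mb()
#endif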

> IIUC, ppc already does smp_mb() in switch_mm.
>
> Would it be better to add the same macro, which does nothing, to ppc?

Does it ?

Based on arch/powerpc/include/asm/membarrier.h, it appears that
powerpc does _not_ have a guaranteed barrier in switch_mm():

static inline void membarrier_arch_switch_mm(struct mm_struct *prev,
                                             struct mm_struct *next,
                                             struct task_struct *tsk)
{
        /*
         * Only need the full barrier when switching between processes.
         * Barrier when switching from kernel to userspace is not
         * required here, given that it is implied by mmdrop(). Barrier
         * when switching from userspace to kernel is not needed after
         * store to rq->curr.
         */
        if (IS_ENABLED(CONFIG_SMP) &&
            likely(!(atomic_read(&next->membarrier_state) &
                     (MEMBARRIER_STATE_PRIVATE_EXPEDITED |
                      MEMBARRIER_STATE_GLOBAL_EXPEDITED)) || !prev))
                return;

        /*
         * The membarrier system call requires a full memory barrier
         * after storing to rq->curr, before going back to user-space.
         */
        smp_mb();
}

AFAIU the barrier provided in powerpc switch_mm_irqs_off() is only in the
"new_on_cpu" case. Am I missing something ?

Thanks,

Mathieu



--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com