Re: [RFC PATCH] introduce sys_membarrier(): process-wide memory barrier (v5)

From: Mathieu Desnoyers
Date: Thu Jan 14 2010 - 14:34:09 EST


* Steven Rostedt (rostedt@xxxxxxxxxxx) wrote:
> On Thu, 2010-01-14 at 13:37 -0500, Mathieu Desnoyers wrote:
>
> > To make this painfully clear, I'll reorder the accesses to match that of
> > the CPU to memory:
> >
> > CPU 0 (membarrier)                 CPU 1 (another mm -> our mm)
> >                                    <user-space>
> >                                    <kernel-space>
> >                                    switch_mm()
> >                                      smp_mb()
> >                                      clear_mm_cpumask()
> >                                      set_mm_cpumask()
> >                                      smp_mb() (by load_cr3() on x86)
> >                                    switch_to()
> >                                      <buffered current = next>
> >                                    <switch back to user-space>
> >                                    urcu read lock()
> >                                    access critical section data (3)
> > memory access before membarrier
> > <call sys_membarrier()>
> > smp_mb()
> > mm_cpumask includes CPU 1
> > rcu_read_lock()
> > if (CPU 1 mm != our mm)
> >   skip CPU 1.
>
> I still don't see how the above conditional fails?

First, I just want to fix one detail I had wrong. It does not change the
end result, but it changes the order of the scenario:

A cpu "current" task struct is not the same thing as that same CPU
rq->curr. So we are talking about the rq->curr update here, not the cpu
"current" task (as I mistakenly assumed previously).

if (CPU 1 mm != our mm) translates into:

if (cpu_curr(1)->mm != current->mm)

where cpu_curr(cpu) is defined as:

#define cpu_rq(cpu) (&per_cpu(runqueues, (cpu)))
#define cpu_curr(cpu) (cpu_rq(cpu)->curr)

struct rq "curr" field is a struct task_struct *, updated by
schedule() before calling context_switch().
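
For illustration, the per-CPU check could be wired up roughly like this.
This is a minimal sketch only, not the actual v5 patch code:
membarrier_ipi() and membarrier_sketch() are made-up names for this
example, and cpu_curr() is private to the scheduler, used here purely
to mirror the scenario above:

static void membarrier_ipi(void *unused)
{
        smp_mb();       /* barrier executed on each targeted CPU */
}

static void membarrier_sketch(void)
{
        int cpu;

        smp_mb();          /* barrier before reading mm_cpumask/rq->curr */
        rcu_read_lock();   /* protects the cpu_curr(cpu) dereference */
        for_each_cpu(cpu, mm_cpumask(current->mm)) {
                if (cpu_curr(cpu)->mm != current->mm)
                        continue;       /* the "skip CPU 1" step above */
                smp_call_function_single(cpu, membarrier_ipi, NULL, 1);
        }
        rcu_read_unlock();
        smp_mb();          /* barrier after all IPIs have completed */
}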

So the requirement is that we need an smp_mb() both before and after
the rq->curr update in schedule(). The smp_mb() after the update is
ensured by context_switch() -> switch_mm() -> load_cr3(). However,
updating my scenario to reflect that we are really talking about the
rq->curr update here (which happens _before_ switch_mm(), not after
it), we can see that the problematic case occurs if there is no
smp_mb() before the rq->curr update:

It's a case where CPU 1 switches from our mm to another mm:

CPU 0 (membarrier)                 CPU 1 (our mm -> another mm)
<user-space>                       <user-space>
                                   <buffered access C.S. data>
                                   urcu read unlock()
                                   barrier()
                                   store local gp
                                   <kernel-space>
                                   rq->curr = next (1)
memory access before membarrier
<call sys_membarrier()>
smp_mb()
mm_cpumask includes CPU 1
rcu_read_lock()
if (cpu_curr(1)->mm != our mm)
  skip CPU 1 -> here, rq->curr new version is already visible
rcu_read_unlock()
smp_mb()
<return to user-space>
memory access after membarrier
-> this is where we allow freeing
   the old structure although the
   buffered access C.S. data is
   still in flight.
                                   User-space access C.S. data (2)
                                   (buffer flush)
                                   switch_mm()
                                     smp_mb()
                                     clear_mm_cpumask()
                                     set_mm_cpumask()
                                     smp_mb() (by load_cr3() on x86)
                                   switch_to()
                                     <buffered current = next>
                                   <switch back to user-space>
                                   current = next (1) (buffer flush)
                                   access critical section data (3)

As we can see, the reordering of (1) and (2) is problematic: it lets
the check skip over a CPU that still has global side-effects not yet
committed to memory.
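
To spell out the fix this implies, here is a rough placement sketch
(hand-waved pseudo-kernel code, not a real patch against schedule();
the surrounding scheduler code is elided):

        smp_mb();        /* proposed: commit prev's user-space accesses
                            before the rq->curr update becomes visible */
        rq->curr = next; /* (1): what cpu_curr(cpu) reads remotely */
        context_switch(rq, prev, next); /* switch_mm() -> load_cr3()
                                           already implies smp_mb()
                                           after (1) on x86 */

With the added barrier, any CPU that sys_membarrier() skips because its
rq->curr->mm no longer matches is guaranteed to have committed its
user-space accesses (2) to memory first.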

Hopefully this explanation helps?

Thanks,

Mathieu

>
> -- Steve
>
> > rcu_read_unlock()
> > smp_mb()
> > <return to user-space>
> > memory access after membarrier
> >                                    current = next (1) (buffer flush)
> >                                    read gp
> >                                    store local gp (2)
> >
> > This should make the problem a bit more evident. Access (3) is done
> > outside of the read-side C.S. as far as the userspace synchronize_rcu()
> > is concerned.
> >
> > Thanks,
> >
> > Mathieu
> >
> >
>
>

--
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68