Re: [patch 2/3] scheduler: add full memory barriers upon task switch

From: George Spelvin
Date: Tue Feb 02 2010 - 17:42:48 EST


> again:
>         tmp_mask = mm_cpumask(current->mm);
>         smp_mb();
>         rcu_read_lock(); /* ensures validity of cpu_curr(cpu) tasks */
>         for_each_cpu(cpu, tmp_mask) {
>                 spin_lock_irq(&cpu_rq(cpu)->lock);
>                 ret = current->mm == cpu_curr(cpu)->mm;
>                 spin_unlock_irq(&cpu_rq(cpu)->lock);
>                 if (ret)
>                         smp_call_function_single(cpu, membarrier_ipi, NULL, 1);
>         }
>         rcu_read_unlock();
>         smp_mb();
>         if (tmp_mask != mm_cpumask(current->mm)) {
>                 /* do check for signals here */
>                 goto again;
>         }

How about this harder-to-livelock version, which avoids re-sending an
IPI to every processor when the retry condition hits?

(It also caches current->mm in a local variable across the various
barriers; I doubt the compiler could infer on its own that it is
allowed to do that.)


cpumask_t unsent_mask;		/* CPUs we have not yet checked */
struct mm_struct const *current_mm = current->mm;

cpumask_setall(&unsent_mask);
cpumask_clear_cpu(smp_processor_id(), &unsent_mask);

for (;;) {
	cpumask_t const *tmp_mask = mm_cpumask(current_mm);
	int cpu = cpumask_next_and(-1, tmp_mask, &unsent_mask);

	if (cpu >= nr_cpu_ids)
		break;

	smp_mb();
	rcu_read_lock(); /* ensures validity of cpu_curr(cpu) tasks */
	do {
		struct mm_struct const *other_mm;

		spin_lock_irq(&cpu_rq(cpu)->lock);
		other_mm = cpu_curr(cpu)->mm;
		spin_unlock_irq(&cpu_rq(cpu)->lock);
		if (other_mm == current_mm)
			smp_call_function_single(cpu, membarrier_ipi, NULL, 1);
		/*
		 * Clear the bit whether or not we sent an IPI: a CPU
		 * that switches to this mm afterward gets its barrier
		 * from the task switch itself (the point of this
		 * patch), and leaving the bit set for a lazy-TLB CPU
		 * that never runs the mm would spin the outer loop
		 * forever.
		 */
		cpumask_clear_cpu(cpu, &unsent_mask);
		cpu = cpumask_next_and(cpu, tmp_mask, &unsent_mask);
	} while (cpu < nr_cpu_ids);
	rcu_read_unlock();
	smp_mb();
	/* Now check whether any more CPUs have joined the mm. */
}
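
(For reference, the membarrier_ipi handler used above only has to
execute the barrier on the remote CPU.  I'm assuming something like the
sketch below -- the name comes from the code quoted above; I haven't
checked it against the actual patch:)

	/* Runs on the remote CPU via smp_call_function_single(). */
	static void membarrier_ipi(void *unused)
	{
		smp_mb();	/* full barrier on the remote CPU */
	}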