Re: [this_cpu_xx V4 00/20] Introduce per cpu atomic operations and avoid per cpu address arithmetic

From: Christoph Lameter
Date: Fri Oct 02 2009 - 13:55:15 EST


On Fri, 2 Oct 2009, Ingo Molnar wrote:

> > Right. There will be a time period in which other arches will need to
> > add support for this_cpu_xx first.
>
> Size comparison should be only on architectures that support it (i.e.
> x86 right now). The generic fallbacks might be bloaty, no argument about
> that. ( => the more reason for any architecture to add optimizations for
> this_cpu_*() APIs. )

The fallbacks basically generate the same code (at least for the core
code) that was there before. For example:

Before:

#define SNMP_INC_STATS_USER(mib, field) \
	do { \
		per_cpu_ptr(mib[1], get_cpu())->mibs[field]++; \
		put_cpu(); \
	} while (0)

After:

#define SNMP_INC_STATS_USER(mib, field) \
	this_cpu_inc(mib[1]->mibs[field])



For the x86 case this means that we can use a simple atomic increment
with a segment prefix to do all the work.
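
Concretely, the increment compiles down to one instruction with the per
cpu segment as base (%gs on x86-64, %fs on 32-bit). A minimal sketch of
the idea, simplified from the size-dispatching macros in
arch/x86/include/asm/percpu.h (the sketch hardcodes %gs and the 4-byte
case; this_cpu_inc_4_sketch is an illustrative name, not the real macro):

/* One-instruction increment of a 4 byte per cpu variable; it cannot
 * be split by an interrupt or a preemption point on the local cpu. */
#define this_cpu_inc_4_sketch(pcp) \
	asm("incl %%gs:%0" : "+m" (pcp))

So SNMP_INC_STATS_USER() boils down to a single incl %gs:<offset>, with
no preempt_disable()/put_cpu() pair and no per cpu pointer table lookup.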


The fallback case for arches not providing per cpu atomics is:

preempt_disable();
*__this_cpu_ptr(&mib[1]->mibs[field]) += 1;
preempt_enable();
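
That pair comes from the generic fallback in include/linux/percpu.h,
which reads roughly like this (a sketch of the pattern; the real code
funnels this_cpu_inc() through a size-generic add helper):

#define _this_cpu_generic_to_op(pcp, val, op)			\
do {								\
	preempt_disable();					\
	*__this_cpu_ptr(&(pcp)) op val;				\
	preempt_enable();					\
} while (0)

#define this_cpu_inc(pcp)	_this_cpu_generic_to_op((pcp), 1, +=)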

If the arch can optimize __this_cpu_ptr (by providing __my_cpu_offset)
because it keeps the per cpu offset of the local cpu in some privileged
location, then this is still going to be a win since we avoid
smp_processor_id() entirely and we also avoid the array lookup.
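
sparc64 is an example: it dedicates a global register to the offset, so
__this_cpu_ptr() turns into a single addition (a sketch along the lines
of arch/sparc/include/asm/percpu_64.h):

/* The local cpu's per cpu offset lives permanently in %g5. */
register unsigned long __local_per_cpu_offset asm("g5");

#define __my_cpu_offset	__local_per_cpu_offset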

If the arch has no such mechanism then we fall back for this_cpu_ptr too:

#ifndef __my_cpu_offset
#define __my_cpu_offset per_cpu_offset(raw_smp_processor_id())
#endif
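
per_cpu_offset() itself is the per cpu offset table indexed by cpu
number (from include/asm-generic/percpu.h):

extern unsigned long __per_cpu_offset[NR_CPUS];

#define per_cpu_offset(x)	(__per_cpu_offset[x])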

And then the result in terms of overhead is the same as before the
this_cpu_xx patches, since get_cpu() does both a preempt_disable() and
an smp_processor_id() call.
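
Spelled out, the worst-case expansion of the SNMP example is roughly
(a sketch built from the generic helpers shown above):

preempt_disable();
*SHIFT_PERCPU_PTR(&(mib[1]->mibs[field]),
		per_cpu_offset(raw_smp_processor_id())) += 1;
preempt_enable();

which is the same preempt_disable() plus processor id lookup that the
old get_cpu()/put_cpu() based code performed.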


