Re: [PATCH] netfilter: use per-cpu recursive lock (v10)

From: Paul E. McKenney
Date: Mon Apr 20 2009 - 17:06:17 EST


On Mon, Apr 20, 2009 at 01:42:49PM -0700, Stephen Hemminger wrote:
> On Mon, 20 Apr 2009 20:25:14 +0200
> Eric Dumazet <dada1@xxxxxxxxxxxxx> wrote:
>
> > Stephen Hemminger a écrit :
> > > This version of x_tables (ip/ip6/arp) locking uses a per-cpu
> > > recursive lock that can be nested. It is similar to the existing
> > > kernel_lock, rwlock_t, and even the old 2.4 brlock.
> > >
> > > "Reader" is ip/arp/ip6 tables rule processing, which runs per-cpu.
> > > It needs to ensure that the rules are not changed while a packet
> > > is being processed.
> > >
> > > "Writer" is used in two cases. The first is replacing rules: all
> > > packets in flight have to be processed before the rules are swapped,
> > > then counters are read from the old (stale) info. The second is
> > > reading counters on the fly: all CPUs are blocked from further rule
> > > processing until the values are aggregated.
> > >
> > > The idea for this came from an earlier version done by Eric Dumazet.
> > > Locking is done per-cpu, the fast path locks on the current cpu
> > > and updates counters. This reduces the contention of a
> > > single reader lock (in 2.6.29) without the delay of synchronize_net()
> > > (in 2.6.30-rc2).
> > >
> > > The mutex that was added for 2.6.30 in xt_table is unnecessary since
> > > xt[af].mutex is already held.
> > >
> > > Signed-off-by: Stephen Hemminger <shemminger@xxxxxxxxxx>
> > >
> > > ---
> > > Changes from earlier patches.
> > > - function name changes
> > > - disable bottom half in info_rdlock
> >
> > OK, but we still have a problem on machines with >= 250 cpus,
> > because calling spin_lock() 250 times is going to overflow preempt_count,
> > since each spin_lock() increases preempt_count by one.
> >
> > PREEMPT_MASK: 0x000000ff
> >
> > add_preempt_count() should warn us about this overflow if CONFIG_DEBUG_PREEMPT is set
>
> Wouldn't a system with 256 or more CPUs be faster without preempt? If
> there are that many CPUs, it is faster to do the work on another CPU
> and avoid the overhead of a hotly updated preempt count.

The preempt count is maintained per-CPU, so it has low overhead. The
problem is that for CONFIG_PREEMPT builds, preempt disabling is built
into spin_lock().

Thanx, Paul
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/