Re: kernbench on 512p

From: Paul E. McKenney
Date: Fri Aug 20 2004 - 11:01:58 EST


On Thu, Aug 19, 2004 at 08:16:10PM -0400, Jesse Barnes wrote:
> On Thursday, August 19, 2004 7:38 pm, Paul E. McKenney wrote:
> > > 41.4% 58.6% 0% rcu_state
> > > 61.3% 38.7% 0% __rcu_process_callbacks+0x260
> > > 41.4% 58.6% 0% rcu_check_quiescent_state+0xf0
> > >
> > > So it looks like the dcache lock is the biggest problem on this system
> > > with this load. And although the rcu stuff has improved tremendously for
> > > this system, it's still highly contended.
> >
> > Was this run using all of Manfred's RCU patches? If not, it would be
> > interesting to see what you get with full RCU_HUGE patchset.
>
> So that stuff isn't in 2.6.8.1-mm1? What I tested was pretty close to a stock
> version of that tree. Where can I grab a version of that patchset that
> applies to a recent kernel?

If I am not too confused, you need patches #4 and #5 out of Manfred's
five-patch RCU_HUGE set. They are at:

http://marc.theaimsgroup.com/?l=linux-kernel&m=108546358123902&w=2
http://marc.theaimsgroup.com/?l=linux-kernel&m=108546384711797&w=2

A little work will be required to get them to apply. For example, in
the first patch, instead of:

spin_lock(&rcu_state.mutex);

one would say:

spin_lock(&rsp->lock);

due to the addition of the _bh() primitives. The direct references to
"rcu_ctrlblk." and "rcu_state." have been replaced by pointers in order
to handle multiple different types of grace periods:

o Traditional grace periods based on context switch, executing
in user mode, or running the idle loop.

o New-age low-latency grace periods based on running with
interrupts enabled.

Thanx, Paul