Re: [patch 1/2] rcu batch tuning

From: Paul E. McKenney
Date: Sun Jan 29 2006 - 22:29:10 EST


On Sat, Jan 28, 2006 at 09:07:13PM +0300, Oleg Nesterov wrote:
> "Paul E. McKenney" wrote:
> >
> > On Fri, Jan 27, 2006 at 10:57:59PM +0300, Oleg Nesterov wrote:
> > >
> > > When ->qlen exceeds qhimark for the first time, we send a reschedule IPI to
> > > other CPUs and force_quiescent_state() records ->last_rs_qlen = ->qlen.
> > > But we don't reset ->last_rs_qlen when ->qlen goes to 0, which means that
> > > next time we need ++rdp->qlen > qhimark + rsinterval to force other CPUs
> > > to pass a quiescent state, no?
> >
> > Good catch -- this could well explain Lee's continuing to hit
> > latency problems. Although this would not cause the first
> > latency event, only subsequent ones, it seems to me that ->last_rs_qlen
> > should be reset whenever ->blimit is reset.
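
To make that concrete, here is a rough sketch.  The function body and
the qlowmark-style low-water check below are assumptions about the
patch's shape rather than quotes from it; the point is just that
force_quiescent_state() re-sends IPIs only once ->qlen has grown
rsinterval beyond ->last_rs_qlen, so the place that restores ->blimit
after the queue drains would also clear ->last_rs_qlen:

	static void force_quiescent_state(struct rcu_data *rdp,
					  struct rcu_ctrlblk *rcp)
	{
		int cpu;
		cpumask_t cpumask;

		set_need_resched();
		if (unlikely(rdp->qlen - rdp->last_rs_qlen > rsinterval)) {
			rdp->last_rs_qlen = rdp->qlen;
			/* Don't IPI ourselves. */
			cpumask = rcp->cpumask;
			cpu_clear(rdp->cpu, cpumask);
			for_each_cpu_mask(cpu, cpumask)
				smp_send_reschedule(cpu);
		}
	}

	/* In rcu_do_batch(), where ->blimit goes back to normal once
	 * the queue has drained, also forget the qlen recorded above
	 * so that the next overflow sends IPIs at qhimark again rather
	 * than at qhimark + rsinterval: */
	if (rdp->blimit == INT_MAX && rdp->qlen <= qlowmark) {
		rdp->blimit = blimit;
		rdp->last_rs_qlen = 0;
	}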
>
> Maybe it's better to do it another way?
>
> struct rcu_ctrlblk {
> 	...
> 	int	signaled;
> 	...
> };
>
> void force_quiescent_state(rdp, rcp)
> {
> 	if (!rcp->signaled) {
> 		// racy, but tolerable
> 		rcp->signaled = 1;
>
> 		for_each_cpu_mask(cpu, cpumask)
> 			smp_send_reschedule(cpu);
> 	}
> }
>
> void rcu_start_batch(rcp, rdp)
> {
> 	if (rcp->next_pending && rcp->completed == rcp->cur) {
> 		...
> 		rcp->signaled = 0;
> 		...
> 	}
> }

Possibly... But the best thing would be for you and Dipankar to
get together to work out the best strategy for this.
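
Just so we are talking about the same thing, filling in the
declarations your sketch elides (these are my guesses, not taken from
your mail): ->signaled would sit in the global rcu_ctrlblk, so each
grace period gets at most one round of IPIs, re-armed only when
rcu_start_batch() actually starts the next batch, rather than one
round per rsinterval callbacks:

	struct rcu_ctrlblk {
		long		cur;		/* current batch number */
		long		completed;	/* last completed batch */
		int		next_pending;	/* is another batch queued? */
		int		signaled;	/* IPIs sent for this batch? */
		spinlock_t	lock;
		cpumask_t	cpumask;	/* cpus yet to pass a QS */
	};

	/* ... and in rcu_start_batch(): */
	if (rcp->next_pending && rcp->completed == rcp->cur) {
		rcp->next_pending = 0;
		rcp->cur++;
		rcp->signaled = 0;	/* re-arm force_quiescent_state() */
		...
	}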

Thanx, Paul

> Probably it also makes sense to tasklet_schedule(rcu_tasklet)
> in call_rcu() when ++rdp->qlen > qhimark; this way we can detect
> that we need to start the next batch earlier.
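
For reference, a sketch of where that call would go; the call_rcu()
body here is reconstructed and the per-cpu rcu_tasklet is assumed, so
treat everything except the tasklet_schedule() line as scenery:

	void call_rcu(struct rcu_head *head,
		      void (*func)(struct rcu_head *rcu))
	{
		unsigned long flags;
		struct rcu_data *rdp;

		head->func = func;
		head->next = NULL;
		local_irq_save(flags);
		rdp = &__get_cpu_var(rcu_data);
		*rdp->nxttail = head;
		rdp->nxttail = &head->next;

		if (unlikely(++rdp->qlen > qhimark)) {
			rdp->blimit = INT_MAX;
			force_quiescent_state(rdp, &rcu_ctrlblk);
			/* New: kick the per-cpu tasklet now instead of
			 * waiting for the next tick, so the next batch
			 * can be started earlier. */
			tasklet_schedule(&__get_cpu_var(rcu_tasklet));
		}
		local_irq_restore(flags);
	}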
>
> > > Also, it seems to me it's better to have 2 counters, one for length(->donelist)
> > > and another for length(->curlist + ->nxtlist). I think we don't need
> > > force_quiescent_state() when all rcu callbacks are placed in ->donelist;
> > > we only need to increase rdp->blimit in this case.
> >
> > True, currently the patch keeps the sum of the length of all three lists,
> > and takes both actions when the sum gets too large. But the only way
> > you would get unneeded IPIs would be if callback processing were
> > stalled while callback generation and grace-period processing were
> > still proceeding. Seems at first glance to be an unusual corner
> > case, with the only downside being some extra IPIs. Or am I missing
> > some aspect?
>
> Yes, it is probably not worth complicating the code.
>
> Oleg.
>