Re: [PATCH tip/core/rcu] classic RCU locking and memory-barrier cleanups

From: Manfred Spraul
Date: Wed Aug 06 2008 - 01:37:47 EST


Hi Paul,

Paul E. McKenney wrote:
> This patch is in preparation for moving to a hierarchical
> algorithm to allow the very large SMP machines -- requested by some
> people at OLS, and there seem to have been a few recent patches in the
> 4096-CPU direction as well.

I thought about hierarchical RCU, but I never found the time to implement it.
Do you have a concept in mind?

Right now, I'm trying to understand the current code first - and some of it doesn't make much sense to me.

There are three per-cpu lists:
->nxt
->cur
->done.
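
For reference, here is a simplified sketch of the per-cpu state, loosely modeled on the 2.6-era struct rcu_data; the name my_rcu_data and the exact field layout are illustrative, not the kernel's:

struct rcu_head;			/* as in <linux/rcupdate.h> */

struct my_rcu_data {
	long batch;			/* grace period ->cur is waiting on */
	struct rcu_head *nxtlist;	/* new callbacks, no grace period assigned */
	struct rcu_head **nxttail;	/* points at ->next of the last nxt entry */
	struct rcu_head *curlist;	/* waiting for the current grace period */
	struct rcu_head **curtail;
	struct rcu_head *donelist;	/* grace period over, ready to invoke */
	struct rcu_head **donetail;
};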

Obviously, there must be a quiescent state between ->cur and ->done.
But why does the code require a quiescent state between ->nxt and ->cur?
I think that's superfluous. The only thing that is required is that all cpus have moved their callbacks from ->nxt to ->cur. That doesn't need a quiescent state; this operation could be done from hard interrupt context as well (see the sketch below).
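
A minimal sketch of that operation, using the illustrative structure above (the helper name is made up); since it only touches this cpu's own lists, disabling interrupts - or running in hardirq context - is all the protection it needs:

/* Splice ->nxt onto the end of ->cur; caller runs with irqs off. */
static void my_rcu_move_nxt_to_cur(struct my_rcu_data *rdp)
{
	if (!rdp->nxtlist)
		return;
	*rdp->curtail = rdp->nxtlist;	/* append nxt to cur */
	rdp->curtail = rdp->nxttail;
	rdp->nxtlist = NULL;		/* nxt is now empty */
	rdp->nxttail = &rdp->nxtlist;
}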

Thus I think this should work:

1) A callback is inserted into ->nxt.
2) As soon as too many objects are sitting in the ->nxt lists, a new rcu cycle is started.
3) As soon as a cpu sees that a new rcu cycle has started, it moves its callbacks from ->nxt to ->cur. No checks for hardirq_count() & friends are necessary. In particular: the same rule applies for _bh and normal.
4) As soon as all cpus have moved their lists from ->nxt to ->cur, the real grace period is started.
5) As soon as all cpus have passed a quiescent state (i.e.: now with tests for hardirq_count(), different rules for _bh and normal), the list is moved from ->cur to ->done. Once in ->done, the objects can be destroyed by invoking the callbacks.

What do you think? Would that work? It doesn't make much sense that step 3) tests for a quiescent state.

Step 2) could depend on memory pressure.
Steps 3) and 4) could be accelerated by force_quiescent_state() if the memory pressure is too high.
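
To make the proposal concrete, here is a rough sketch of the cycle as a state machine. Everything below is hypothetical illustration, not kernel code: the state names, the per-cpu flags (->moved, ->quiesc_passed) and helpers like all_cpus_moved() are made up to show the control flow of steps 1) to 5); real code would also need locking around the global state and a reset of the per-cpu flags at each transition.

enum my_rcu_state {
	MY_RCU_IDLE,		/* no cycle in progress */
	MY_RCU_COLLECTING,	/* cpus move ->nxt to ->cur (steps 3+4) */
	MY_RCU_GP,		/* real grace period running (step 5) */
};
static enum my_rcu_state my_rcu_state = MY_RCU_IDLE;

/* Steps 1+2: queue a callback; start a cycle under pressure. */
static void my_call_rcu(struct rcu_head *head)
{
	my_rcu_enqueue_nxt(head);			/* step 1 */
	if (my_rcu_state == MY_RCU_IDLE && my_rcu_nxt_too_long())
		my_rcu_state = MY_RCU_COLLECTING;	/* step 2 */
}

/* Called periodically on each cpu, e.g. from the timer interrupt. */
static void my_rcu_check_cpu(struct my_rcu_data *rdp)
{
	switch (my_rcu_state) {
	case MY_RCU_COLLECTING:
		if (!rdp->moved) {
			my_rcu_move_nxt_to_cur(rdp);	/* step 3: no QS test */
			rdp->moved = true;
		}
		if (all_cpus_moved())			/* step 4 */
			my_rcu_state = MY_RCU_GP;
		break;
	case MY_RCU_GP:
		/* Step 5: the real quiescent-state tests, with
		 * hardirq_count() & friends, _bh vs. normal, etc. */
		if (cpu_is_quiescent(rdp))
			rdp->quiesc_passed = true;
		if (all_cpus_quiescent()) {
			my_rcu_move_cur_to_done_all();	/* then invoke */
			my_rcu_state = MY_RCU_IDLE;
		}
		break;
	case MY_RCU_IDLE:
		break;
	}
}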

--
Manfred
