Re: [PATCH] rcu: Reduce synchronize_rcu() delays when all wait heads are in use

From: Joel Fernandes
Date: Wed Mar 13 2024 - 12:13:52 EST

On 3/13/2024 12:04 PM, Neeraj Upadhyay wrote:
> Hi Joel,
>
> On 3/13/2024 8:10 PM, Joel Fernandes wrote:
>> Hi Neeraj,
>>
>> On 3/13/2024 4:32 AM, Neeraj Upadhyay wrote:
>>> When all wait heads are in use, which can happen when
>>> rcu_sr_normal_gp_cleanup_work()'s callback processing
>>> is slow, any new synchronize_rcu() user's rcu_synchronize
>>> node's processing is deferred to future grace periods. This
>>> can result in a long list of synchronize_rcu() invocations
>>> waiting for full grace period processing, which can delay
>>> freeing of memory. Mitigate this problem by using the first
>>> node in the list as the wait tail when all wait heads are in use.
>>> While methods to speed up callback processing would be needed
>>> to recover from this situation, allowing new nodes to complete
>>> their grace period can help prevent delays due to a fixed
>>> number of wait head nodes.
>>>
>>> Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@xxxxxxx>
>>> ---
>>> kernel/rcu/tree.c | 27 +++++++++++++--------------
>>> 1 file changed, 13 insertions(+), 14 deletions(-)
>>>
>>> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
>>> index 9fbb5ab57c84..bdccce1ed62f 100644
>>> --- a/kernel/rcu/tree.c
>>> +++ b/kernel/rcu/tree.c
>>> @@ -1470,14 +1470,11 @@ static void rcu_poll_gp_seq_end_unlocked(unsigned long *snap)
>>> * for this new grace period. Given that there are a fixed
>>> * number of wait nodes, if all wait nodes are in use
>>> * (which can happen when kworker callback processing
>>> - * is delayed) and additional grace period is requested.
>>> - * This means, a system is slow in processing callbacks.
>>> - *
>>> - * TODO: If a slow processing is detected, a first node
>>> - * in the llist should be used as a wait-tail for this
>>> - * grace period, therefore users which should wait due
>>> - * to a slow process are handled by _this_ grace period
>>> - * and not next.
>>> + * is delayed), first node in the llist is used as wait
>>> + * tail for this grace period. This means, the first node
>>> + * has to go through additional grace periods before it is
>>> + * part of the wait callbacks. This should be ok, as
>>> + * the system is slow in processing callbacks anyway.
>>> *
>>> * Below is an illustration of how the done and wait
>>> * tail pointers move from one set of rcu_synchronize nodes
>>> @@ -1725,15 +1722,17 @@ static bool rcu_sr_normal_gp_init(void)
>>> return start_new_poll;
>>>
>>> wait_head = rcu_sr_get_wait_head();
>>> - if (!wait_head) {
>>> - // Kick another GP to retry.
>>> + if (wait_head) {
>>> + /* Inject a wait-dummy-node. */
>>> + llist_add(wait_head, &rcu_state.srs_next);
>>> + } else {
>>> + // Kick another GP for first node.
>>> start_new_poll = true;
>>> - return start_new_poll;
>>> + if (first == rcu_state.srs_done_tail)
>>
>> small nit:
>> Does done_tail access here need smp_load_acquire() or READ_ONCE() to match the
>> other users?
>>
>
> As srs_done_tail is only updated in RCU GP thread context, I think it is not required.
> Please correct me if I am wrong here.

But will KCSAN not scream that it's a data race?
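
To make the concern concrete, here is a rough userspace sketch of the access
pattern (KCSAN itself only runs in the kernel, and the WRITE_ONCE()/READ_ONCE()
below are simplified stand-ins for the kernel's macros, so treat this purely as
an illustration, not something from the patch): a single writer updates a shared
tail pointer while another context reads it, which is the kind of access KCSAN
can flag as a data race when the read is left plain (unless, as you say, the
read and the write really are confined to the same thread).

#include <pthread.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel's marked-access macros. */
#define WRITE_ONCE(x, val) (*(volatile __typeof__(x) *)&(x) = (val))
#define READ_ONCE(x)       (*(volatile __typeof__(x) *)&(x))

static int node_a, node_b;
static int *done_tail = &node_a;	/* analogue of rcu_state.srs_done_tail */

/* Single writer, like the RCU GP kthread updating srs_done_tail. */
static void *gp_thread(void *arg)
{
	(void)arg;
	WRITE_ONCE(done_tail, &node_b);
	return NULL;
}

int main(void)
{
	pthread_t t;
	int *first;

	pthread_create(&t, NULL, gp_thread, NULL);

	/* Concurrent reader: a marked read documents the intentional race. */
	first = READ_ONCE(done_tail);
	printf("done_tail -> %s\n", first == &node_a ? "node_a" : "node_b");

	pthread_join(&t, NULL);
	return 0;
}

If the race is considered benign, data_race() would also keep KCSAN quiet, but
a marked read here would document the intent and match how srs_done_tail is
accessed by its other users.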

thanks,

- Joel