Re: [PATCH] rcu-tasks: Make rude RCU-Tasks work well with CPU hotplug

From: Neeraj Upadhyay
Date: Sat Nov 26 2022 - 00:32:14 EST


Hi,


On 11/26/2022 10:04 AM, Joel Fernandes wrote:
On Sat, Nov 26, 2022 at 02:43:59AM +0000, Zhang, Qiang1 wrote:
On Fri, Nov 25, 2022 at 11:54:27PM +0800, Zqiang wrote:
Currently, when num_online_cpus() <= 1, rcu_tasks_rude_wait_gp() returns
directly, indicating the end of the current grace period, after which the
old data is released. That is not accurate: on an SMP system, when
num_online_cpus() is equal to one, another CPU that is in the offline
process (after invoking __cpu_disable()) may still be inside a rude
RCU-Tasks read-side critical section holding the old data, which leads
to memory corruption.

Therefore, this commit adds cpus_read_lock()/cpus_read_unlock() around
the num_online_cpus() check.
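
In short, the change guards the check as follows (condensed from the
full patch at the end of this mail):

	cpus_read_lock();
	if (num_online_cpus() <= 1)
		goto end; // Fastpath, now stable against a racing offline.
	/* ... schedule and flush rcu_tasks_be_rude on each online CPU ... */
end:
	cpus_read_unlock();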


I am not sure if this is needed. The only way what you suggest can happen
is if the tasks-RCU-protected data is accessed after the num_online_cpus()
value is decremented on the CPU going offline.

However, the online-CPU count is changed on a CPU other than the one
going offline.

So there's no way the CPU going offline can run any code (it is already
dead, courtesy of CPUHP_AP_IDLE_DEAD), and corruption is therefore
impossible.

Or, did I miss something?

Hi Joel,

Suppose the system has two CPUs:

CPU0                            CPU1
                                cpu_stopper_thread
                                  take_cpu_down
                                    __cpu_disable
                                      dec __num_online_cpus
rcu_tasks_rude_wait_gp            cpuhp_invoke_callback

Thanks for clarifying!

You are right, this can be a problem for anything running in stop machine
on the CPU going offline between CPUHP_AP_ONLINE and CPUHP_AP_IDLE_DEAD,
during which the code executing on that CPU is not accounted for in
num_online_cpus().

Actually, Neeraj found a similar issue two years ago, and instead of
taking the hotplug lock, he added a new field to rcu_state to track the
number of CPUs.

See:
https://lore.kernel.org/r/20200923210313.GS29330@paulmck-ThinkPad-P72
https://www.mail-archive.com/linux-kernel@xxxxxxxxxxxxxxx/msg2317853.html

Could we do something similar?
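
For illustration, such a counter might look roughly like this -- a
sketch only, with the variable name invented here and the hooks assumed
to be the usual RCU hotplug callbacks, not code taken from the linked
patches:

	// Maintained from the control CPU only: incremented before the
	// incoming CPU runs any kernel code, decremented only after the
	// outgoing CPU is fully dead, so updates cannot race with code
	// still executing on a dying CPU.
	static int rcu_n_runnable_cpus = 1;

	int rcutree_prepare_cpu(unsigned int cpu)  // control CPU, pre-bringup
	{
		WRITE_ONCE(rcu_n_runnable_cpus, rcu_n_runnable_cpus + 1);
		return 0;
	}

	int rcutree_dead_cpu(unsigned int cpu)     // control CPU, CPU is dead
	{
		WRITE_ONCE(rcu_n_runnable_cpus, rcu_n_runnable_cpus - 1);
		return 0;
	}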

Of note is this comment from that thread:

  Actually blocking CPU hotplug would not only result in excessive
  overhead, but would also unnecessarily impede CPU-hotplug operations.

Neeraj is also on the thread and could chime in.


I agree that using a counter that is updated on the control CPU - after
the CPU is dead (for the offline case) and before the CPU starts
executing kernel code (for the online case) - optimizes the fast path.
However, given that in the common case (num_online_cpus() > 1) we still
need to acquire cpus_read_lock(), I am not sure how much actual impact
that optimization will have.
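
To make the trade-off concrete (a sketch, reusing the invented counter
from above; note that schedule_on_each_cpu() takes cpus_read_lock()
internally):

	static void rcu_tasks_rude_wait_gp(struct rcu_tasks *rtp)
	{
		if (READ_ONCE(rcu_n_runnable_cpus) <= 1)
			return; // Lock-free fastpath.
		rtp->n_ipis += cpumask_weight(cpu_online_mask);
		// The common case still ends up acquiring the hotplug
		// lock inside schedule_on_each_cpu(), so only the
		// single-CPU fastpath avoids it.
		schedule_on_each_cpu(rcu_tasks_be_rude);
	}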


Thanks
Neeraj

Thanks,

- Joel


(continuing on CPU0 from the sequence above:)
  num_online_cpus() == 1
    return;

When __num_online_cpus == 1, CPU1 is not yet completely offline.
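
For reference, the decrement really does happen on the dying CPU itself;
roughly (paraphrasing kernel/cpu.c and the arch code, not an exact
quote):

	// Runs on the outgoing CPU, in the stopper thread:
	static int take_cpu_down(void *_param)
	{
		...
		err = __cpu_disable(); // Arch code ends up calling
				       // set_cpu_online(cpu, false), which
				       // decrements __num_online_cpus.
		...
		// The dying CPU continues running hotplug teardown
		// callbacks here, invisible to num_online_cpus().
	}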

Thanks
Zqiang


thanks,

- Joel




Signed-off-by: Zqiang <qiang1.zhang@xxxxxxxxx>
---
kernel/rcu/tasks.h | 20 ++++++++++++++++++--
1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index 4a991311be9b..08e72c6462d8 100644
--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -1033,14 +1033,30 @@ static void rcu_tasks_be_rude(struct work_struct *work)
 {
 }
 
+static DEFINE_PER_CPU(struct work_struct, rude_work);
+
 // Wait for one rude RCU-tasks grace period.
 static void rcu_tasks_rude_wait_gp(struct rcu_tasks *rtp)
 {
+	int cpu;
+	struct work_struct *work;
+
+	cpus_read_lock();
 	if (num_online_cpus() <= 1)
-		return; // Fastpath for only one CPU.
+		goto end; // Fastpath for only one CPU.
 	rtp->n_ipis += cpumask_weight(cpu_online_mask);
-	schedule_on_each_cpu(rcu_tasks_be_rude);
+	for_each_online_cpu(cpu) {
+		work = per_cpu_ptr(&rude_work, cpu);
+		INIT_WORK(work, rcu_tasks_be_rude);
+		schedule_work_on(cpu, work);
+	}
+
+	for_each_online_cpu(cpu)
+		flush_work(per_cpu_ptr(&rude_work, cpu));
+
+end:
+	cpus_read_unlock();
 }
 
 void call_rcu_tasks_rude(struct rcu_head *rhp, rcu_callback_t func);
--
2.25.1