Re: Commit 282d8998e997 (srcu: Prevent expedited GPs and blocking readers from consuming CPU) cause qemu boot slow

From: zhangfei.gao@xxxxxxxxxxx
Date: Wed Jun 15 2022 - 05:03:52 EST




On 2022/6/14 10:17 PM, Paul E. McKenney wrote:
On Tue, Jun 14, 2022 at 10:03:35PM +0800, zhangfei.gao@xxxxxxxxxxx wrote:

On 2022/6/14 8:19 PM, Neeraj Upadhyay wrote:
5.18-rc4 based               ~8sec
5.19-rc1                     ~2m43sec
5.19-rc1+fix1                ~19sec
5.19-rc1-fix2                ~19sec

If you try the below diff on top of either 5.19-rc1+fix1 or 5.19-rc1-fix2, does it show any difference in boot time?

--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -706,7 +706,7 @@ static void srcu_schedule_cbs_snp(struct srcu_struct *ssp, struct srcu_node *snp
  */
 static void srcu_gp_end(struct srcu_struct *ssp)
 {
-       unsigned long cbdelay;
+       unsigned long cbdelay = 1;
        bool cbs;
        bool last_lvl;
        int cpu;
@@ -726,7 +726,9 @@ static void srcu_gp_end(struct srcu_struct *ssp)
        spin_lock_irq_rcu_node(ssp);
        idx = rcu_seq_state(ssp->srcu_gp_seq);
        WARN_ON_ONCE(idx != SRCU_STATE_SCAN2);
-       cbdelay = !!srcu_get_delay(ssp);
+       if (ULONG_CMP_LT(READ_ONCE(ssp->srcu_gp_seq), READ_ONCE(ssp->srcu_gp_seq_needed_exp)))
+               cbdelay = 0;
+
        WRITE_ONCE(ssp->srcu_last_gp_end, ktime_get_mono_fast_ns());
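
ULONG_CMP_LT() is the wraparound-safe sequence comparison used in the hunk above: cbdelay drops to 0 only when an expedited grace period has been requested beyond the current one. A minimal userspace sketch with hypothetical sequence values (the macro definition mirrors kernel/rcu/rcu.h):

#include <limits.h>
#include <stdio.h>

/* Wraparound-safe "a is before b" test, as in kernel/rcu/rcu.h. */
#define ULONG_CMP_LT(a, b)     (ULONG_MAX / 2 < (a) - (b))

int main(void)
{
        /* Hypothetical grace-period sequence values, including a wrapped pair. */
        unsigned long gp_seq = 100, gp_seq_needed_exp = 104;
        unsigned long wrapped_seq = ULONG_MAX - 3, wrapped_needed_exp = 4;

        printf("expedited GP pending:        %d\n",
               ULONG_CMP_LT(gp_seq, gp_seq_needed_exp));        /* prints 1 */
        printf("expedited GP pending (wrap): %d\n",
               ULONG_CMP_LT(wrapped_seq, wrapped_needed_exp));  /* prints 1 */
        printf("no expedited request:        %d\n",
               ULONG_CMP_LT(gp_seq_needed_exp, gp_seq));        /* prints 0 */
        return 0;
}
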
Thank you both for the testing and the proposed fix!

Test here:
qemu: https://github.com/qemu/qemu/tree/stable-6.1
kernel:
https://github.com/Linaro/linux-kernel-uadk/tree/uacce-devel-5.19-srcu-test
(in case the test patch is not clear, it has been pushed to the git tree)

Hardware: aarch64

1. 5.18-rc6
real    0m8.402s
user    0m3.015s
sys     0m1.102s

2. 5.19-rc1
real    2m41.433s
user    0m3.097s
sys     0m1.177s

3. 5.19-rc1 + fix1 from Paul
real    2m43.404s
user    0m2.880s
sys     0m1.214s

4. 5.19-rc1 + fix2: fix1 + Remove "if (!jbase)" block
real    0m15.262s
user    0m3.003s
sys     0m1.033s

When building the kernel at the same time, the load time becomes longer.

5. 5.19-rc1 + fix3: fix1 + SRCU_MAX_NODELAY_PHASE 1000000
real    0m15.215s
user    0m2.942s
sys    0m1.172s

6. 5.19-rc1 + fix4: fix1 + Neeraj's change of srcu_gp_end 
real    1m23.936s
user    0m2.969s
sys    0m1.181s

And thank you for the testing!

Could you please try fix3 + Neeraj's change of srcu_gp_end?

That is, fix1 + SRCU_MAX_NODELAY_PHASE 1000000 + Neeraj's change of
srcu_gp_end.

Also, at what value of SRCU_MAX_NODELAY_PHASE do the boot
times start rising? This is probably best done by starting with
SRCU_MAX_NODELAY_PHASE=100000 and dividing by (say) ten on each run
until boot time becomes slow, followed by a binary search between the
last two values. (The idea is to bias the search so that fast boot
times are the common case.)
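
A sketch of that search procedure, under the assumption that each probe is a timed boot; boot_is_slow() is a hypothetical stand-in for rebuilding with the given SRCU_MAX_NODELAY_PHASE and measuring boot time, and its threshold is a placeholder:

#include <stdbool.h>
#include <stdio.h>

/*
 * Hypothetical stand-in: rebuild with SRCU_MAX_NODELAY_PHASE set to
 * "phase", boot, and report whether boot time was slow.  The threshold
 * below is a placeholder for illustration only.
 */
static bool boot_is_slow(unsigned long phase)
{
        return phase < 250;
}

int main(void)
{
        unsigned long fast = 100000;    /* known-fast starting value */
        unsigned long slow;

        /* Divide by ten on each run until boot time becomes slow. */
        while (fast / 10 > 0 && !boot_is_slow(fast / 10))
                fast /= 10;
        slow = fast / 10;

        /* Binary search between the last fast and first slow values. */
        while (fast - slow > 1) {
                unsigned long mid = slow + (fast - slow) / 2;

                if (boot_is_slow(mid))
                        slow = mid;
                else
                        fast = mid;
        }

        printf("smallest fast SRCU_MAX_NODELAY_PHASE: %lu\n", fast);
        return 0;
}
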

With SRCU_MAX_NODELAY_PHASE 100, boot becomes slower.


8. 5.19-rc1 + fix6: fix4 + SRCU_MAX_NODELAY_PHASE 1000000
real    0m11.154s (~12s)
user    0m2.919s
sys     0m1.064s

9. 5.19-rc1 + fix7: fix4 + SRCU_MAX_NODELAY_PHASE 10000
real    0m11.258s
user    0m3.113s
sys     0m1.073s

10. 5.19-rc1 + fix8: fix4 + SRCU_MAX_NODELAY_PHASE 100
real    0m30.053s (~32s)
user    0m2.827s
sys     0m1.161s

By the way, if a kernel build is running on the board at the same time (using memory), the time becomes much longer:
real    1m2.763s

11. 5.19-rc1 + fix9: fix4 + SRCU_MAX_NODELAY_PHASE 1000
real    0m11.443s
user    0m3.022s
sys     0m1.052s

Thanks


More test details: https://docs.qq.com/doc/DRXdKalFPTVlUbFN5

And thank you for these details.

Thanx, Paul