Re: selftests: net: pmtu.sh: Unable to handle kernel paging request at virtual address

From: Eric Dumazet
Date: Thu Aug 31 2023 - 09:12:52 EST


On Thu, Aug 31, 2023 at 2:17 PM Hillf Danton <hdanton@xxxxxxxx> wrote:
>
> On Wed, 30 Aug 2023 21:44:57 +0900 Tetsuo Handa <penguin-kernel@xxxxxxxxxxxxxxxxxxx>
> >On 2023/08/30 20:26, Hillf Danton wrote:
> >>> <4>[ 399.014716] Call trace:
> >>> <4>[ 399.015702] percpu_counter_add_batch+0x28/0xd0
> >>> <4>[ 399.016399] dst_destroy+0x44/0x1e4
> >>> <4>[ 399.016681] dst_destroy_rcu+0x14/0x20
> >>> <4>[ 399.017009] rcu_core+0x2d0/0x5e0
> >>> <4>[ 399.017311] rcu_core_si+0x10/0x1c
> >>> <4>[ 399.017609] __do_softirq+0xd4/0x23c
> >>> <4>[ 399.017991] ____do_softirq+0x10/0x1c
> >>> <4>[ 399.018320] call_on_irq_stack+0x24/0x4c
> >>> <4>[ 399.018723] do_softirq_own_stack+0x1c/0x28
> >>> <4>[ 399.022639] __irq_exit_rcu+0x6c/0xcc
> >>> <4>[ 399.023434] irq_exit_rcu+0x10/0x1c
> >>> <4>[ 399.023962] el1_interrupt+0x8c/0xc0
> >>> <4>[ 399.024810] el1h_64_irq_handler+0x18/0x24
> >>> <4>[ 399.025324] el1h_64_irq+0x64/0x68
> >>> <4>[ 399.025612] _raw_spin_lock_bh+0x0/0x6c
> >>> <4>[ 399.026102] cleanup_net+0x280/0x45c
> >>> <4>[ 399.026403] process_one_work+0x1d4/0x310
> >>> <4>[ 399.027140] worker_thread+0x248/0x470
> >>> <4>[ 399.027621] kthread+0xfc/0x184
> >>> <4>[ 399.028068] ret_from_fork+0x10/0x20
> >>
> >> static void cleanup_net(struct work_struct *work)
> >> {
> >> ...
> >>
> >> 	synchronize_rcu();
> >>
> >> 	/* Run all of the network namespace exit methods */
> >> 	list_for_each_entry_reverse(ops, &pernet_list, list)
> >> 		ops_exit_list(ops, &net_exit_list);
> >> ...
> >>
> >> Why did the RCU sync above fail to work in this report, Eric?
> >
> > Why do you assume that synchronize_rcu() failed to work?
>
> In the ipv6 pernet_operations [1] for instance, dst_entries_destroy() is
> invoked after RCU sync to ensure that nobody is using the exiting net,
> but this report shows that protection falls apart.

Because synchronize_rcu() is not the same as rcu_barrier().

synchronize_rcu() only waits for a grace period to elapse; it does not
wait for already-queued call_rcu() callbacks (here dst_destroy_rcu())
to have run. So cleanup_net() can tear down the pernet counter while a
dst_destroy_rcu() callback that decrements it is still pending.

The dst_entries_add() / percpu_counter_add_batch() call should not
happen after an rcu grace period.
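
A minimal sketch of the distinction, with hypothetical names standing
in for the dst code (struct obj for the dst, obj_free_rcu() for
dst_destroy_rcu(), teardown() for the pernet exit path):

/* Hypothetical sketch modeling the race in this report. */
#include <linux/percpu_counter.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct obj {
	struct rcu_head rcu;
	struct percpu_counter *counter;	/* like the dst_ops entries counter */
};

static void obj_free_rcu(struct rcu_head *head)
{
	struct obj *o = container_of(head, struct obj, rcu);

	percpu_counter_dec(o->counter);	/* like dst_entries_add(ops, -1) */
	kfree(o);
}

static void obj_release(struct obj *o)
{
	/* Like dst_release(): defer the destructor past a grace period. */
	call_rcu(&o->rcu, obj_free_rcu);
}

static void teardown(struct percpu_counter *counter)
{
	/* Like cleanup_net(): a grace period elapses here, but queued
	 * obj_free_rcu() invocations may not have run yet.
	 */
	synchronize_rcu();

	/* obj_free_rcu() can still fire after this and touch freed memory.
	 * rcu_barrier() would wait for the callbacks; synchronize_rcu()
	 * does not.
	 */
	percpu_counter_destroy(counter);
}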

Something like this (untested) patch:

diff --git a/net/core/dst.c b/net/core/dst.c
index 980e2fd2f013b3e50cc47ed0666ee5f24f50444b..f02fdd1da6066a4d56c2a0aa8038eca76d62f8bd 100644
--- a/net/core/dst.c
+++ b/net/core/dst.c
@@ -163,8 +163,13 @@ EXPORT_SYMBOL(dst_dev_put);
 
 void dst_release(struct dst_entry *dst)
 {
-	if (dst && rcuref_put(&dst->__rcuref))
+	if (dst && rcuref_put(&dst->__rcuref)) {
+		if (!(dst->flags & DST_NOCOUNT)) {
+			dst->flags |= DST_NOCOUNT;
+			dst_entries_add(dst->ops, -1);
+		}
 		call_rcu_hurry(&dst->rcu_head, dst_destroy_rcu);
+	}
 }
 EXPORT_SYMBOL(dst_release);
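
This works because dst_destroy(), which later runs from the RCU
callback, already skips the accounting for DST_NOCOUNT entries, so the
counter is decremented exactly once and before the grace period that
cleanup_net() waits for. Paraphrasing the existing check in
net/core/dst.c:

struct dst_entry *dst_destroy(struct dst_entry *dst)
{
	...
	if (!(dst->flags & DST_NOCOUNT))
		dst_entries_add(dst->ops, -1);
	...
}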

It is not even clear why we are still counting dst entries these days.
We removed the ipv4 route cache a long time ago, and ipv6 got a
similar treatment.