Re: [PATCH 1/1] sched/core: Fix stuck on completion for affine_move_task() when stopper disable

From: Peter Zijlstra
Date: Thu Sep 28 2023 - 11:16:43 EST


On Wed, Sep 27, 2023 at 03:57:35PM +0000, Kuyo Chang (張建文) wrote:
> On Wed, 2023-09-27 at 10:08 +0200, Peter Zijlstra wrote:
> >
> > External email : Please do not click links or open attachments until
> > you have verified the sender or the content.
> > On Wed, Sep 27, 2023 at 11:34:28AM +0800, Kuyo Chang wrote:
> > > From: kuyo chang <kuyo.chang@xxxxxxxxxxxx>
> > >
> > > [Syndrome] hung detect shows below warning msg
> > > [ 4320.666557] [ T56] khungtaskd: [name:hung_task&]INFO: task
> > stressapptest:17803 blocked for more than 3600 seconds.
> > > [ 4320.666589] [ T56] khungtaskd:
> > [name:core&]task:stressapptest state:D stack:0 pid:17803
> > ppid:17579 flags:0x04000008
> > > [ 4320.666601] [ T56] khungtaskd: Call trace:
> > > [ 4320.666607] [ T56] khungtaskd: __switch_to+0x17c/0x338
> > > [ 4320.666642] [ T56] khungtaskd: __schedule+0x54c/0x8ec
> > > [ 4320.666651] [ T56] khungtaskd: schedule+0x74/0xd4
> > > [ 4320.666656] [ T56] khungtaskd: schedule_timeout+0x34/0x108
> > > [ 4320.666672] [ T56] khungtaskd: do_wait_for_common+0xe0/0x154
> > > [ 4320.666678] [ T56] khungtaskd: wait_for_completion+0x44/0x58
> > > [ 4320.666681] [ T56]
> > khungtaskd: __set_cpus_allowed_ptr_locked+0x344/0x730
> > > [ 4320.666702] [ T56]
> > khungtaskd: __sched_setaffinity+0x118/0x160
> > > [ 4320.666709] [ T56] khungtaskd: sched_setaffinity+0x10c/0x248
> > > [ 4320.666715] [ T56]
> > khungtaskd: __arm64_sys_sched_setaffinity+0x15c/0x1c0
> > > [ 4320.666719] [ T56] khungtaskd: invoke_syscall+0x3c/0xf8
> > > [ 4320.666743] [ T56] khungtaskd: el0_svc_common+0xb0/0xe8
> > > [ 4320.666749] [ T56] khungtaskd: do_el0_svc+0x28/0xa8
> > > [ 4320.666755] [ T56] khungtaskd: el0_svc+0x28/0x9c
> > > [ 4320.666761] [ T56] khungtaskd: el0t_64_sync_handler+0x7c/0xe4
> > > [ 4320.666766] [ T56] khungtaskd: el0t_64_sync+0x18c/0x190
> > >
> > > [Analysis]
> > >
> > > After add some debug footprint massage, this issue happened at
> > stopper
> > > disable case.
> > > It cannot exec migration_cpu_stop fun to complete migration.
> > > This will cause stuck on wait_for_completion.
> >
> > How did you get in this situation?
> >
>
> This issue occurs at CPU hotplug/set_affinity stress test.
> The reproduce ratio is very low(about once a week).
>
> So I add/record some debug message to snapshot the task status while it
> stuck on wait_for_completion.
>
> Below is the snapshot status while issue happened:
>
> cpu_active_mask is 0xFC
> new_mask is 0x8
> pending->arg.dest_cpu is 0x3
> task_on_cpu(rq,p) is 1
> task_cpu is 0x2
> p__state = TASK_RUNNING
> flag is SCA_CHACK|SCA_USER
> stop_one_cpu_nowait(stopper->enabled) return value is false.
>
> I also record the footprint at migration_cpu_stop.
> It shows the migration_cpu_stop is not execute.

AFAICT this is migrate_enable(), which acts on current, so how can the
CPU that current runs on go away?

That is completely unexplained. You've not given a proper description of
the race scenario. And because you've not, we can't even begin to talk
about how best to address the issue.

> > struct task_struct *p, struct rq_flag
> > > task_rq_unlock(rq, p, rf);
> > >
> > > if (!stop_pending) {
> > > -stop_one_cpu_nowait(cpu_of(rq), migration_cpu_stop,
> > > - &pending->arg, &pending->stop_work);
> > > +if (!stop_one_cpu_nowait(cpu_of(rq), migration_cpu_stop,
> > > + &pending->arg, &pending->stop_work))
> > > +return -ENOENT;
> >
> > And -ENOENT is the right return code for when the target CPU is not
> > available?
> >
> > I suspect you're missing more than halp the picture and this is a
> > band-aid solution at best. Please try harder.
> >
>
> I think -ENOENT means stopper is not execute?
> Perhaps the error code is abused, or could you kindly give me some
> suggestions?

Well, at this point you're leaving the whole affine_move_task()
machinery in an undefined state, which is a much bigger problem than the
weird return value.

Please read through that function and its comments a number of times. If
you're not a little nervous, you've not understood the thing.

Your patch has at least one very obvious resource leak.