Re: [RFC PATCH 1/8] workqueue: Unconditionally set cpumask in worker_attach_to_pool()

From: Tejun Heo
Date: Tue Aug 16 2022 - 17:18:53 EST


cc'ing Waiman.

On Thu, Aug 04, 2022 at 04:41:28PM +0800, Lai Jiangshan wrote:
> From: Lai Jiangshan <jiangshan.ljs@xxxxxxxxxxxx>
>
> If a worker is spuriously woken up after kthread_bind_mask() but before
> worker_attach_to_pool(), and a CPU hot-[un]plug event happens in that
> window, the scheduler may push the worker task away from its bound CPU
> and change its affinity, and worker_attach_to_pool() doesn't rebind it
> properly.
>
> Set the affinity unconditionally in worker_attach_to_pool() to fix the
> problem.
>
> This also prepares for moving worker_attach_to_pool() from
> create_worker() to the start of worker_thread(), which will open the
> same window even without a spurious wakeup.

So, this looks fine but I think the whole thing can be simplified if we
integrate this with the persistent user cpumask change that Waiman is
working on. We can just set the cpumask once during init and let the
scheduler core figure out what the current effective mask is as CPU
availability changes.
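
For reference, the unconditional rebind the patch is doing would look roughly
like the sketch below. This is a kernel-context fragment, not a standalone
buildable unit; the identifiers (wq_pool_attach_mutex, pool->attrs->cpumask,
worker->task) follow kernel/workqueue.c, but the surrounding logic is
simplified and omits details such as the POOL_DISASSOCIATED handling:

```c
static void worker_attach_to_pool(struct worker *worker,
				  struct worker_pool *pool)
{
	mutex_lock(&wq_pool_attach_mutex);

	/*
	 * Rebind unconditionally: the worker may have been pushed off
	 * its bound CPU by a hotplug event between kthread_bind_mask()
	 * and this point, so the affinity set at bind time can no
	 * longer be trusted.
	 */
	set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask);

	worker->pool = pool;
	list_add_tail(&worker->node, &pool->workers);

	mutex_unlock(&wq_pool_attach_mutex);
}
```

With the persistent user cpumask approach, the set_cpus_allowed_ptr() call
above would be done once at init and the scheduler core would maintain the
effective mask across hotplug transitions.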

http://lkml.kernel.org/r/20220816192734.67115-4-longman@xxxxxxxxxx

Thanks.

--
tejun