[PATCH -tip V4 8/8] workqueue: Fix affinity of kworkers when attaching into pool

From: Lai Jiangshan
Date: Mon Jan 11 2021 - 09:27:57 EST


From: Lai Jiangshan <laijs@xxxxxxxxxxxxxxxxx>

When worker_attach_to_pool() is called, we should not bind the workers
to pool->attrs->cpumask when no CPU in it is online or will come online.

Otherwise, it may trigger a BUG_ON() (quoting Valentin):
Per-CPU kworkers forcefully migrated away by hotplug via
workqueue_offline_cpu() can end up spawning more kworkers via

manage_workers() -> maybe_create_worker()

Workers created at this point will be bound using

pool->attrs->cpumask

which in this case is wrong, as the hotplug state machine already
migrated all pinned kworkers away from this CPU. This ends up
triggering the BUG_ON condition in sched_cpu_dying() (i.e. there's
a kworker enqueued on the dying rq).
(end of quote)

We need to know where we are in the hotplug stages to determine
whether pool->attrs->cpumask is valid, so we check %POOL_DISASSOCIATED
and wq_unbound_online_cpumask, which indicate the current hotplug
stage.

For the per-CPU kworker case, %POOL_DISASSOCIATED marks whether the
kworkers of the pool are bound or unbound, so it is used to detect
whether pool->attrs->cpumask is valid to use at attach time.

For unbound workers, we should not set an online-but-not-active cpumask
on the workers. The just-introduced wq_unbound_online_cpumask has the
property that a going-down CPU is cleared from it earlier than from
cpu_active_mask, and a coming-up CPU is set in it later than in
cpu_active_mask. So it is a perfect fit for detecting whether
pool->attrs->cpumask is valid to use.

To use wq_unbound_online_cpumask in worker_attach_to_pool(), we need to
protect wq_unbound_online_cpumask with wq_pool_attach_mutex.

Cc: Qian Cai <cai@xxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Vincent Donnefort <vincent.donnefort@xxxxxxx>
Link: https://lore.kernel.org/lkml/20201210163830.21514-3-valentin.schneider@xxxxxxx/
Link: https://lore.kernel.org/r/ff62e3ee994efb3620177bf7b19fab16f4866845.camel@xxxxxxxxxx
Reported-by: Qian Cai <cai@xxxxxxxxxx>
Reviewed-by: Valentin Schneider <valentin.schneider@xxxxxxx>
Acked-by: Tejun Heo <tj@xxxxxxxxxx>
Tested-by: Paul E. McKenney <paulmck@xxxxxxxxxx>
Signed-off-by: Lai Jiangshan <laijs@xxxxxxxxxxxxxxxxx>
---
kernel/workqueue.c | 39 ++++++++++++++++++++++++++++++---------
1 file changed, 30 insertions(+), 9 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index b012adbeff9f..d1f1b863c52a 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -310,7 +310,7 @@ static bool workqueue_freezing; /* PL: have wqs started freezing? */
/* PL: allowable cpus for unbound wqs and work items */
static cpumask_var_t wq_unbound_cpumask;

-/* PL: online cpus (cpu_online_mask with the going-down cpu cleared) */
+/* PL&A: online cpus (cpu_online_mask with the going-down cpu cleared) */
static cpumask_var_t wq_unbound_online_cpumask;

/* CPU where unbound work was last round robin scheduled from this CPU */
@@ -1849,19 +1849,36 @@ static struct worker *alloc_worker(int node)
static void worker_attach_to_pool(struct worker *worker,
struct worker_pool *pool)
{
+ bool pool_cpumask_active;
+
mutex_lock(&wq_pool_attach_mutex);

/*
- * set_cpus_allowed_ptr() will fail if the cpumask doesn't have any
- * online CPUs. It'll be re-applied when any of the CPUs come up.
+ * The wq_pool_attach_mutex ensures %POOL_DISASSOCIATED and
+ * wq_unbound_online_cpumask remain stable across this function.
+ * See the comments above the definitions of the flag and
+ * wq_unbound_online_cpumask for details.
+ *
+ * For percpu pools, whether pool->attrs->cpumask is legitimate
+ * for @worker task depends on where it is in the hotplug stages
+ * divided by workqueue_online/offline_cpu(). Refer to those
+ * functions to see how they toggle %POOL_DISASSOCIATED and update
+ * the cpumask of the workers.
+ *
+ * For unbound pools, whether pool->attrs->cpumask is legitimate
+ * for @worker task depends on where it is in the hotplug stages
+ * divided by workqueue_unbound_online/offline_cpu(). Refer to
+ * those functions to see how they update wq_unbound_online_cpumask
+ * and the cpumask of the workers.
*/
- set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask);
+ pool_cpumask_active = pool->cpu >= 0 ? !(pool->flags & POOL_DISASSOCIATED) :
+ cpumask_intersects(pool->attrs->cpumask, wq_unbound_online_cpumask);
+
+ if (pool_cpumask_active)
+ WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask) < 0);
+ else
+ WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0);

- /*
- * The wq_pool_attach_mutex ensures %POOL_DISASSOCIATED remains
- * stable across this function. See the comments above the flag
- * definition for details.
- */
if (pool->flags & POOL_DISASSOCIATED)
worker->flags |= WORKER_UNBOUND;

@@ -5149,7 +5166,9 @@ int workqueue_unbound_online_cpu(unsigned int cpu)
int pi;

mutex_lock(&wq_pool_mutex);
+ mutex_lock(&wq_pool_attach_mutex);
cpumask_set_cpu(cpu, wq_unbound_online_cpumask);
+ mutex_unlock(&wq_pool_attach_mutex);

/* update CPU affinity of workers of unbound pools */
for_each_pool(pool, pi) {
@@ -5176,7 +5195,9 @@ int workqueue_unbound_offline_cpu(unsigned int cpu)
int pi;

mutex_lock(&wq_pool_mutex);
+ mutex_lock(&wq_pool_attach_mutex);
cpumask_clear_cpu(cpu, wq_unbound_online_cpumask);
+ mutex_unlock(&wq_pool_attach_mutex);

/* update CPU affinity of workers of unbound pools */
for_each_pool(pool, pi) {
--
2.19.1.6.gb485710b