[PATCH v2] workqueue: let WORKER_CPU_INTENSIVE be included in watchdog

From: Tio Zhang
Date: Tue Aug 22 2023 - 07:07:06 EST


When a pool has a worker with WORKER_CPU_INTENSIVE set but its other
workers are not busy, pool->worklist is mostly empty, so the
CPU-intensive work item keeps escaping the watchdog's check. The
watchdog may therefore fail to detect a forever-running work item in a
WQ_CPU_INTENSIVE workqueue.

Also, after commit 616db8779b1e3f93075df691432cccc5ef3c3ba0, workers
executing CPU-hogging work items are automatically marked
WORKER_CPU_INTENSIVE, which could let the watchdog miss any such work
item running forever.

Signed-off-by: Tio Zhang <tiozhang@xxxxxxxxxxxxxx>
---
 kernel/workqueue.c | 22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 02a8f402eeb5..564d96c38d4d 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -6277,13 +6277,29 @@ static void wq_watchdog_timer_fn(struct timer_list *unused)
 	if (!thresh)
 		return;
 
-	rcu_read_lock();
+	mutex_lock(&wq_pool_mutex);
 
 	for_each_pool(pool, pi) {
+		struct worker *worker;
 		unsigned long pool_ts, touched, ts;
+		bool check_intensive = false;
 
 		pool->cpu_stall = false;
-		if (list_empty(&pool->worklist))
+
+		/*
+		 * Not sure whether WORKER_UNBOUND should be included
+		 * too, since letting an unbound work item run for more
+		 * than e.g. 30 seconds also seems unacceptable.
+		 */
+		mutex_lock(&wq_pool_attach_mutex);
+		for_each_pool_worker(worker, pool) {
+			if (worker->flags & WORKER_CPU_INTENSIVE) {
+				check_intensive = true;
+				break;
+			}
+		}
+		mutex_unlock(&wq_pool_attach_mutex);
+
+		if (list_empty(&pool->worklist) && !check_intensive)
 			continue;
 
 		/*
@@ -6320,7 +6336,7 @@ static void wq_watchdog_timer_fn(struct timer_list *unused)
 
 	}
 
-	rcu_read_unlock();
+	mutex_unlock(&wq_pool_mutex);
 
 	if (lockup_detected)
 		show_all_workqueues();
--
2.17.1