Re: [PATCH 4/6] workqueue: Automatically mark CPU-hogging work items CPU_INTENSIVE

From: Tejun Heo
Date: Wed May 10 2023 - 12:08:28 EST


Hello, Peter.

On Wed, May 10, 2023 at 05:09:46PM +0200, Peter Zijlstra wrote:
> > @@ -2348,6 +2382,7 @@ __acquires(&pool->lock)
> > worker->current_work = work;
> > worker->current_func = work->func;
> > worker->current_pwq = pwq;
> > + worker->current_at = worker->task->se.sum_exec_runtime;
>
> That only gets updated at scheduling events, it's not a 'running' clock.

I think it gets updated on each tick and at preemption checks, right? That
should be more than enough here. Just using jiffies would be fine on higher
HZ machines; however, when the threshold isn't significantly longer than a
HZ period, it becomes a bit problematic. Reading the highres clock
separately is an option but a bit more expensive. So, sum_exec_runtime
seems to fit pretty well. FWIW, it's already used outside the scheduler
proper too.

> > work_data = *work_data_bits(work);
> > worker->current_color = get_work_color(work_data);
> >
>
> Anyway, it occurs to me that if all you want is to measure long running
> works, then would it not be much easier to simply forward the tick?

Ah, that reminds me that I forgot to update the origin timestamp on
sleeping. It should be tracking the continuous CPU consumption between
sleeps.

> Something like the below.. Then this tick handler (which will have just
> updated ->sum_exec_runtime BTW) can do that above 'work-be-long-running'
> check.
>
> Or am I missing something? Seems simpler than hijacking preempt-out.

One advantage of doing it from preempt-out is that workqueue can
immediately release other per-cpu work items without any delay, as the
moment the violating work item leaves the CPU is the exact point for both
detection and action.

That said, this mechanism doesn't have to be super accurate, so simpler
code has its benefits. I'll take a stab at it.

Thanks.

--
tejun