Re: workqueue thing

From: Tejun Heo
Date: Mon Dec 21 2009 - 09:19:36 EST


Hello, Arjan.

On 12/21/2009 10:53 PM, Arjan van de Ven wrote:
> I'm just wondering if even that is overkill; I suspect you can do
> entirely without the scheduler intrusion;
> just make a new thread for each work item, with some hysteresis:
>
> * threads should stay around for a bit before dying (you do that)
> * after some minimum nr of threads (say 4 per cpu), you wait, say,
>   0.1 seconds before deciding it's time to spawn more threads, to
>   smooth out spikes of very short-lived stuff.
>
> wouldn't that be a lot simpler than "ask the scheduler to see if
> they are all blocked"?  If they are all very busy churning cpu (say
> doing raid6 work, or btrfs checksumming) you still would want more
> threads, I suspect.
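
(For concreteness, the hysteresis described above would amount to
something like the sketch below; the struct, helper and constant names
are all invented for illustration and this is not code from the series.)

#define MIN_WORKERS_PER_CPU	4
#define SPAWN_DELAY		(HZ / 10)	/* ~0.1 sec */

/*
 * Sketch only: below the minimum pool size a worker is started right
 * away; above it, spawning is deferred by a timer so that short bursts
 * of work don't create threads which die immediately afterwards.
 */
static void maybe_spawn_worker(struct cpu_pool *pool)
{
	if (pool->nr_workers < MIN_WORKERS_PER_CPU)
		start_worker(pool);			/* invented helper */
	else if (!timer_pending(&pool->spawn_timer))
		mod_timer(&pool->spawn_timer, jiffies + SPAWN_DELAY);
}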

Ah... okay, there are two aspects in which cmwq involves the scheduler.

A. Concurrency management.  This is achieved by scheduler callbacks
   which watch how many workers are actually working (i.e., not blocked).

B. Deadlock avoidance. This requires migrating rescuers to CPUs under
allocation distress. The problem here is that
set_cpus_allowed_ptr() doesn't allow migrating tasks to CPUs which
are online but !active (CPU_DOWN_PREPARE).

B would be necessary whichever way you implement a shared worker pool,
unless you create in advance every worker which might possibly be
needed for allocation.
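
(To illustrate what B means in practice, here's a very rough sketch
with made-up names; the actual rescuer code in the series is more
involved.)

/*
 * Each workqueue which can sit in the memory-reclaim path keeps one
 * rescuer task.  When the workers of a CPU get stuck waiting for
 * memory, the rescuer moves itself to that CPU and processes the
 * pending work, so forward progress doesn't depend on creating a new
 * worker (which could itself need memory).
 */
static int rescuer_thread(void *data)
{
	struct my_workqueue *wq = data;		/* made-up type */

	while (!kthread_should_stop()) {
		int cpu = wait_for_mayday(wq);	/* made-up helper */

		/*
		 * This is the step which needs scheduler cooperation:
		 * set_cpus_allowed_ptr() refuses CPUs which are online
		 * but already !active (CPU_DOWN_PREPARE), so the bare
		 * call below isn't enough in that window.
		 */
		set_cpus_allowed_ptr(current, cpumask_of(cpu));
		process_pending_work(wq, cpu);	/* made-up helper */
	}
	return 0;
}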

For A, using the scheduler callbacks is far more efficient and robust.
It's conceptually pretty simple too.  If you look at the patch which
actually implements the dynamic pool, the amount of code necessary for
this part isn't that big.  Most of the complexity in the series comes
from trying to share workers, not from the dynamic pool management.
Even if it switched to a timer-based scheme, there simply wouldn't be
much reduction in complexity.  So I don't think there's any reason to
choose rather fragile heuristics when the same thing can be implemented
in a pretty mechanical way.
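
(The mechanical part of A boils down to something like the following
sketch; names are invented and details such as locking and worker
creation are omitted.)

/*
 * The per-cpu pool keeps a count of workers which aren't blocked.  The
 * scheduler tells the workqueue code when a worker is about to sleep
 * or wakes up again; another worker is woken only when the last
 * running one is about to block while work is still pending.
 */
struct worker_pool {
	atomic_t		nr_running;	/* workers not blocked */
	struct list_head	worklist;	/* pending work items */
};

/* called by the scheduler when a worker is about to sleep */
void pool_worker_sleeping(struct worker_pool *pool)
{
	if (atomic_dec_and_test(&pool->nr_running) &&
	    !list_empty(&pool->worklist))
		wake_up_idle_worker(pool);	/* made-up helper */
}

/* called by the scheduler when a worker wakes up again */
void pool_worker_waking_up(struct worker_pool *pool)
{
	atomic_inc(&pool->nr_running);
}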

Thanks.

--
tejun