Re: [PATCH] libata: use single threaded work queue

From: Tejun Heo
Date: Thu Aug 20 2009 - 08:46:08 EST


Hello, Alan.

Alan Cox wrote:
>> It's not about needing per-cpu binding but if works can be executed on
>> the same cpu they were issued, it's almost always beneficial. The
>> only reason why we have single threaded workqueue now is to limit the
>> number of threads.
>
> That would argue very strongly for putting all the logic in one place so
> everything shares queues.

Yes, it does.

>>> Only if you make the default assumed max wait time for the work too low -
>>> its a tunable behaviour in fact.
>> If the default workqueue is made to manage concurrency well, most
>> works should be able to just use it, so the queue will contain both
>> long running ones and short running ones which can disturb the current
>> batch like processing of the default workqueue which is assumed to
>> have only short ones.
>
> Not sure why it matters - the short ones will instead end up being
> processed serially in parallel to the hog.

The problem is how to assign works to workers. With long running
works in the mix, the workqueue will definitely need some reserve in
the worker pool. When short works are queued consecutively, without a
special provision they'll end up being served by different workers,
increasing cache footprint and execution overhead. The special
provision could be something timer based, but re-arming a timer for
each work is a bit expensive. I think it needs to be more mechanical
rather than depending on heuristics or timing.

>> kthreads). It would be great if a single work API is exported and
>> concurrency is managed automatically so that no one else has to worry
>> about concurrency but achieving that requires much more intelligence
>> on the workqueue implementation as the basic concurrency policies
>> which used to be imposed by those segregations need to be handled
>> automatically. Maybe it's better trade-off to leave those
>> segregations as-are and just add another workqueue type with dynamic
>> thread pool.
>
> The more intelligence in the workqueue logic, the less in the drivers and
> the more it can be adjusted and adapt itself.

Yeap, sure.

> Consider things like power management which might argue for breaking
> the cpu affinity to avoid waking up a sleeping CPU in preference to
> jumping work between processors

Yeah, that's one thing to consider too, but works being scheduled on a
particular cpu are usually the result of other activities going on on
that cpu. I don't think the workqueue needs to be modified for that.
If the other activities move, the workqueue will automatically follow.

Thanks.

--
tejun