Re: [RFC v1] add new io-scheduler to use cgroup on high-speed device

From: sanbai
Date: Tue Jun 04 2013 - 23:32:27 EST


On 2013/06/05 11:03, Tejun Heo wrote:
> (cc'ing Kent. Original posting at
> http://thread.gmane.org/gmane.linux.kernel/1502484 )
>
> Hello,
>
> On Wed, Jun 05, 2013 at 10:09:31AM +0800, Robin Dong wrote:
> > We want to use blkio.cgroup on high-speed devices (like fusionio) for our mysql clusters.
> > After testing different io-schedulers, we found that cfq is too slow and deadline can't work with cgroups.
> > So we developed a new io-scheduler: tpps (Tiny Parallel Proportion Scheduler). It dispatches requests
> > only by using their individual weight and total weight (proportion), so it is simple and efficient.
> >
> > Test case: fusionio card, 4 cgroups, iodepth=512
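To make the dispatch idea above concrete, here is a minimal userspace
sketch of weight-proportional budget splitting; the structure and names
are illustrative only, not code from the actual patch:

/*
 * Illustrative sketch: split one dispatch round's budget across
 * groups in proportion to weight / total_weight, counting only
 * groups that actually have requests queued.
 */
#include <stdio.h>

struct group {
	const char *name;
	int weight;	/* per-cgroup weight, e.g. blkio.weight */
	int queued;	/* requests waiting in this group       */
};

static void dispatch_round(struct group *grps, int n, int budget)
{
	int total = 0, i;

	/* total weight of groups with pending requests */
	for (i = 0; i < n; i++)
		if (grps[i].queued)
			total += grps[i].weight;

	for (i = 0; i < n; i++) {
		int quota;

		if (!grps[i].queued || !total)
			continue;
		/* this group's proportional slice of the budget */
		quota = budget * grps[i].weight / total;
		if (quota > grps[i].queued)
			quota = grps[i].queued;
		grps[i].queued -= quota;
		printf("%s: dispatched %d\n", grps[i].name, quota);
	}
}

int main(void)
{
	struct group g[] = {
		{ "grp1", 500, 1000 },
		{ "grp2", 250, 1000 },
		{ "grp3", 125, 1000 },
		{ "grp4", 125, 1000 },
	};

	dispatch_round(g, 4, 512);	/* one round at iodepth 512 */
	return 0;
}

With 4 cgroups of weights 500/250/125/125 and a round budget of 512
(matching the iodepth above), one round dispatches 256/128/64/64
requests respectively.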
> So, while I understand the intention behind it, I'm not sure a
> separate io-sched for this is what we want. Kent and Jens have been
> thinking about this lately so they'll probably chime in. From my POV,
> I see a few largish issues.
>
> * It has to be scalable with relatively large-scale SMP / NUMA
> configurations. It had better integrate with the blk-mq support
> currently being brewed.
Ok, I will go on to look at Jens's blk-mq branch.


> * It definitely has to support hierarchy. Nothing which doesn't
> support full hierarchy can be added to cgroup at this point.
Thanks for your note; hierarchy support will be added in the next version of the patch (see the sketch below).
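One natural definition of hierarchical proportion (an assumption for
illustration here; the thread does not fix the exact semantics) is that
a group's effective share is the product of its weight fraction at each
level of the tree. A minimal userspace sketch:

/*
 * Illustrative only -- not patch code. Walking from a group up to
 * the root, multiply the share by weight / sum-of-sibling-weights
 * at each level to get the group's effective share of the device.
 */
#include <stdio.h>

struct node {
	const char *name;
	int weight;
	struct node *parent;
	int level_weight_sum;	/* sum of sibling weights at this level */
};

static double effective_share(const struct node *n)
{
	double share = 1.0;

	for (; n->parent; n = n->parent)
		share *= (double)n->weight / n->level_weight_sum;
	return share;
}

int main(void)
{
	struct node root = { "root", 0, NULL, 0 };
	struct node a = { "A", 500, &root, 1000 };
	struct node a1 = { "A1", 300, &a, 400 };

	printf("%s gets %.3f of the device\n", a1.name,
	       effective_share(&a1));
	return 0;
}

So with root children A=500 and B=500, and A's children A1=300 and
A2=100, A1 ends up with 0.5 * 0.75 = 0.375 of the device.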

> * We already have separate implementations in blk-throtl and
> cfq-iosched. Maybe it's too late and too different for cfq-iosched
> given that it's primarily targeted at disks, but I wonder whether we
> can make blk-throtl generic and scalable enough to cover all other
> use cases.
I have the same feeling. Let's wait for responses from others.

Thanks.



--

Robin Dong (nickname: Sanbai)
Alibaba Group, Core Systems Department, Kernel Team
Ext: 72370
Mobile: 13520865473
email: sanbai@xxxxxxxxxx
