Re: [RFC] Block IO Controller V2 - some results

From: Vivek Goyal
Date: Mon Nov 23 2009 - 10:21:47 EST


On Sat, Nov 21, 2009 at 06:57:47PM +0100, Corrado Zoccolo wrote:
> Hi Vivek,
> On Fri, Nov 20, 2009 at 8:50 PM, Corrado Zoccolo <czoccolo@xxxxxxxxx> wrote:
> > On Fri, Nov 20, 2009 at 7:42 PM, Vivek Goyal <vgoyal@xxxxxxxxxx> wrote:
> >>
> >> Should we also take into account the "cfqq->dispatched" request in
> >> determining whether we should enable idling on deep queue random seeky
> >> readers?
> >
> > Probably, but I think the most important thing is to do some averaging
> > or hysteresis, so the completion of a single request doesn't cause the
> > switch of a workload.
>
> can you test the new version of the idling patch?
> I record the fact that the queue had a large depth in a flag, which
> is reset only when the idle times out (so at the end of the burst).
> Idling is enabled if that flag is set (and think time is acceptable).
> This should fix the switching behaviour you observed.
>
> I decided to not count cfqq->dispatched to determine the depth.
> In this way, when all queues in the system are random the idling is
> enabled only if the requests queue builds up faster than it can be
> consumed.
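
The hysteresis Corrado describes can be sketched roughly as below. This is an illustrative model only, not the actual CFQ patch: the names (`queue_state`, `DEEP_THRESHOLD`, `should_idle`, etc.) are hypothetical. The key point is that the "deep" flag latches on when the queue builds up and is cleared only when the idle timer fires, so a single request completion cannot flip the workload classification.

```c
#include <stdbool.h>
#include <assert.h>

/* Depth at which a queue counts as "deep" (illustrative value). */
#define DEEP_THRESHOLD 4

struct queue_state {
	int queued;          /* requests currently waiting in the queue */
	bool deep;           /* latched: queue was deep during this burst */
	bool think_time_ok;  /* think time is within the idle window */
};

/* Called when a request is queued: latch the deep flag. */
static void on_request_queued(struct queue_state *q)
{
	q->queued++;
	if (q->queued >= DEEP_THRESHOLD)
		q->deep = true;
}

/* Called on completion: depth drops, but the flag is NOT cleared
 * here -- that is the hysteresis. */
static void on_request_completed(struct queue_state *q)
{
	if (q->queued > 0)
		q->queued--;
}

/* Called when the idle timer expires: the burst is over, reset. */
static void on_idle_timeout(struct queue_state *q)
{
	q->deep = false;
}

/* Idle only while the burst lasts and the think time is acceptable. */
static bool should_idle(const struct queue_state *q)
{
	return q->deep && q->think_time_ok;
}
```

Under this model, a deep random-seeky queue keeps its idle window for the whole burst, and idling only kicks in when requests build up faster than they complete.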

Hi Corrado,

This patch seems to work much better at marking the random writer
queue as sync and not letting it interfere with the sync-noidle workload.

So the frequent migration of the random writer queue across groups has stopped.

But there seems to be a different issue now: after some time, the random
writer queue stops generating enough traffic and gets deleted after one
request, and the root group then runs the random reader for a while. This
effectively changes the ratio in which random writers and random readers
get disk share.

I guess part of the dependency comes from kjournald, which is in the root
group. But there is something else too, because I don't see this happening
when there are no cgroups. I will do more debugging on this.

Thanks
Vivek
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/