Re: [PATCH v2] sched: let __sched_period() use rq's nr_running

From: Byungchul Park
Date: Mon Jul 13 2015 - 22:27:02 EST


On Mon, Jul 13, 2015 at 06:25:35PM +0900, Byungchul Park wrote:
> On Mon, Jul 13, 2015 at 10:26:09AM +0200, Peter Zijlstra wrote:
> > On Fri, Jul 10, 2015 at 05:11:30PM +0900, byungchul.park@xxxxxxx wrote:
> > > From: Byungchul Park <byungchul.park@xxxxxxx>
> > >
> > > __sched_period() returns the period a rq can have. The period has to be
> > > stretched by the number of tasks *the rq has* when nr_running > nr_latency.
> > > Otherwise, a task's slice can end up much smaller than
> > > sysctl_sched_min_granularity, depending on its position in the tg
> > > hierarchy when CONFIG_FAIR_GROUP_SCHED is enabled.
> > >
> > > Signed-off-by: Byungchul Park <byungchul.park@xxxxxxx>
> > > ---
> > > kernel/sched/fair.c | 2 +-
> > > 1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > index 09456fc..8ae7aeb 100644
> > > --- a/kernel/sched/fair.c
> > > +++ b/kernel/sched/fair.c
> > > @@ -635,7 +635,7 @@ static u64 __sched_period(unsigned long nr_running)
> > > */
> > > static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
> > > {
> > > - u64 slice = __sched_period(cfs_rq->nr_running + !se->on_rq);
> > > + u64 slice = __sched_period(rq_of(cfs_rq)->nr_running + !se->on_rq);
> > >
> > > for_each_sched_entity(se) {
> > > struct load_weight *load;
> >
> > This really doesn't make sense; look at what that
> > for_each_sched_entity() loop does below this.
>
> hello,
>
> The for_each_sched_entity() loop distributes the slice to each se, taking
> both the hierarchy and each se's weight into account, walking up from the
> passed se to the top rq.
>
> I am only talking about how the whole period value is obtained. My question
> is: why does it use the local cfs_rq's nr_running to get the whole period
> value?

I admit my patch needs more work.
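To make the problem concrete, here is a hypothetical example assuming the
unscaled defaults (sysctl_sched_latency = 6ms, sysctl_sched_min_granularity
= 0.75ms, sched_nr_latency = 8). Suppose a task group's cfs_rq has 2
runnable tasks while the rq as a whole has 16, and the group entity holds
1/8 of the rq's load. __sched_period(2) returns the unstretched 6ms, and
with the two tasks at equal weight the for_each_sched_entity() walk yields
a slice of 6ms * 1/2 * 1/8 = 0.375ms, half of sysctl_sched_min_granularity.
Starting from the rq's nr_running (16) would instead stretch the period to
16 * 0.75ms = 12ms, and the same walk would yield a slice of 0.75ms.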

But I still have a question: do you think it is right to use the local
cfs_rq's nr_running to get the whole period value?
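
For reference, a sketch of what kernel/sched/fair.c looks like around this
change (abridged; helpers such as __calc_delta() and update_load_add() are
assumed from the surrounding code):

static u64 __sched_period(unsigned long nr_running)
{
        u64 period = sysctl_sched_latency;
        unsigned long nr_latency = sched_nr_latency;

        /* stretch the period so each task keeps at least min granularity */
        if (unlikely(nr_running > nr_latency)) {
                period = sysctl_sched_min_granularity;
                period *= nr_running;
        }

        return period;
}

static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
        u64 slice = __sched_period(cfs_rq->nr_running + !se->on_rq);

        for_each_sched_entity(se) {
                struct load_weight *load;
                struct load_weight lw;

                cfs_rq = cfs_rq_of(se);
                load = &cfs_rq->load;

                if (unlikely(!se->on_rq)) {
                        lw = cfs_rq->load;
                        update_load_add(&lw, se->load.weight);
                        load = &lw;
                }
                /* scale the slice by this se's share of its cfs_rq's load */
                slice = __calc_delta(slice, se->load.weight, load);
        }
        return slice;
}

Note that the period is computed from the local cfs_rq's nr_running only,
while the loop then scales it down by the weight share at every level of
the hierarchy; nothing on the way up restores the per-task slice to at
least sysctl_sched_min_granularity.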

>
> >
> > I agree that sched_slice() is a difficult proposition in the face of
> > cgroup, but everything is, cgroups suck arse, they make everything hard.
>
> I am not taking issue with the way cgroups work, even though they already
> have many problems, as you said.
>
> Thank you for commenting,
> Byungchul
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/