Re: [RFC PATCH 11/11] sched: Add comments to find_busiest_group() function.

From: Gautham R Shenoy
Date: Wed Mar 25 2009 - 09:08:04 EST


On Wed, Mar 25, 2009 at 01:29:13PM +0100, Ingo Molnar wrote:
>
> * Gautham R Shenoy <ego@xxxxxxxxxx> wrote:
>
> > On Wed, Mar 25, 2009 at 02:44:27PM +0530, Gautham R Shenoy wrote:
> > > Add /** style comments around find_busiest_group(). Also add a few explanatory
> > > comments.
> >
> > <snip>
> >
> > > static struct sched_group *
> > > find_busiest_group(struct sched_domain *sd, int this_cpu,
> > > @@ -3593,17 +3613,31 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
> > > update_sd_lb_stats(sd, this_cpu, idle, sd_idle, cpus,
> > > balance, &sds);
> > >
> > > + /* Cases where imbalance does not exist from POV of this_cpu */
> > > + /* 1) this_cpu is not the appropriate cpu to perform load balancing
> > > + * at this level.
> > > + * 2) There is no busy sibling group to pull from.
> > > + * 3) This group is the busiest group.
> > > + * 4) This group is more busy than the avg busyness at this
> > > + * sched_domain.
> > > + * 5) The imbalance is within the specified limit.
> > > + * 6) Any rebalance would lead to ping-pong
> > > + */
> > > if (balance && !(*balance))
> > > goto ret;
> > >
> > > - if (!sds.busiest || sds.this_load >= sds.max_load
> > > - || sds.busiest_nr_running == 0)
> > > + if (!sds.busiest || sd.busiest_nr_running == 0)
> >                         ^^^^^^^^^^^^^^^^^^^^^
> > should have been sds.busiest_nr_running. Hence the build failure on tip.
> >
> > I think I missed compile testing this last patch.
> >
> > Ingo, could you revert commit 7b6340ef884aff69a54f8a530c73ad9da0a7c388 in
> > tip/balancing and commit the following patch instead?
>
> sure - i've amended it and started testing it locally. If it passes
> testing it should show up in tip:master.

Thanks!

Meanwhile I'll see if there are any regressions in 2.6.29 with
this patchset.
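
For reference, here is a minimal, self-contained sketch of the early-exit
checks that the quoted comment block describes, with the
sds.busiest_nr_running typo fixed. The struct layout and the no_imbalance()
helper below are assumptions made purely for illustration; they are not the
actual sched.c code.

struct sched_group;

/*
 * Illustrative only: field names are guessed from the quoted hunk,
 * not copied from kernel/sched.c.
 */
struct sd_lb_stats {
	struct sched_group *busiest;      /* busiest group, NULL if none */
	unsigned long this_load;          /* load of this_cpu's group */
	unsigned long max_load;           /* load of the busiest group */
	unsigned long busiest_nr_running; /* tasks running in the busiest group */
};

/* Return 1 when there is no imbalance to fix (cases 2 and 3 above). */
static int no_imbalance(const struct sd_lb_stats *sds)
{
	/* Case 2: no busy sibling group to pull from. */
	if (!sds->busiest || sds->busiest_nr_running == 0)
		return 1;

	/* Case 3: this group is itself the busiest group. */
	if (sds->this_load >= sds->max_load)
		return 1;

	return 0;
}

In find_busiest_group() itself these conditions branch to the balanced
exit path instead of returning a value, but they are the same checks
listed as cases 2 and 3 in the comment.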

>
> Ingo

--
Thanks and Regards
gautham