Re: [PATCH v4 3/4] sched/fair: Rework sched_use_asym_prio() and sched_asym_prefer()

From: Ricardo Neri
Date: Fri Feb 09 2024 - 20:15:56 EST


On Fri, Feb 09, 2024 at 07:08:52PM +0800, kuiliang Shi wrote:
>
>
> On 2/9/24 10:47 AM, Ricardo Neri wrote:
> > On Wed, Feb 07, 2024 at 11:47:03AM +0800, alexs@xxxxxxxxxx wrote:
> >> From: Alex Shi <alexs@xxxxxxxxxx>
> >>
> >> sched_use_asym_prio() sched_asym_prefer() are used together in various
> >
> > s/prio() sched/prio() and sched/
>
> Will take it, Thanks!
>
> >
> >> places. Consolidate them into a single function sched_asym().
> >>
> >> The existing sched_asym() is only used when collecting statistics
> >> of a scheduling group. Rename it as sched_group_asym().
> >> This makes the code easier to read. No functional changes.
> >>
> >> Signed-off-by: Alex Shi <alexs@xxxxxxxxxx>
> >> To: Ricardo Neri <ricardo.neri-calderon@xxxxxxxxxxxxxxx>
> >> To: Valentin Schneider <vschneid@xxxxxxxxxx>
> >> To: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
> >> To: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> >> To: Ingo Molnar <mingo@xxxxxxxxxx>
> >> ---
> >> kernel/sched/fair.c | 37 ++++++++++++++++++++-----------------
> >> 1 file changed, 20 insertions(+), 17 deletions(-)
> >>
> >> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> >> index 607dc310b355..942b6358f683 100644
> >> --- a/kernel/sched/fair.c
> >> +++ b/kernel/sched/fair.c
> >> @@ -9746,8 +9746,18 @@ static bool sched_use_asym_prio(struct sched_domain *sd, int cpu)
> >> return sd->flags & SD_SHARE_CPUCAPACITY || is_core_idle(cpu);
> >> }
> >>
> >> +static inline bool sched_asym(struct sched_domain *sd, int dst_cpu, int src_cpu)
> >> +{
> >> + /*
> >> + * First check if @dst_cpu can do asym_packing load balance. Only do it
> >> + * if it has higher priority than @src_cpu.
> >> + */
> >> + return sched_use_asym_prio(sd, dst_cpu) &&
> >> + sched_asym_prefer(dst_cpu, src_cpu);
> >> +}
> >> +
> >> /**
> >> - * sched_asym - Check if the destination CPU can do asym_packing load balance
> >> + * sched_group_asym - Check if the destination CPU can do asym_packing balance
> >> * @env: The load balancing environment
> >> * @sgs: Load-balancing statistics of the candidate busiest group
> >> * @group: The candidate busiest group
> >
> > After renaming and changing this function now its documentation has become
> > obsolete. Can you update it?
>
> Since sched_use_asym_prio() and sched_asym_prefer() both have detailed comments describing their behavior, as does this function, could we just remove the now-obsolete comments here?
>
>
> @@ -9765,14 +9765,6 @@ static inline bool sched_asym(struct sched_domain *sd, int dst_cpu, int src_cpu)
> * @env::dst_cpu can do asym_packing if it has higher priority than the
> * preferred CPU of @group.
> *
> - * SMT is a special case. If we are balancing load between cores, @env::dst_cpu
> - * can do asym_packing balance only if all its SMT siblings are idle. Also, it
> - * can only do it if @group is an SMT group and has exactly on busy CPU. Larger
> - * imbalances in the number of CPUS are dealt with in find_busiest_group().
> - *
> - * If we are balancing load within an SMT core, or at PKG domain level, always
> - * proceed.
> - *
> * Return: true if @env::dst_cpu can do with asym_packing load balance. False
> * otherwise.

Fine with me.

With this change,
Reviewed-by: Ricardo Neri <ricardo.neri-calderon@xxxxxxxxxxxxxxx>

Tested on Alder Lake and Meteor Lake, which use asym_packing.

Tested-by: Ricardo Neri <ricardo.neri-calderon@xxxxxxxxxxxxxxx>
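
For readers following along outside the kernel tree, here is a small standalone sketch (ordinary userspace C, not kernel code) of the consolidation the patch performs: a call site that previously combined sched_use_asym_prio() and sched_asym_prefer() by hand can instead call a single sched_asym() helper. The per-CPU priority and idleness tables below are hypothetical stand-ins for the real arch/topology data.

/*
 * Standalone sketch of the sched_asym() consolidation.
 * The cpu_priority[] and core_idle[] data is made up for illustration.
 */
#include <stdbool.h>
#include <stdio.h>

#define SD_SHARE_CPUCAPACITY 0x1

struct sched_domain {
	int flags;
};

/* Hypothetical per-CPU data standing in for the real topology info. */
static const int  cpu_priority[] = { 2, 1, 3, 0 };
static const bool core_idle[]    = { true, false, true, true };

static bool is_core_idle(int cpu)
{
	return core_idle[cpu];
}

static bool sched_use_asym_prio(struct sched_domain *sd, int cpu)
{
	return (sd->flags & SD_SHARE_CPUCAPACITY) || is_core_idle(cpu);
}

static bool sched_asym_prefer(int a, int b)
{
	return cpu_priority[a] > cpu_priority[b];
}

/* The consolidated helper, mirroring the shape of the one in the patch. */
static bool sched_asym(struct sched_domain *sd, int dst_cpu, int src_cpu)
{
	return sched_use_asym_prio(sd, dst_cpu) &&
	       sched_asym_prefer(dst_cpu, src_cpu);
}

int main(void)
{
	struct sched_domain pkg = { .flags = 0 };

	/* dst CPU 2 outranks src CPU 1 and its core is idle: pull allowed. */
	printf("%d\n", sched_asym(&pkg, 2, 1));	/* prints 1 */
	/* dst CPU 1 has lower priority and a busy core: no pull. */
	printf("%d\n", sched_asym(&pkg, 1, 2));	/* prints 0 */
	return 0;
}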