Re: [PATCH v6 3/3] sched/fair: Combine EAS check with overutilized access

From: Vincent Guittot
Date: Tue Mar 26 2024 - 10:13:49 EST


On Tue, 26 Mar 2024 at 13:26, Shrikanth Hegde <sshegde@xxxxxxxxxxxxx> wrote:
>
>
>
> On 3/26/24 1:56 PM, Vincent Guittot wrote:
> > On Tue, 26 Mar 2024 at 08:58, Ingo Molnar <mingo@xxxxxxxxxx> wrote:
> >>
> >>
> >> * Shrikanth Hegde <sshegde@xxxxxxxxxxxxx> wrote:
> >>
> >>> /*
> >>> - * Ensure that caller can do EAS. overutilized value
> >>> - * make sense only if EAS is enabled
> >>> + * overutilized value make sense only if EAS is enabled
> >>> */
> >>> -static inline int is_rd_overutilized(struct root_domain *rd)
> >>> +static inline int is_rd_not_overutilized(struct root_domain *rd)
> >>> {
> >>> - return READ_ONCE(rd->overutilized);
> >>> + return sched_energy_enabled() && !READ_ONCE(rd->overutilized);
> >>> }
> >>
> >> While adding the sched_energy_enabled() condition looks OK, the _not prefix
> >> is silly: putting logical operators into function names is far less
> >> readable than a !fn()...
> >>
> >>> - if (!is_rd_overutilized(rq->rd) && cpu_overutilized(rq->cpu))
> >>> + if (is_rd_not_overutilized(rq->rd) && cpu_overutilized(rq->cpu))
> >>
> >> Especially since we already have cpu_overutilized(). It's far more coherent
> >> to have the same basic attribute functions and put any negation into
> >> *actual* logical operators.
> >
> > I was concerned that the || in this case could defeat the purpose of
> > sched_energy_enabled(), but it will return early anyway.
> >
>
> > return !sched_energy_enabled() || READ_ONCE(rd->overutilized);
>
> I think this would work.
>
> >
> >>
> >> Thanks,
> >>
> >> Ingo
>
>
> If EAS is disabled, is_rd_overutilized() always returns true; since every user calls !is_rd_overutilized(), this results in no operation.
> If EAS is enabled, it reads the rd->overutilized value.
>
> Does this look correct?

yes looks good to me
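
For readers following along, here is a minimal standalone sketch (plain userspace C, not kernel code; eas_enabled and the stub sched_energy_enabled() are stand-ins for the real static key and root_domain) that checks the semantics agreed above: with the "!sched_energy_enabled() || ..." form, !is_rd_overutilized() behaves exactly like the explicit "sched_energy_enabled() && !overutilized" check for every combination of the two inputs.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct root_domain { int overutilized; };

static bool eas_enabled;	/* stand-in for the EAS static key */

static bool sched_energy_enabled(void)
{
	return eas_enabled;
}

/* v7 helper: always true when EAS is disabled */
static int is_rd_overutilized(struct root_domain *rd)
{
	return !sched_energy_enabled() || rd->overutilized;
}

int main(void)
{
	struct root_domain rd;
	int eas, ou;

	for (eas = 0; eas <= 1; eas++) {
		for (ou = 0; ou <= 1; ou++) {
			eas_enabled = eas;
			rd.overutilized = ou;

			/* old explicit check vs. new helper-based check */
			bool old_check = sched_energy_enabled() && !rd.overutilized;
			bool new_check = !is_rd_overutilized(&rd);

			printf("EAS=%d overutilized=%d -> act=%d\n", eas, ou, new_check);
			assert(old_check == new_check);
		}
	}
	return 0;
}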

> -------------------------------------------------------------------------------------
> From 3adc0d58f87d8a2e96196a0f47bcd0d2afd057ae Mon Sep 17 00:00:00 2001
> From: Shrikanth Hegde <sshegde@xxxxxxxxxxxxx>
> Date: Wed, 6 Mar 2024 03:58:58 -0500
> Subject: [PATCH v7 3/3] sched/fair: Combine EAS check with overutilized access
>
> Access to the overutilized field is always paired with
> sched_energy_enabled() in the pattern:
>
> 	if (sched_energy_enabled && !overutilized)
> 		do something
>
> So modify the helper function so that callers can express this pattern
> directly. The code becomes more readable, as it now says: do something
> when the root domain is not overutilized. The helper always returns
> true when EAS is disabled.
>
> No change in functionality intended.
>
> Suggested-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
> Signed-off-by: Shrikanth Hegde <sshegde@xxxxxxxxxxxxx>
> ---
> kernel/sched/fair.c | 20 +++++++-------------
> 1 file changed, 7 insertions(+), 13 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 24a7530a7d3f..e222e3ad4cfe 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6686,12 +6686,11 @@ static inline bool cpu_overutilized(int cpu)
> }
>
> /*
> - * Ensure that caller can do EAS. overutilized value
> - * make sense only if EAS is enabled
> + * overutilized value makes sense only if EAS is enabled
>  */
> static inline int is_rd_overutilized(struct root_domain *rd)
> {
> -	return READ_ONCE(rd->overutilized);
> +	return !sched_energy_enabled() || READ_ONCE(rd->overutilized);
> }
>
> static inline void set_rd_overutilized_status(struct root_domain *rd,
> @@ -6710,8 +6709,6 @@ static inline void check_update_overutilized_status(struct rq *rq)
> 	 * overutilized field is used for load balancing decisions only
> 	 * if energy aware scheduler is being used
> 	 */
> -	if (!sched_energy_enabled())
> -		return;
>
> 	if (!is_rd_overutilized(rq->rd) && cpu_overutilized(rq->cpu))
> 		set_rd_overutilized_status(rq->rd, SG_OVERUTILIZED);
> @@ -7999,7 +7996,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
>
> 	rcu_read_lock();
> 	pd = rcu_dereference(rd->pd);
> -	if (!pd || is_rd_overutilized(rd))
> +	if (!pd)
> 		goto unlock;
>
> /*
> @@ -8202,7 +8199,7 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int wake_flags)
> 		    cpumask_test_cpu(cpu, p->cpus_ptr))
> 			return cpu;
>
> -		if (sched_energy_enabled()) {
> +		if (!is_rd_overutilized(this_rq()->rd)) {
> 			new_cpu = find_energy_efficient_cpu(p, prev_cpu);
> 			if (new_cpu >= 0)
> 				return new_cpu;
> @@ -10903,12 +10900,9 @@ static struct sched_group *sched_balance_find_src_group(struct lb_env *env)
> 	if (busiest->group_type == group_misfit_task)
> 		goto force_balance;
>
> -	if (sched_energy_enabled()) {
> -		struct root_domain *rd = env->dst_rq->rd;
> -
> -		if (rcu_dereference(rd->pd) && !is_rd_overutilized(rd))
> -			goto out_balanced;
> -	}
> +	if (!is_rd_overutilized(env->dst_rq->rd) &&
> +	    rcu_dereference(env->dst_rq->rd->pd))
> +		goto out_balanced;
>
> 	/* ASYM feature bypasses nice load balance check */
> 	if (busiest->group_type == group_asym_packing)
> --
> 2.39.3
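
As a companion to the patch above, a second standalone sketch (again plain userspace C, not kernel code; read_overutilized() is a hypothetical counting wrapper added purely for illustration, the kernel uses READ_ONCE() directly) of the short-circuit point raised earlier in the thread: when EAS is disabled, the || form never reads rd->overutilized, so the combined helper bails out as early as the removed sched_energy_enabled() check did.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct root_domain { int overutilized; };

static bool eas_enabled;	/* stand-in for the EAS static key */
static int overutilized_reads;	/* counts accesses to rd->overutilized */

static bool sched_energy_enabled(void)
{
	return eas_enabled;
}

/* hypothetical wrapper, only here to observe the access */
static int read_overutilized(struct root_domain *rd)
{
	overutilized_reads++;
	return rd->overutilized;
}

static int is_rd_overutilized(struct root_domain *rd)
{
	return !sched_energy_enabled() || read_overutilized(rd);
}

int main(void)
{
	struct root_domain rd = { .overutilized = 1 };

	eas_enabled = false;
	(void)is_rd_overutilized(&rd);
	assert(overutilized_reads == 0);	/* || short-circuited, flag never read */

	eas_enabled = true;
	(void)is_rd_overutilized(&rd);
	assert(overutilized_reads == 1);	/* EAS enabled: flag is read once */

	printf("reads of rd->overutilized: %d\n", overutilized_reads);
	return 0;
}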