Re: [PATCH v7 2/2] sched/fair: Introduce SIS_CURRENT to wake up short task on current CPU

From: Peter Zijlstra
Date: Wed Apr 26 2023 - 10:04:52 EST


On Sat, Apr 22, 2023 at 12:08:18AM +0800, Chen Yu wrote:

> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 4af5799b90fc..46c1321c0407 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6501,6 +6501,46 @@ static int wake_wide(struct task_struct *p)
> return 1;
> }
>
> +/*
> + * Wake up the task on current CPU, if the following conditions are met:
> + *
> + * 1. waker A is the only running task on this_cpu
> + * 2. A is a short duration task (waker will fall asleep soon)
> + * 3. wakee B is a short duration task (impact of B on A is minor)
> + * 4. A and B wake up each other alternately
> + */
> +static bool
> +wake_on_current(int this_cpu, struct task_struct *p)
> +{
> + if (!sched_feat(SIS_CURRENT))
> + return false;
> +
> + if (cpu_rq(this_cpu)->nr_running > 1)
> + return false;
> +
> + /*
> + * If a task switches in and then voluntarily relinquishes the
> + * CPU quickly, it is regarded as a short duration task. In that
> + * way, the short waker is likely to relinquish the CPU soon, which
> + * provides room for the wakee. Meanwhile, a short wakee would bring
> + * minor impact to the target rq. Putting the short waker and wakee
> + * together benefits cache-sharing task pairs and avoids migration
> + * overhead.
> + */
> + if (!current->se.dur_avg || ((current->se.dur_avg * 8) >= sysctl_sched_min_granularity))
> + return false;
> +
> + if (!p->se.dur_avg || ((p->se.dur_avg * 8) >= sysctl_sched_min_granularity))
> + return false;
> +
> + if (current->wakee_flips || p->wakee_flips)
> + return false;
> +
> + if (current->last_wakee != p || p->last_wakee != current)
> + return false;
> +
> + return true;
> +}

So I was going to play with this and found I needed to change things up,
since these sysctls no longer exist in my EEVDF branch.

And while I can easily do
's/sysctl_sched_min_granularity/sysctl_sched_base_slice/', it did make
me wonder if that's the right value to use.

min_gran/base_slice is related to how long we want a task to run before
switching, but that is not related to how long a task needs to run to
establish a cache footprint.

Would not sched_migration_cost be a better measure to compare against?
That is also used in task_hot() to prevent migrations.