Re: [PATCH v3 1/2] sched/fair: Introduce short duration task check

From: Honglei Wang
Date: Fri Dec 02 2022 - 02:44:36 EST

On 2022/12/1 16:44, Chen Yu wrote:
Introduce a short-duration task check, as there is a requirement
to leverage this attribute for better task placement.

There are several choices of metrics that could be used to
indicate if a task is a short-duration task.

At first thought, (p->se.sum_exec_runtime / p->nvcsw) could be
used to measure the task duration. However, such a formula weighs
the distant past too heavily: every run since fork contributes
equally to the average. Ideally, old activity should decay and
not affect the current status too much.
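
For illustration only, a user-space toy (not kernel code) that feeds
both averages a long history of 10ms slices followed by a burst of
100us slices; the lifetime average barely moves, while a decaying
average converges to the recent behavior almost immediately:

#include <stdio.h>

/* Lifetime average: every switch since fork weighs the same. */
static unsigned long long lifetime_avg(unsigned long long sum,
				       unsigned long long n)
{
	return n ? sum / n : 0;
}

/* Decaying average: each sample moves the average 1/8 of the way
 * toward the new value, so old history fades out. */
static void decay_avg(unsigned long long *avg, unsigned long long sample)
{
	*avg += ((long long)(sample - *avg)) / 8;
}

int main(void)
{
	unsigned long long sum = 0, n = 0, davg = 0, i;

	for (i = 0; i < 1000; i++) {	/* 1000 slices of 10ms */
		sum += 10000; n++;
		decay_avg(&davg, 10000);
	}
	for (i = 0; i < 100; i++) {	/* then 100 slices of 100us */
		sum += 100; n++;
		decay_avg(&davg, 100);
	}

	/* lifetime avg stays ~9100us; decayed avg is already ~100us */
	printf("lifetime: %lluus, decayed: %lluus\n",
	       lifetime_avg(sum, n), davg);
	return 0;
}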

Although something based on PELT could be used, se.util_avg might
not be appropriate to describe the task duration:
1. Task p1 and task p2 are doing frequent ping-pong scheduling on
one CPU; both have a short duration, but the util_avg of each
can be up to 50%.
2. Suppose a task lasting less than 4ms is regarded as a short task.
If task p3 runs for 6ms and sleeps for 32ms, p3 should not be a
short-duration task. However, PELT would decay p3's accumulated
running time from 6ms to 3ms, because 32ms is the half-life in PELT.
As a result, p3 would be incorrectly treated as a short task; a
sketch of this decay arithmetic follows.
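
A back-of-the-envelope check of example 2 (a deliberate
simplification; real PELT decays per ~1ms period, with y^32 = 0.5):

#include <stdio.h>
#include <math.h>

int main(void)			/* build: cc pelt.c -lm */
{
	/* PELT decay factor y per ~1ms period, 32ms half-life */
	double y = pow(0.5, 1.0 / 32.0);
	double runtime_ms = 6.0;	/* p3 runs for 6ms ... */
	int sleep_ms = 32;		/* ... then sleeps for 32ms */

	/* 6ms decays to 3ms: below a 4ms short-task threshold */
	printf("%.1fms -> %.1fms after %dms of sleep\n",
	       runtime_ms, runtime_ms * pow(y, sleep_ms), sleep_ms);
	return 0;
}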

A similar feature to track the duration of a task once existed, in
commit ad4b78bbcbab ("sched: Add new wakeup preemption mode:
WAKEUP_RUNNING"). Unfortunately, it was reverted because it was only
an experiment. Pick that idea up again, by recording the average
duration when a task voluntarily switches out, and introduce
SIS_SHORT to control this strategy.

The threshold of short duration reuses sysctl_sched_min_granularity,
so it can be tuned by the user. Ideally there should be a dedicated
parameter for the threshold, but that might introduce complexity.
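
For a sense of scale: with the default logarithmic tunable scaling,
sysctl_sched_min_granularity is the 0.75ms base multiplied by
1 + ilog2(ncpus), so (if I have the scaling right) the effective
short-task threshold would be roughly:

#include <stdio.h>

static unsigned int ilog2_u(unsigned int x)
{
	unsigned int r = 0;

	while (x >>= 1)
		r++;
	return r;
}

int main(void)
{
	unsigned long long base_ns = 750000ULL;	/* 0.75ms base */
	unsigned int cpus;

	/* 1 CPU -> 0.75ms, 8 CPUs -> 3ms, 64 CPUs -> 5.25ms */
	for (cpus = 1; cpus <= 64; cpus *= 2)
		printf("%2u CPUs: ~%llu ns\n",
		       cpus, base_ns * (1 + ilog2_u(cpus)));
	return 0;
}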

Suggested-by: Tim Chen <tim.c.chen@xxxxxxxxx>
Suggested-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
Signed-off-by: Chen Yu <yu.c.chen@xxxxxxxxx>
---
include/linux/sched.h | 4 ++++
kernel/sched/core.c | 2 ++
kernel/sched/fair.c | 17 +++++++++++++++++
kernel/sched/features.h | 1 +
4 files changed, 24 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index ffb6eb55cd13..64b7acb77a11 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -558,6 +558,10 @@ struct sched_entity {

	u64				nr_migrations;

+	u64				prev_sum_exec_runtime_vol;
+	/* average duration of a task */
+	u64				dur_avg;
+
#ifdef CONFIG_FAIR_GROUP_SCHED
	int				depth;
	struct sched_entity		*parent;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index daff72f00385..c5202f1be3f7 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4348,6 +4348,8 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
	p->se.prev_sum_exec_runtime = 0;
	p->se.nr_migrations = 0;
	p->se.vruntime = 0;
+	p->se.dur_avg = 0;
+	p->se.prev_sum_exec_runtime_vol = 0;
	INIT_LIST_HEAD(&p->se.group_node);

#ifdef CONFIG_FAIR_GROUP_SCHED
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e4a0b8bd941c..a4b314b664f8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6200,6 +6200,16 @@ static int wake_wide(struct task_struct *p)
	return 1;
}

+/*
+ * If a task switches in and then voluntarily relinquishes the
+ * CPU quickly, it is regarded as a short duration task.
+ */
+static inline int is_short_task(struct task_struct *p)
+{
+	return sched_feat(SIS_SHORT) &&
+	       (p->se.dur_avg <= sysctl_sched_min_granularity);
+}
+

Hi Yu,

I still have a bit of concern about the sysctl_sched_min_granularity stuff. This knob can be set to different values, which will change the behavior of this patch and make things not totally under control.

Not sure if we can add a new knob for this. The test results show good improvement for short tasks, and with such a knob, admins would be able to tune the system based on their own view of what a 'short task' is.
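
For illustration, such a dedicated knob could look something like the
sketch below; sysctl_sched_short_task_ns is a made-up name, not part
of this patch, and the default is just a placeholder:

/* Hypothetical: a dedicated "short task" threshold for admins. */
unsigned int sysctl_sched_short_task_ns = 750000;	/* 0.75ms */

static inline int is_short_task(struct task_struct *p)
{
	return sched_feat(SIS_SHORT) &&
	       (p->se.dur_avg <= sysctl_sched_short_task_ns);
}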

Thanks,
Honglei

/*
* The purpose of wake_affine() is to quickly determine on which CPU we can run
* soonest. For the purpose of speed we only consider the waking and previous
@@ -7680,6 +7690,13 @@ static void put_prev_task_fair(struct rq *rq, struct task_struct *prev)
	struct sched_entity *se = &prev->se;
	struct cfs_rq *cfs_rq;

+	if (sched_feat(SIS_SHORT) && !prev->on_rq) {
+		u64 this_dur = se->sum_exec_runtime - se->prev_sum_exec_runtime_vol;
+
+		se->prev_sum_exec_runtime_vol = se->sum_exec_runtime;
+		update_avg(&se->dur_avg, this_dur);
+	}
+
	for_each_sched_entity(se) {
		cfs_rq = cfs_rq_of(se);
		put_prev_entity(cfs_rq, se);
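
For reference, update_avg() is an existing scheduler helper (not
added by this patch); if I recall its body correctly, it is a simple
EWMA with a 1/8 gain, so dur_avg is dominated by the last few dozen
voluntary switch-outs:

static void update_avg(u64 *avg, u64 sample)
{
	s64 diff = sample - *avg;

	*avg += diff / 8;
}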
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index ee7f23c76bd3..efdc29c42161 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -62,6 +62,7 @@ SCHED_FEAT(TTWU_QUEUE, true)
*/
SCHED_FEAT(SIS_PROP, false)
SCHED_FEAT(SIS_UTIL, true)
+SCHED_FEAT(SIS_SHORT, true)
/*
* Issue a WARN when we do multiple update_rq_clock() calls