[PATCH v3 1/5] sched/deadline: Fix bandwidth reclaim equation in GRUB

From: Vineeth Pillai
Date: Sun May 14 2023 - 22:57:42 EST


According to the GRUB[1] rule, the runtime is depleted as:
"dq = -max{u, (1 - Uinact - Uextra)} dt" (1)

To guarantee that deadline tasks do not starve lower class tasks, we
do not allocate the full bandwidth of the cpu to deadline tasks. The
maximum bandwidth usable by deadline tasks is denoted by "Umax".
Considering Umax, equation (1) becomes:
"dq = -(max{u, (Umax - Uinact - Uextra)} / Umax) dt" (2)

The current implementation of equation (2) has a minor bug: only the
task bandwidth "u" is scaled by 1/Umax, while the reclaimable part is
still computed against the full bandwidth, i.e. it effectively does
"dq = -max{u / Umax, (1 - Uinact - Uextra)} dt"
This patch fixes the equation and also fixes the precision issue by
using div64_u64 instead of the shifted bw_ratio multiplication.
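
With the same illustrative numbers as above, the buggy equation yields:
"dq = -max{0.7 / 0.95, (1 - 0 - 0.25)} dt = -0.75 dt"
so the 7ms runtime lasts only 7 / 0.75 = 9.33ms, i.e. ~93.3% of the
10ms period, which is exactly the utilization measured in RUN 1 of
the 6.3.0 tests below.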

The reclamation logic is verified by a sample program which creates
multiple deadline threads and observes their utilization. The tests
were run on an isolated cpu (isolcpus=3) of a 4 cpu system.
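
A minimal sketch of one such test thread is shown below. It is
hypothetical (RUN 1 parameters; the real program also samples its own
utilization), but the sched_setattr() interface and flags are the
standard ones:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/sched.h>	/* SCHED_DEADLINE, SCHED_FLAG_RECLAIM */
#include <linux/sched/types.h>	/* struct sched_attr */

int main(void)
{
	struct sched_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.sched_policy = SCHED_DEADLINE;
	attr.sched_flags = SCHED_FLAG_RECLAIM;	/* opt in to GRUB reclaim */
	attr.sched_runtime = 7000000;		/*  7ms, in ns */
	attr.sched_deadline = 10000000;		/* 10ms */
	attr.sched_period = 10000000;		/* 10ms */

	/* no glibc wrapper here; invoke the syscall directly (needs root) */
	if (syscall(SYS_sched_setattr, 0, &attr, 0)) {
		perror("sched_setattr");
		return 1;
	}

	for (;;)
		;	/* burn runtime; utilization is observed externally */
}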

Tests on 6.3.0
==============

RUN 1: runtime=7ms, deadline=period=10ms, RT capacity = 95%
TID[693]: RECLAIM=1, (r=7ms, d=10ms, p=10ms), Util: 93.33
TID[693]: RECLAIM=1, (r=7ms, d=10ms, p=10ms), Util: 93.35
TID[693]: RECLAIM=1, (r=7ms, d=10ms, p=10ms), Util: 93.35
TID[693]: RECLAIM=1, (r=7ms, d=10ms, p=10ms), Util: 93.29

RUN 2: runtime=1ms, deadline=period=100ms, RT capacity = 95%
TID[708]: RECLAIM=1, (r=1ms, d=100ms, p=100ms), Util: 16.69
TID[708]: RECLAIM=1, (r=1ms, d=100ms, p=100ms), Util: 16.69
TID[708]: RECLAIM=1, (r=1ms, d=100ms, p=100ms), Util: 16.70

RUN 3: 2 tasks
Task 1: runtime=1ms, deadline=period=10ms
Task 2: runtime=1ms, deadline=period=100ms
TID[631]: RECLAIM=1, (r=1ms, d=10ms, p=10ms), Util: 62.67
TID[632]: RECLAIM=1, (r=1ms, d=100ms, p=100ms), Util: 6.37
TID[631]: RECLAIM=1, (r=1ms, d=10ms, p=10ms), Util: 62.38
TID[632]: RECLAIM=1, (r=1ms, d=100ms, p=100ms), Util: 6.19
TID[631]: RECLAIM=1, (r=1ms, d=10ms, p=10ms), Util: 62.60
TID[632]: RECLAIM=1, (r=1ms, d=100ms, p=100ms), Util: 6.23
TID[631]: RECLAIM=1, (r=1ms, d=10ms, p=10ms), Util: 62.43

As seen above, the reclamation doesn't reclaim the maximum allowed
bandwidth, and as the bandwidth of the tasks gets smaller, the
reclaimed bandwidth comes down as well.

Tests with this patch applied
=============================

RUN 1: runtime=7ms, deadline=period=10ms, RT capacity = 95%
TID[667]: RECLAIM=1, (r=7ms, d=10ms, p=10ms), Util: 95.01
TID[667]: RECLAIM=1, (r=7ms, d=10ms, p=10ms), Util: 95.00

RUN 2: runtime=1ms, deadline=period=100ms, RT capacity = 95%
TID[641]: RECLAIM=1, (r=1ms, d=100ms, p=100ms), Util: 94.86
TID[641]: RECLAIM=1, (r=1ms, d=100ms, p=100ms), Util: 95.06

RUN 3: 2 tasks
Task 1: runtime=1ms, deadline=period=10ms
Task 2: runtime=1ms, deadline=period=100ms
TID[636]: RECLAIM=1, (r=1ms, d=10ms, p=10ms), Util: 86.44
TID[637]: RECLAIM=1, (r=1ms, d=100ms, p=100ms), Util: 8.67
TID[636]: RECLAIM=1, (r=1ms, d=10ms, p=10ms), Util: 86.34
TID[637]: RECLAIM=1, (r=1ms, d=100ms, p=100ms), Util: 8.61

Running tasks on all cpus, allowing for migration, also showed that
the utilization is reclaimed to the maximum. Running 10 tasks with
SCHED_FLAG_RECLAIM on 3 cpus, top shows:
%Cpu0 : 94.6 us, 0.0 sy, 0.0 ni, 5.4 id, 0.0 wa
%Cpu1 : 95.2 us, 0.0 sy, 0.0 ni, 4.8 id, 0.0 wa
%Cpu2 : 95.8 us, 0.0 sy, 0.0 ni, 4.2 id, 0.0 wa

[1]: Abeni, Luca & Lipari, Giuseppe & Parri, Andrea & Sun, Youcheng.
(2015). Parallel and sequential reclaiming in multicore
real-time global scheduling.

Signed-off-by: Vineeth Pillai (Google) <vineeth@xxxxxxxxxxxxxxx>
---
kernel/sched/deadline.c | 72 ++++++++++++++++++++---------------------
kernel/sched/sched.h    |  6 ++--
2 files changed, 39 insertions(+), 39 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 71b24371a6f7..91451c1c7e52 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -487,7 +487,7 @@ static inline int is_leftmost(struct task_struct *p, struct dl_rq *dl_rq)
return rb_first_cached(&dl_rq->root) == &dl_se->rb_node;
}

-static void init_dl_rq_bw_ratio(struct dl_rq *dl_rq);
+static void init_dl_rq_bw(struct dl_rq *dl_rq);

void init_dl_bandwidth(struct dl_bandwidth *dl_b, u64 period, u64 runtime)
{
@@ -523,7 +523,7 @@ void init_dl_rq(struct dl_rq *dl_rq)

dl_rq->running_bw = 0;
dl_rq->this_bw = 0;
- init_dl_rq_bw_ratio(dl_rq);
+ init_dl_rq_bw(dl_rq);
}

#ifdef CONFIG_SMP
@@ -1261,43 +1261,47 @@ int dl_runtime_exceeded(struct sched_dl_entity *dl_se)

/*
* This function implements the GRUB accounting rule:
- * according to the GRUB reclaiming algorithm, the runtime is
- * not decreased as "dq = -dt", but as
- * "dq = -max{u / Umax, (1 - Uinact - Uextra)} dt",
- * where u is the utilization of the task, Umax is the maximum reclaimable
- * utilization, Uinact is the (per-runqueue) inactive utilization, computed
- * as the difference between the "total runqueue utilization" and the
- * runqueue active utilization, and Uextra is the (per runqueue) extra
- * reclaimable utilization.
- * Since rq->dl.running_bw and rq->dl.this_bw contain utilizations
- * multiplied by 2^BW_SHIFT, the result has to be shifted right by
- * BW_SHIFT.
- * Since rq->dl.bw_ratio contains 1 / Umax multiplied by 2^RATIO_SHIFT,
- * dl_bw is multiped by rq->dl.bw_ratio and shifted right by RATIO_SHIFT.
+ * As per the GRUB rule, the runtime is not decreased as "dq = -dt", but as
+ * "dq = -max{u, (1 - Uinact - Uextra)} dt",
+ * where:
+ * u: Bandwidth of the task.
+ * running_bw: Total bandwidth of tasks in active state for this rq.
+ * this_bw: Reserved bandwidth for this rq. Includes active and
+ * inactive bandwidth for this rq.
+ * Uinact: Inactive utilization (this_bw - running_bw)
+ * Umax: Max usable bandwidth for DL. Currently
+ * = sched_rt_runtime_us / sched_rt_period_us
+ * Uextra: Extra bandwidth not reserved:
+ * = Umax - \Sum(u_i / #cpus in the root domain)
+ * u_i: Bandwidth of an admitted dl task in the
+ * root domain.
+ *
+ * Deadline tasks are not allowed to use the whole bandwidth of the cpu,
+ * but only a portion of it denoted by "Umax". So the equation becomes:
+ * "dq = -(max{u, (Umax - Uinact - Uextra)} / Umax) dt",
+ *
* Since delta is a 64 bit variable, to have an overflow its value
* should be larger than 2^(64 - 20 - 8), which is more than 64 seconds.
* So, overflow is not an issue here.
*/
static u64 grub_reclaim(u64 delta, struct rq *rq, struct sched_dl_entity *dl_se)
{
- u64 u_inact = rq->dl.this_bw - rq->dl.running_bw; /* Utot - Uact */
u64 u_act;
- u64 u_act_min = (dl_se->dl_bw * rq->dl.bw_ratio) >> RATIO_SHIFT;
+ u64 u_inact = rq->dl.this_bw - rq->dl.running_bw; /* Utot - Uact */

/*
- * Instead of computing max{u * bw_ratio, (1 - u_inact - u_extra)},
+ * Instead of computing max{u, (rq->dl.max_bw - u_inact - u_extra)},
* we compare u_inact + rq->dl.extra_bw with
- * 1 - (u * rq->dl.bw_ratio >> RATIO_SHIFT), because
- * u_inact + rq->dl.extra_bw can be larger than
- * 1 * (so, 1 - u_inact - rq->dl.extra_bw would be negative
- * leading to wrong results)
+ * rq->dl.max_bw - u, because u_inact + rq->dl.extra_bw can be larger
+ * than rq->dl.max_bw (so, rq->dl.max_bw - u_inact - rq->dl.extra_bw
+ * would be negative leading to wrong results)
*/
- if (u_inact + rq->dl.extra_bw > BW_UNIT - u_act_min)
- u_act = u_act_min;
+ if (u_inact + rq->dl.extra_bw > rq->dl.max_bw - dl_se->dl_bw)
+ u_act = dl_se->dl_bw;
else
- u_act = BW_UNIT - u_inact - rq->dl.extra_bw;
+ u_act = rq->dl.max_bw - u_inact - rq->dl.extra_bw;

- return (delta * u_act) >> BW_SHIFT;
+ return div64_u64(delta * u_act, rq->dl.max_bw);
}

/*
@@ -2780,17 +2784,13 @@ int sched_dl_global_validate(void)
return ret;
}

-static void init_dl_rq_bw_ratio(struct dl_rq *dl_rq)
+static void init_dl_rq_bw(struct dl_rq *dl_rq)
{
- if (global_rt_runtime() == RUNTIME_INF) {
- dl_rq->bw_ratio = 1 << RATIO_SHIFT;
- dl_rq->extra_bw = 1 << BW_SHIFT;
- } else {
- dl_rq->bw_ratio = to_ratio(global_rt_runtime(),
- global_rt_period()) >> (BW_SHIFT - RATIO_SHIFT);
- dl_rq->extra_bw = to_ratio(global_rt_period(),
+ if (global_rt_runtime() == RUNTIME_INF)
+ dl_rq->max_bw = dl_rq->extra_bw = 1 << BW_SHIFT;
+ else
+ dl_rq->max_bw = dl_rq->extra_bw = to_ratio(global_rt_period(),
global_rt_runtime());
- }
}

void sched_dl_do_global(void)
@@ -2819,7 +2819,7 @@ void sched_dl_do_global(void)
raw_spin_unlock_irqrestore(&dl_b->lock, flags);

rcu_read_unlock_sched();
- init_dl_rq_bw_ratio(&cpu_rq(cpu)->dl);
+ init_dl_rq_bw(&cpu_rq(cpu)->dl);
}
}

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 3e8df6d31c1e..1bc7ae9ad349 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -754,10 +754,10 @@ struct dl_rq {
u64 extra_bw;

/*
- * Inverse of the fraction of CPU utilization that can be reclaimed
- * by the GRUB algorithm.
+ * Maximum available bandwidth for deadline tasks of this rq. This is
+ * used in the calculation of reclaimable bandwidth (GRUB).
*/
- u64 bw_ratio;
+ u64 max_bw;
};

#ifdef CONFIG_FAIR_GROUP_SCHED
--
2.40.1