Re: [External] Re: [PATCH v3 1/2] sched/core: Avoid obvious double update_rq_clock warning

From: Hao Jia
Date: Sat Apr 30 2022 - 03:31:56 EST




On 2022/4/29 Dietmar Eggemann wrote:
On 27/04/2022 10:00, Hao Jia wrote:

[...]

LGTM, comments are only nit-picks.

Since we directly use raw_spin_rq_lock() to acquire rq lock instead of
rq_lock(), there is no corresponding change to rq->clock_update_flags.
In particular, we have obtained the rq lock of other cores,

s/cores/CPUs, with SMT, a core can have multiple (logical) CPUs.

Thanks for your review comments.
I will do it in patch v4.
Thanks.


[...]

So we need to clear RQCF_UPDATED of rq->clock_update_flags synchronously
to avoid the WARN_DOUBLE_CLOCK warning.

Why do you mention `synchronously` here? Isn't this obvious? (1)

I will do it in patch v4.
Thanks.
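
For context, the warning under discussion fires in update_rq_clock() under CONFIG_SCHED_DEBUG. Roughly, from kernel/sched/core.c of this period (a sketch, quoted from the tree rather than from this patch):

void update_rq_clock(struct rq *rq)
{
	s64 delta;

	lockdep_assert_rq_held(rq);

	if (rq->clock_update_flags & RQCF_ACT_SKIP)
		return;

#ifdef CONFIG_SCHED_DEBUG
	/* Complain if the clock was already updated under this lock hold. */
	if (sched_feat(WARN_DOUBLE_CLOCK))
		SCHED_WARN_ON(rq->clock_update_flags & RQCF_UPDATED);
	rq->clock_update_flags |= RQCF_UPDATED;
#endif

	delta = sched_clock_cpu(cpu_of(rq)) - rq->clock;
	if (delta < 0)
		return;
	rq->clock += delta;
	update_rq_clock_task(rq, delta);
}

rq_lock() goes through rq_pin_lock(), which masks rq->clock_update_flags down to the SKIP bits and thereby clears RQCF_UPDATED; a bare raw_spin_rq_lock() does not, so a stale RQCF_UPDATED left over from a previous critical section trips the SCHED_WARN_ON() above.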


[...]

@@ -1833,6 +1833,7 @@ select_task_rq_dl(struct task_struct *p, int cpu, int flags)
 static void migrate_task_rq_dl(struct task_struct *p, int new_cpu __maybe_unused)
 {
 	struct rq *rq;
+	struct rq_flags rf;

-	struct rq *rq;
 	struct rq_flags rf;
+	struct rq *rq;

Use reverse fir tree order for variable declarations.
(./Documentation/process/maintainer-tip.rst)

I will do it in patch v4.
Thanks.
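
For reference, "reverse fir tree order" just means sorting local declarations longest line first. An illustrative (made-up) example:

	struct task_struct *curr;	/* longest declaration first */
	struct rq_flags rf;
	struct rq *rq;
	int cpu;			/* shortest last */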


[...]

+#ifdef CONFIG_SCHED_DEBUG
+/*
+ * In double_lock_balance()/double_rq_lock(), we use raw_spin_rq_lock() to acquire
+ * rq lock instead of rq_lock(). So at the end of these two functions we need to
+ * call double_rq_clock_clear_update() synchronously to clear RQCF_UPDATED of
                                        ^^^^^^^^^^^^^
(1)

+ * rq->clock_update_flags to avoid the WARN_DOUBLE_CLOCK warning.
+ */
+static inline void double_rq_clock_clear_update(struct rq *rq1, struct rq *rq2)
+{
+	rq1->clock_update_flags &= (RQCF_REQ_SKIP|RQCF_ACT_SKIP);
+	/*
+	 * If CONFIG_SMP is not defined, rq1 and rq2 are the same,
+	 * so we just clear RQCF_UPDATED of one of them.
+	 */

Maybe shorter:

/* rq1 == rq2 for !CONFIG_SMP, so just clear RQCF_UPDATED once. */

I will do it in patch v4.
Thanks.
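
Putting the pieces together, the helper plus its call site in double_rq_lock() would end up looking roughly like this in v4 (a sketch with the shortened comment; the !CONFIG_SCHED_DEBUG build would get an empty stub instead):

static inline void double_rq_clock_clear_update(struct rq *rq1, struct rq *rq2)
{
	rq1->clock_update_flags &= (RQCF_REQ_SKIP|RQCF_ACT_SKIP);
	/* rq1 == rq2 for !CONFIG_SMP, so just clear RQCF_UPDATED once. */
#ifdef CONFIG_SMP
	rq2->clock_update_flags &= (RQCF_REQ_SKIP|RQCF_ACT_SKIP);
#endif
}

void double_rq_lock(struct rq *rq1, struct rq *rq2)
{
	lockdep_assert_irqs_disabled();

	/* Lock ordering: always take the lower-ordered rq lock first. */
	if (rq_order_less(rq2, rq1))
		swap(rq1, rq2);

	raw_spin_rq_lock(rq1);
	if (__rq_lockp(rq1) != __rq_lockp(rq2))
		raw_spin_rq_lock_nested(rq2, SINGLE_DEPTH_NESTING);

	/* Clear RQCF_UPDATED, since raw_spin_rq_lock() did not. */
	double_rq_clock_clear_update(rq1, rq2);
}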

[...]

@@ -2543,14 +2564,15 @@ static inline int _double_lock_balance(struct rq *this_rq, struct rq *busiest)
 	__acquires(busiest->lock)
 	__acquires(this_rq->lock)
 {
-	if (__rq_lockp(this_rq) == __rq_lockp(busiest))
-		return 0;
-
-	if (likely(raw_spin_rq_trylock(busiest)))
+	if (__rq_lockp(this_rq) == __rq_lockp(busiest) ||
+		likely(raw_spin_rq_trylock(busiest))) {

Indentation:

	if (__rq_lockp(this_rq) == __rq_lockp(busiest) ||
-		likely(raw_spin_rq_trylock(busiest))) {
+	    likely(raw_spin_rq_trylock(busiest))) {

Thanks again for your review and suggestion.
I will do it in patch v4.
Thanks.
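
With the indentation fixed, the !CONFIG_PREEMPTION variant would read roughly as below (a sketch of the v4 shape): the continuation line is aligned under __rq_lockp, and double_rq_clock_clear_update() is called on both early-return paths.

static inline int _double_lock_balance(struct rq *this_rq, struct rq *busiest)
	__releases(this_rq->lock)
	__acquires(busiest->lock)
	__acquires(this_rq->lock)
{
	if (__rq_lockp(this_rq) == __rq_lockp(busiest) ||
	    likely(raw_spin_rq_trylock(busiest))) {
		double_rq_clock_clear_update(this_rq, busiest);
		return 0;
	}

	if (rq_order_less(this_rq, busiest)) {
		raw_spin_rq_lock_nested(busiest, SINGLE_DEPTH_NESTING);
		double_rq_clock_clear_update(this_rq, busiest);
		return 0;
	}

	/* Wrong lock order: drop this_rq's lock and retake both in order. */
	raw_spin_rq_unlock(this_rq);
	double_rq_lock(this_rq, busiest);

	return 1;
}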


[...]

Reviewed-by: Dietmar Eggemann <dietmar.eggemann@xxxxxxx>

Tested on Arm64 Juno with mainline & preempt-rt (linux-5.17.y-rt).