Re: [PATCH v1 1/3] sched/debug: use int type and fix wrong print for rq->nr_uninterruptible

From: Peter Zijlstra
Date: Tue Apr 25 2023 - 08:49:19 EST


On Wed, Apr 12, 2023 at 11:07:29AM +0800, Yan Yan (Cailing) wrote:
> commit e6fe3f422be1 ("sched: Make multiple runqueue task counters
> 32-bit") changed rq->nr_uninterruptible from 'unsigned long' to
> 'unsigned int', but left the print format for
> /sys/kernel/debug/sched/debug and for the console unchanged, so the
> value is now printed wrongly.
>
> For example:
> With the current 'unsigned int' type and the value 0xfffffff7, the
> print goes through this macro,
> "do { \
> if (sizeof(rq->x) == 4) \
> SEQ_printf(m, " .%-30s: %ld\n", #x, (long)(rq->x)); \

Arguably this is wrong, because LP64 has sizeof(long) != 4. This ought
to be something like:

+ SEQ_printf(m, " .%-30s: %d\n", #x, (int)(rq->x)); \

> else \
> SEQ_printf(m, " .%-30s: %Ld\n", #x, (long long)(rq->x));\
> } while (0)"
>
> On 64-bit machines, printing (long)(rq->x) then gives 4294967287,
> whereas the old 'unsigned long' type printed -9.
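
A tiny userspace demo of the difference (a sketch; assumes a typical
LP64 Linux toolchain, where these out-of-range signed conversions
simply wrap):

#include <stdio.h>

int main(void)
{
	unsigned int  new_ctr = 0xfffffff7;           /* 32-bit counter holding -9 */
	unsigned long old_ctr = 0xfffffffffffffff7UL; /* old 64-bit counter, also -9 */

	printf("%ld\n", (long)new_ctr); /* zero-extends: 4294967287 */
	printf("%ld\n", (long)old_ctr); /* same width: -9 */
	printf("%d\n",  (int)new_ctr);  /* suggested cast: -9 */
	return 0;
}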
>
> The other places that use this value already cast it to int to get
> the expected result, so convert the type from 'unsigned int' to int.
>
> Signed-off-by: Yan Yan <yanyan.yan@xxxxxxxxxxxx>
> ---
> kernel/sched/loadavg.c | 2 +-
> kernel/sched/sched.h | 2 +-
> 2 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/loadavg.c b/kernel/sched/loadavg.c
> index 52c8f8226b0d..b9867495fe8b 100644
> --- a/kernel/sched/loadavg.c
> +++ b/kernel/sched/loadavg.c
> @@ -80,7 +80,7 @@ long calc_load_fold_active(struct rq *this_rq, long adjust)
> long nr_active, delta = 0;
>
> nr_active = this_rq->nr_running - adjust;
> - nr_active += (int)this_rq->nr_uninterruptible;
> + nr_active += this_rq->nr_uninterruptible;
>
> if (nr_active != this_rq->calc_load_active) {
> delta = nr_active - this_rq->calc_load_active;
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 060616944d7a..23c643948331 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -1006,7 +1006,7 @@ struct rq {
> * one CPU and if it got migrated afterwards it may decrease
> * it on another CPU. Always updated under the runqueue lock:
> */
> - unsigned int nr_uninterruptible;
> + int nr_uninterruptible;

Bit meh on this; but I suppose since calc_load_fold_active() already had
an explicit cast, it won't make the 'but signed overflow is UB' crowd
more unhappy.
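
For context on how a runqueue's counter ends up at a value like
0xfffffff7 in the first place: per the comment quoted above, a task can
increment nr_uninterruptible on one CPU and, after being migrated,
decrement it on another, so a single runqueue's value can legitimately
go negative; only the sum over all runqueues means anything. A minimal
userspace sketch of that (not kernel code):

#include <stdio.h>

int main(void)
{
	int nr_uninterruptible[2] = { 0, 0 };

	nr_uninterruptible[0] += 1;	/* slept uninterruptibly on CPU0's rq */
	nr_uninterruptible[1] -= 1;	/* woke on CPU1's rq after migration */

	printf("cpu0=%d cpu1=%d sum=%d\n",
	       nr_uninterruptible[0], nr_uninterruptible[1],
	       nr_uninterruptible[0] + nr_uninterruptible[1]);
	/* -> cpu0=1 cpu1=-1 sum=0 */
	return 0;
}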