Re: PSI idle-shutoff

From: Suren Baghdasaryan
Date: Mon Oct 10 2022 - 16:59:58 EST


On Sun, Oct 9, 2022 at 6:17 AM Chengming Zhou
<zhouchengming@xxxxxxxxxxxxx> wrote:
>
> On 2022/10/9 20:41, Chengming Zhou wrote:
> > Hello,
> >
> > I just saw these emails, sorry for the late reply.
> >
> > On 2022/10/6 00:32, Suren Baghdasaryan wrote:
> >> On Sun, Oct 2, 2022 at 11:11 PM Suren Baghdasaryan <surenb@xxxxxxxxxx> wrote:
> >>>
> >>> On Fri, Sep 16, 2022 at 10:45 PM Suren Baghdasaryan <surenb@xxxxxxxxxx> wrote:
> >>>>
> >>>> On Wed, Sep 14, 2022 at 11:20 PM Pavan Kondeti
> >>>> <quic_pkondeti@xxxxxxxxxxx> wrote:
> >>>>>
> >>>>> On Tue, Sep 13, 2022 at 07:38:17PM +0530, Pavan Kondeti wrote:
> >>>>>> Hi
> >>>>>>
> >>>>>> Since psi_avgs_work()->collect_percpu_times()->get_recent_times()
> >>>>>> runs from a kworker thread, the PSI_NONIDLE condition would be observed,
> >>>>>> as there is a RUNNING task. So we would always end up re-arming the work.
> >>>>>>
> >>>>>> If the work is re-armed from psi_avgs_work() itself, the backing-off
> >>>>>> logic in psi_task_change() (which will be moved to psi_task_switch() soon)
> >>>>>> can't help: the work is already scheduled, so we don't do anything there.
> >>>>
> >>>> Hi Pavan,
> >>>> Thanks for reporting the issue. IIRC [1] was meant to fix exactly this
> >>>> issue. At the time it was written I tested it and it seemed to work.
> >>>> Maybe I missed something or some other change introduced afterwards
> >>>> affected the shutoff logic. I'll take a closer look next week when I'm
> >>>> back at my computer and will consult with Johannes.
> >>>
> >>> Sorry for the delay. I had some time to look into this and test psi
> >>> shutoff on my device, and I think you are right. The patch I mentioned
> >>> prevents a new psi_avgs_work from being scheduled when the only non-idle
> >>> task is psi_avgs_work itself; however, the regular 2-second averaging
> >>> work will still go on. I think we could record the fact that the only
> >>> active task is psi_avgs_work in record_times() using a new
> >>> psi_group_cpu.state_mask flag, and then prevent psi_avgs_work() from
> >>> rescheduling itself if that flag is set for all non-idle CPUs. I'll
> >>> test this approach and will post a patch for review if that works.
> >>
> >> Hi Pavan,
> >> Testing PSI shutoff on Android proved more difficult than I expected.
> >> Lots of tasks to silence and I keep encountering new ones.
> >> The approach I was thinking about is something like this:
> >>
> >> ---
> >>  include/linux/psi_types.h |  3 +++
> >>  kernel/sched/psi.c        | 12 +++++++++---
> >>  2 files changed, 12 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/include/linux/psi_types.h b/include/linux/psi_types.h
> >> index c7fe7c089718..8d936f22cb5b 100644
> >> --- a/include/linux/psi_types.h
> >> +++ b/include/linux/psi_types.h
> >> @@ -68,6 +68,9 @@ enum psi_states {
> >>  	NR_PSI_STATES = 7,
> >>  };
> >>
> >> +/* state_mask flag to keep re-arming averaging work */
> >> +#define PSI_STATE_WAKE_CLOCK (1 << NR_PSI_STATES)
> >> +
> >>  enum psi_aggregators {
> >>  	PSI_AVGS = 0,
> >>  	PSI_POLL,
> >> diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
> >> index ecb4b4ff4ce0..dd62ad28bacd 100644
> >> --- a/kernel/sched/psi.c
> >> +++ b/kernel/sched/psi.c
> >> @@ -278,6 +278,7 @@ static void get_recent_times(struct psi_group *group, int cpu,
> >>  		if (delta)
> >>  			*pchanged_states |= (1 << s);
> >>  	}
> >> +	*pchanged_states |= (state_mask & PSI_STATE_WAKE_CLOCK);
> >
> > If the avgs_work kworker is running on this CPU, won't it still see
> > PSI_STATE_WAKE_CLOCK set in state_mask, so the work gets re-armed anyway?
> >
> > Maybe I missed something... but I have a different idea that I'd like to
> > implement later; I'm including it below only for discussion.
>
> diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
> index ee2ecc081422..f322e8fd8d41 100644
> --- a/kernel/sched/psi.c
> +++ b/kernel/sched/psi.c
> @@ -241,11 +241,13 @@ static void get_recent_times(struct psi_group *group, int cpu,
>  			     enum psi_aggregators aggregator, u32 *times,
>  			     u32 *pchanged_states)
>  {
> +	int current_cpu = raw_smp_processor_id();
>  	struct psi_group_cpu *groupc = per_cpu_ptr(group->pcpu, cpu);
>  	u64 now, state_start;
>  	enum psi_states s;
>  	unsigned int seq;
>  	u32 state_mask;
> +	bool only_avgs_work = false;
>
>  	*pchanged_states = 0;
>
> @@ -256,6 +258,14 @@ static void get_recent_times(struct psi_group *group, int cpu,
>  		memcpy(times, groupc->times, sizeof(groupc->times));
>  		state_mask = groupc->state_mask;
>  		state_start = groupc->state_start;
> +		/*
> +		 * This CPU has only avgs_work kworker running, snapshot the
> +		 * newest times then don't need to re-arm work for this groupc.
> +		 * Normally this kworker will sleep soon and won't
> +		 * wake_clock in psi_group_change().
> +		 */
> +		if (current_cpu == cpu && groupc->tasks[NR_RUNNING] == 1)
> +			only_avgs_work = true;

Why do you determine only_avgs_work while taking the snapshot? The
read_seqcount_retry() check might fail and the loop gets retried, and since
only_avgs_work is only ever set and never cleared inside the loop, it can be
left with a stale value if the state changes between retries. I think it's
safer to do this after the snapshot was taken, using a snapshotted
tasks[NR_RUNNING] instead of reading groupc->tasks directly.
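
Something along these lines, maybe (rough and untested, just to illustrate
the idea; the tasks[] local would be new, everything else is from your patch):

	unsigned int tasks[NR_PSI_TASK_COUNTS];

	/* Snapshot a coherent view of the CPU state */
	do {
		seq = read_seqcount_begin(&groupc->seq);
		now = cpu_clock(cpu);
		memcpy(times, groupc->times, sizeof(groupc->times));
		memcpy(tasks, groupc->tasks, sizeof(groupc->tasks));
		state_mask = groupc->state_mask;
		state_start = groupc->state_start;
	} while (read_seqcount_retry(&groupc->seq, seq));

	/* Evaluated only once the snapshot is known to be consistent */
	only_avgs_work = (current_cpu == cpu && tasks[NR_RUNNING] == 1);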

>  	} while (read_seqcount_retry(&groupc->seq, seq));
>
>  	/* Calculate state time deltas against the previous snapshot */
> @@ -280,6 +290,10 @@ static void get_recent_times(struct psi_group *group, int cpu,
>  		if (delta)
>  			*pchanged_states |= (1 << s);
>  	}
> +
> +	/* Clear PSI_NONIDLE so avgs_work won't be re-armed for this groupc */
> +	if (only_avgs_work)
> +		*pchanged_states &= ~(1 << PSI_NONIDLE);

This seems to be safe, because changed_states & (1 << PSI_NONIDLE) is used
only for re-arming psi_avgs_work; however, semantically it is incorrect:
the CPU was not idle while it was executing psi_avgs_work.
IMO a separate flag would avoid this confusion.
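
E.g. keep PSI_NONIDLE meaning what it says and signal the re-arm decision
through a dedicated changed_states bit instead (again a rough, untested
sketch; PSI_STATE_RESCHEDULE is just an illustrative name):

/* in psi_types.h: a flag bit above the enumerated PSI states */
#define PSI_STATE_RESCHEDULE	(1 << NR_PSI_STATES)

	/* at the end of get_recent_times(), instead of clearing PSI_NONIDLE */
	if ((*pchanged_states & (1 << PSI_NONIDLE)) && !only_avgs_work)
		*pchanged_states |= PSI_STATE_RESCHEDULE;

	/* in psi_avgs_work(), re-arm on the new bit rather than PSI_NONIDLE */
	if (changed_states & PSI_STATE_RESCHEDULE)
		schedule_delayed_work(dwork, nsecs_to_jiffies(
				group->avg_next_update - now) + 1);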

> }
>
> static void calc_avgs(unsigned long avg[3], int missed_periods,
>
>
> >
> > Thanks.
> >
> >>  }
> >>
> >>  static void calc_avgs(unsigned long avg[3], int missed_periods,
> >> @@ -413,7 +414,7 @@ static void psi_avgs_work(struct work_struct *work)
> >>  	struct delayed_work *dwork;
> >>  	struct psi_group *group;
> >>  	u32 changed_states;
> >> -	bool nonidle;
> >> +	bool wake_clock;
> >>  	u64 now;
> >>
> >>  	dwork = to_delayed_work(work);
> >> @@ -424,7 +425,7 @@ static void psi_avgs_work(struct work_struct *work)
> >>  	now = sched_clock();
> >>
> >>  	collect_percpu_times(group, PSI_AVGS, &changed_states);
> >> -	nonidle = changed_states & (1 << PSI_NONIDLE);
> >> +	wake_clock = changed_states & PSI_STATE_WAKE_CLOCK;
> >>  	/*
> >>  	 * If there is task activity, periodically fold the per-cpu
> >>  	 * times and feed samples into the running averages. If things
> >> @@ -435,7 +436,7 @@ static void psi_avgs_work(struct work_struct *work)
> >>  	if (now >= group->avg_next_update)
> >>  		group->avg_next_update = update_averages(group, now);
> >>
> >> -	if (nonidle) {
> >> +	if (wake_clock) {
> >>  		schedule_delayed_work(dwork, nsecs_to_jiffies(
> >>  				group->avg_next_update - now) + 1);
> >>  	}
> >> @@ -742,6 +743,11 @@ static void psi_group_change(struct psi_group *group, int cpu,
> >>  	if (unlikely(groupc->tasks[NR_ONCPU] && cpu_curr(cpu)->in_memstall))
> >>  		state_mask |= (1 << PSI_MEM_FULL);
> >>
> >> +	if (wake_clock || test_state(groupc->tasks, PSI_NONIDLE)) {
> >> +		/* psi_avgs_work was not the only task on the CPU */
> >> +		state_mask |= PSI_STATE_WAKE_CLOCK;
> >> +	}
> >> +
> >>  	groupc->state_mask = state_mask;
> >>
> >>  	write_seqcount_end(&groupc->seq);