Re: [PATCH v2 6/6] rcu-tasks: Maintain real-time response in rcu_tasks_postscan()

From: Paul E. McKenney
Date: Thu Feb 22 2024 - 17:56:56 EST


On Thu, Feb 22, 2024 at 12:52:24PM -0800, Paul E. McKenney wrote:
> On Thu, Feb 22, 2024 at 06:48:47PM +0100, Frederic Weisbecker wrote:
> > On Fri, Feb 16, 2024 at 05:27:41PM -0800, Boqun Feng wrote:
> > > From: "Paul E. McKenney" <paulmck@xxxxxxxxxx>
> > >
> > > The current code will scan the entirety of each per-CPU list of exiting
> > > tasks in ->rtp_exit_list with interrupts disabled. This is normally just
> > > fine, because each CPU typically won't have very many tasks in this state.
> > > However, if a large number of tasks block late in do_exit(), these lists
> > > could be arbitrarily long. Low probability, perhaps, but it really
> > > could happen.
> > >
> > > This commit therefore occasionally re-enables interrupts while traversing
> > > these lists, inserting a dummy element to hold the current place in the
> > > list. In kernels built with CONFIG_PREEMPT_RT=y, this re-enabling happens
> > > after each list element is processed; otherwise it happens every one to two jiffies.
> > >
> > > Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxx>
> > > Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> > > Cc: Sebastian Siewior <bigeasy@xxxxxxxxxxxxx>
> > > Cc: Anna-Maria Behnsen <anna-maria@xxxxxxxxxxxxx>
> > > Cc: Steven Rostedt <rostedt@xxxxxxxxxxx>
> > > Signed-off-by: Boqun Feng <boqun.feng@xxxxxxxxx>
> > > ---
> > > kernel/rcu/tasks.h | 21 ++++++++++++++++++++-
> > > 1 file changed, 20 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
> > > index 4dc355b2ac22..866743e0796f 100644
> > > --- a/kernel/rcu/tasks.h
> > > +++ b/kernel/rcu/tasks.h
> > > @@ -971,13 +971,32 @@ static void rcu_tasks_postscan(struct list_head *hop)
> > > */
> > >
> > > for_each_possible_cpu(cpu) {
> > > + unsigned long j = jiffies + 1;
> > > struct rcu_tasks_percpu *rtpcp = per_cpu_ptr(rcu_tasks.rtpcpu, cpu);
> > > struct task_struct *t;
> > > + struct task_struct *t1;
> > > + struct list_head tmp;
> > >
> > > raw_spin_lock_irq_rcu_node(rtpcp);
> > > - list_for_each_entry(t, &rtpcp->rtp_exit_list, rcu_tasks_exit_list)
> > > + list_for_each_entry_safe(t, t1, &rtpcp->rtp_exit_list, rcu_tasks_exit_list) {
> > > if (list_empty(&t->rcu_tasks_holdout_list))
> > > rcu_tasks_pertask(t, hop);
> > > +
> > > + // RT kernels need frequent pauses, otherwise
> > > + // pause at least once per pair of jiffies.
> > > + if (!IS_ENABLED(CONFIG_PREEMPT_RT) && time_before(jiffies, j))
> > > + continue;
> > > +
> > > + // Keep our place in the list while pausing.
> > > + // Nothing else traverses this list, so adding a
> > > + // bare list_head is OK.
> > > + list_add(&tmp, &t->rcu_tasks_exit_list);
> >
> > I'm a bit confused about what this does...
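>
> The list_add() inserts the dummy element tmp immediately after t's
> entry, so tmp holds the scan's place in the list while the lock is
> dropped below. Because nothing else traverses this list, a bare
> list_head with no enclosing task_struct is fine, and the later
> list_del(&tmp) removes the placeholder once the lock is held again.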
> >
> > > + raw_spin_unlock_irq_rcu_node(rtpcp);
> > > + cond_resched(); // For CONFIG_PREEMPT=n kernels
> > > + raw_spin_lock_irq_rcu_node(rtpcp);
> > > + list_del(&tmp);
> >
> > Isn't there a risk that t is reaped by then? If it was not observed on_rq
> > while calling rcu_tasks_pertask() then there is no get_task_struct.
>
> That is OK, courtesy of the _safe in list_for_each_entry_safe().
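>
> For reference, list_for_each_entry_safe() in include/linux/list.h is
> roughly the following, with t and t1 playing the roles of pos and n:
>
> #define list_for_each_entry_safe(pos, n, head, member)			\
> 	for (pos = list_first_entry(head, typeof(*pos), member),	\
> 		n = list_next_entry(pos, member);			\
> 	     !list_entry_is_head(pos, head, member);			\
> 	     pos = n, n = list_next_entry(n, member))
>
> The next element (t1) is sampled before the loop body runs, so a
> reaped t does not affect the traversal.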
>
> > And what about t1? Can't it be reaped as well?
>
> It can, and that is a problem, good catch!
>
> My current thought is to add this before the list_del(), which is
> admittedly a bit crude:
>
> t1 = tmp.next;

OK, OK... ;-)

t1 = list_entry(tmp.next, struct task_struct, rcu_tasks_exit_list);
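
For concreteness, an (untested) sketch of how that would slot into the
pause sequence, refreshing t1 while the lock is held and before the
placeholder is removed:

	// Keep our place in the list while pausing.
	// Nothing else traverses this list, so adding a
	// bare list_head is OK.
	list_add(&tmp, &t->rcu_tasks_exit_list);
	raw_spin_unlock_irq_rcu_node(rtpcp);
	cond_resched(); // For CONFIG_PREEMPT=n kernels
	raw_spin_lock_irq_rcu_node(rtpcp);
	// Refresh t1: the task it referenced might have been
	// reaped while interrupts were enabled above.
	t1 = list_entry(tmp.next, struct task_struct, rcu_tasks_exit_list);
	list_del(&tmp);
	j = jiffies + 1;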

Is there still a better way?

Thanx, Paul

> > Thanks.
> >
> >
> > > + j = jiffies + 1;
> > > + }
> > > raw_spin_unlock_irq_rcu_node(rtpcp);
> > > }
> > >
> > > --
> > > 2.43.0
> > >