Re: [PATCH 2/2] getrusage: use sig->stats_lock

From: Dylan Hatch
Date: Fri Jan 19 2024 - 22:28:15 EST


On Fri, Jan 19, 2024 at 6:16 AM Oleg Nesterov <oleg@xxxxxxxxxx> wrote:
>
> Use sig->stats_lock rather than lock_task_sighand(); sig->stats_lock was
> specifically designed for this type of use. This way getrusage() runs
> lockless in the likely case.
>
> TODO:
> - Change do_task_stat() to use sig->stats_lock too, then we can
> remove spin_lock_irq(siglock) in wait_task_zombie().
>
> - Turn sig->stats_lock into seqcount_rwlock_t; this way the
> readers in the slow mode won't exclude each other. See
> https://lore.kernel.org/all/20230913154907.GA26210@xxxxxxxxxx/
> 
> - stats_lock has to disable irqs because ->siglock can be taken
> in irq context; it would be very nice to change __exit_signal()
> to avoid the siglock->stats_lock dependency.
>
> Signed-off-by: Oleg Nesterov <oleg@xxxxxxxxxx>
> ---
> kernel/sys.c | 16 +++++++++++++---
> 1 file changed, 13 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sys.c b/kernel/sys.c
> index 70ad06ad852e..f8e543f1e38a 100644
> --- a/kernel/sys.c
> +++ b/kernel/sys.c
> @@ -1788,7 +1788,9 @@ void getrusage(struct task_struct *p, int who, struct rusage *r)
>  	unsigned long maxrss;
>  	struct mm_struct *mm;
>  	struct signal_struct *sig = p->signal;
> +	unsigned int seq = 0;
> 
> +retry:
>  	memset(r, 0, sizeof(*r));
>  	utime = stime = 0;
>  	maxrss = 0;
> @@ -1800,8 +1802,7 @@ void getrusage(struct task_struct *p, int who, struct rusage *r)
>  		goto out_thread;
>  	}
> 
> -	if (!lock_task_sighand(p, &flags))
> -		return;
> +	flags = read_seqbegin_or_lock_irqsave(&sig->stats_lock, &seq);
> 
>  	switch (who) {
>  	case RUSAGE_BOTH:
> @@ -1829,14 +1830,23 @@ void getrusage(struct task_struct *p, int who, struct rusage *r)
>  		r->ru_oublock += sig->oublock;
>  		if (maxrss < sig->maxrss)
>  			maxrss = sig->maxrss;
> +
> +		rcu_read_lock();
>  		__for_each_thread(sig, t)
>  			accumulate_thread_rusage(t, r);
> +		rcu_read_unlock();
> +
>  		break;
> 
>  	default:
>  		BUG();
>  	}
> -	unlock_task_sighand(p, &flags);
> +
> +	if (need_seqretry(&sig->stats_lock, seq)) {
> +		seq = 1;
> +		goto retry;
> +	}
> +	done_seqretry_irqrestore(&sig->stats_lock, seq, flags);
> 
>  	if (who == RUSAGE_CHILDREN)
>  		goto out_children;
> --
> 2.25.1.362.g51ebf55
>
>
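
For anyone following along: the "runs lockless in the likely case" part of the
description above refers to the seqlock reader pattern around sig->stats_lock.
As a rough illustration only (snapshot_sig_times() is a made-up helper and the
utime/stime fields are just example data; this sketch is not part of the
patch), the shape of such a reader is:

static void snapshot_sig_times(struct signal_struct *sig, u64 *ut, u64 *st)
{
	unsigned int seq = 0;
	unsigned long flags;

retry:
	/* First pass (seq == 0): lockless read; writers bump the seqcount. */
	flags = read_seqbegin_or_lock_irqsave(&sig->stats_lock, &seq);

	*ut = sig->utime;
	*st = sig->stime;

	if (need_seqretry(&sig->stats_lock, seq)) {
		/* A writer raced with us: retry with the lock held. */
		seq = 1;
		goto retry;
	}
	done_seqretry_irqrestore(&sig->stats_lock, seq, flags);
}

The second, locked pass excludes other slow-path readers as well, which is why
the seqcount_rwlock_t item in the TODO above would help.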

I applied these patches to a 5.10 kernel, and my repro (calling getrusage(RUSAGE_SELF)
from 200K threads) no longer triggers a hard lockup.
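
In case it helps anyone else chasing this, the reproducer is essentially many
threads spinning on getrusage(RUSAGE_SELF). A simplified sketch (not the exact
test program; thread count, stack sizing, and error handling are illustrative,
and the thread/pid limits have to be raised to actually reach 200K threads),
built with -pthread:

#include <limits.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/resource.h>
#include <unistd.h>

#define NTHREADS 200000

static void *worker(void *arg)
{
	struct rusage ru;

	(void)arg;
	for (;;)				/* hammer the syscall */
		getrusage(RUSAGE_SELF, &ru);
	return NULL;
}

int main(void)
{
	pthread_attr_t attr;
	pthread_t tid;
	long i;
	int ret;

	/* small stacks so a few hundred thousand threads stay cheap */
	pthread_attr_init(&attr);
	pthread_attr_setstacksize(&attr, PTHREAD_STACK_MIN);

	for (i = 0; i < NTHREADS; i++) {
		ret = pthread_create(&tid, &attr, worker, NULL);
		if (ret) {
			fprintf(stderr, "pthread_create: %s after %ld threads\n",
				strerror(ret), i);
			break;
		}
	}
	pause();	/* let the workers spin; watch for hard-lockup splats */
	return 0;
}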

Tested-by: Dylan Hatch <dylanbhatch@xxxxxxxxxx>