Re: [PATCH v2] sched, timer: Use atomics for thread_group_cputimer to improve scalability

From: Jason Low
Date: Mon Mar 02 2015 - 16:19:39 EST


On Mon, 2015-03-02 at 20:40 +0100, Oleg Nesterov wrote:
> Well, I forgot everything about this code, but let me ask anyway ;)
>
> On 03/02, Jason Low wrote:

> > @@ -222,13 +239,10 @@ void thread_group_cputimer(struct task_struct *tsk, struct task_cputime *times)
> > * it.
> > */
> > thread_group_cputime(tsk, &sum);
> > - raw_spin_lock_irqsave(&cputimer->lock, flags);
> > - cputimer->running = 1;
> > - update_gt_cputime(&cputimer->cputime, &sum);
> > - } else
> > - raw_spin_lock_irqsave(&cputimer->lock, flags);
> > - *times = cputimer->cputime;
> > - raw_spin_unlock_irqrestore(&cputimer->lock, flags);
> > + update_gt_cputime(cputimer, &sum);
> > + ACCESS_ONCE(cputimer->running) = 1;
>
> WRITE_ONCE() looks better...

Okay, I can update that.

> but it is not clear to me why we need it
> at all.

Peter suggested it here, since we would now be updating the running
field without holding the lock:

https://lkml.org/lkml/2015/1/23/641
