Re: [PATCH v10 18/20] timers: Implement the hierarchical pull model

From: Frederic Weisbecker
Date: Mon Jan 29 2024 - 17:22:04 EST


On Mon, Jan 29, 2024 at 11:50:39AM +0100, Anna-Maria Behnsen wrote:
> Frederic Weisbecker <frederic@xxxxxxxxxx> writes:
>
> > On Mon, Jan 15, 2024 at 03:37:41PM +0100, Anna-Maria Behnsen wrote:
> >> +static bool tmigr_inactive_up(struct tmigr_group *group,
> >> +                              struct tmigr_group *child,
> >> +                              void *ptr)
> >> +{
> >> +	union tmigr_state curstate, newstate, childstate;
> >> +	struct tmigr_walk *data = ptr;
> >> +	bool walk_done;
> >> +	u8 childmask;
> >> +
> >> +	childmask = data->childmask;
> >> +	curstate.state = atomic_read(&group->migr_state);
> >> +	childstate.state = 0;
> >> +
> >> +	do {
> >
> > So I got the confirmation from Boqun (+Cc) and Paul that a failing cmpxchg
> > may not order the load of the old value against subsequent loads. And
> > that may apply to atomic_try_cmpxchg() as well.
> >
> > Therefore you not only need to turn the group->migr_state read into
> > an atomic_read_acquire(), but you also need to do this on each iteration
> > of this loop. For example you can move the read_acquire right here.
>
> I tried to read up on and understand more about memory barriers,
> especially the acquire/release stuff. So please correct me wherever I'm
> wrong.
>
> We have to make sure that the child/group state values contain the
> latest updates, and prevent reordering, to be able to rely on those
> values.
>
> So I understand that we need the atomic_read_acquire() here for the
> child state, because we change the group state accordingly and need to
> make sure that it contains the last update. The cmpxchg which
> writes the child state is (on success) a full memory barrier. And the
> atomic_read_acquire() makes sure all preceding "critical sections"
> (which end with the full memory barrier) are visible. Is this right?

Right. And BTW, Paul suggested that I avoid atomic_read_acquire() after
a cmpxchg() failure, because that implies an error prone re-read. So
pick your favourite between smp_rmb() and smp_mb__after_atomic().

With the latter this could look like:

curstate.state = atomic_read_acquire(&group->migr_state);
for (;;) {
	childstate.state = atomic_read(&child->migr_state);
	...
	if (atomic_try_cmpxchg(&group->migr_state, &curstate.state, newstate.state))
		break;
	smp_mb__after_atomic();
}
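
For completeness, the smp_rmb() flavour would be the same loop shape,
just with the weaker barrier on the failure path (it only has to order
the failed cmpxchg()'s read against the re-read of the child state on
the next iteration):

curstate.state = atomic_read_acquire(&group->migr_state);
for (;;) {
	childstate.state = atomic_read(&child->migr_state);
	...
	if (atomic_try_cmpxchg(&group->migr_state, &curstate.state, newstate.state))
		break;
	/*
	 * Order the failed cmpxchg()'s read of group->migr_state
	 * against the child->migr_state read on the next pass.
	 */
	smp_rmb();
}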

>
> To make sure the proper states are used, atomic_read_acquire() is then
> also required in:
> - tmigr_check_migrator()
> - tmigr_check_migrator_and_lonely()
> - tmigr_check_lonely()

Not sure about those. I'll check them.
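If some of them do need it, the fix itself should be mechanical,
something along these lines (a sketch, assuming they currently start
from a plain read of the group state):

/*
 * Pair with the full barrier of the cmpxchg() that last updated
 * migr_state, so that everything preceding that update is visible.
 */
curstate.state = atomic_read_acquire(&group->migr_state);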

> - tmigr_new_timer_up() (for childstate and groupstate)

Actually you need to fix some ordering there that I suggested a while ago :)
See https://lore.kernel.org/all/ZIhKT3h7Dc0G3xoU@lothringen/

> - tmigr_connect_child_parent()
> Right?
>
> Regarding the pairing of acquire: What happens when two
> atomic_read_acquire() calls are executed one after the other without
> pairing 1:1 with a release or a stronger memory barrier?

I think I'll need an example.

>
> Now I want to understand the case for the group state here and also in
> the active_up path. When reading it without acquire, it is possible that
> not all changes are visible due to reordering. But then the worst
> outcome would be that the cmpxchg fails and the loop has to be done once
> more. Is this right?

Right. This one looks good as it doesn't depend on the child's value.
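
IOW, something like this should be enough there (a sketch, not the
actual tmigr_active_up() body), because a stale read only costs one
more iteration:

curstate.state = atomic_read(&group->migr_state);
do {
	/*
	 * newstate is computed from curstate alone; if curstate was
	 * stale, the cmpxchg fails and hands back the fresh value.
	 */
	newstate = curstate;
	...
} while (!atomic_try_cmpxchg(&group->migr_state, &curstate.state, newstate.state));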

>
> I know that memory barriers are not for free and redoing the loop is
> also not for free. But I don't know which of the two is worse. At least
> in the inactive_up() path we are not on the critical path. In
> active_up() it would be good to take the less expensive option.

I don't think you need to change the active_up(), from a quick glance.

>
> I want to understand the atomic_try_cmpxchg_acquire() variant: the read
> is an acquire, so even if the compare/write fails, the value which is
> handed back is the one which was last updated by a successful cmpxchg,
> and then we can rely on this value?

So cmpxchg_acquire() provides a weaker ordering than cmpxchg(). Instead
of issuing a full memory barrier, it issues an acquire barrier, which is
really not what you want since you actually want to order what precedes
the cmpxchg() with the write that it performs. At the very least you would
actually need cmpxchg_release().

And most importantly, neither cmpxchg(), cmpxchg_release() nor cmpxchg_acquire()
guarantee any ordering on failure.
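
So if prior stores must be visible before a successful update of
migr_state, something like this would be the minimum (a sketch; the
failure path still needs its own barrier, as discussed above):

/*
 * Release: order all stores preceding this point before the
 * migr_state update. This only applies on success.
 */
if (!atomic_try_cmpxchg_release(&group->migr_state, &curstate.state, newstate.state)) {
	/*
	 * Failure gives no ordering at all; add the barrier before
	 * re-reading any state.
	 */
	smp_mb__after_atomic();
}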

Thanks.