Re: [PATCH 04/19] sched: Prepare for Core-wide rq->lock

From: Josh Don
Date: Fri Apr 30 2021 - 04:48:50 EST


On Fri, Apr 30, 2021 at 1:20 AM Aubrey Li <aubrey.intel@xxxxxxxxx> wrote:
>
> On Fri, Apr 30, 2021 at 4:40 AM Josh Don <joshdon@xxxxxxxxxx> wrote:
> >
> > On Thu, Apr 29, 2021 at 1:03 AM Aubrey Li <aubrey.intel@xxxxxxxxx> wrote:
> > >
> > > On Thu, Apr 22, 2021 at 8:39 PM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> > > ----snip----
> > > > @@ -199,6 +224,25 @@ void raw_spin_rq_unlock(struct rq *rq)
> > > > raw_spin_unlock(rq_lockp(rq));
> > > > }
> > > >
> > > > +#ifdef CONFIG_SMP
> > > > +/*
> > > > + * double_rq_lock - safely lock two runqueues
> > > > + */
> > > > +void double_rq_lock(struct rq *rq1, struct rq *rq2)
> > > > +{
> > > > + lockdep_assert_irqs_disabled();
> > > > +
> > > > + if (rq1->cpu > rq2->cpu)
> > >
> > > It's still a bit hard for me to digest this function. I guess using (rq->cpu)
> > > can't guarantee the locking order when coresched is enabled.
> > >
> > > - cpu1 and cpu7 shares lockA
> > > - cpu2 and cpu8 shares lockB
> > >
> > > double_rq_lock(1,8) leads to lock(A) and lock(B)
> > > double_rq_lock(7,2) leads to lock(B) and lock(A)
> > >
> > > change to below to avoid ABBA?
> > > + if (__rq_lockp(rq1) > __rq_lockp(rq2))
> > >
> > > Please correct me if I was wrong.
> >
> > Great catch Aubrey. This is possibly what is causing the lockups that
> > Don is seeing.
> >
> > The proposed usage of __rq_lockp() is prone to race with sched core
> > being enabled/disabled. It also won't order properly if we do
> > double_rq_lock(smt0, smt1) vs double_rq_lock(smt1, smt0), since these
> > would have equivalent __rq_lockp().
>
> If __rq_lockp(smt0) == __rq_lockp(smt1), rq0 and rq1 won't swap.
> Later only one rq is locked and the function just returns. I'm not sure
> how that fails to order properly?

If there is a concurrent switch between sched_core enable <-> disable,
the value returned by __rq_lockp() can change between the ordering
comparison and the actual lock acquisition.

In the version you posted directly above, where we swap rq1 and rq2 if
__rq_lockp(rq1) > __rq_lockp(rq2) rather than comparing the cpus, the
following can happen:

cpu 1 and cpu 7 share a core lock when coresched is enabled

- schedcore enabled
- double_lock(7, 1)
- __rq_lockp compares equal for 7 and 1; no swap is done
- schedcore disabled; now __rq_lockp returns the per-rq lock
- lock(__rq_lockp(7)) => lock(7)
- lock(__rq_lockp(1)) => lock(1)

Then we can also have

- schedcore disabled
- double_lock(1, 7)
- __rq_lockp(1) < __rq_lockp(7), so no swap
- lock(__rq_lockp(1)) => lock(1)
- lock(__rq_lockp(7)) => lock(7)

So the first case takes the locks in the order 7->1 and the second in the
order 1->7: the ABBA you pointed out.
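
To make that window concrete, here is roughly what the __rq_lockp()-based
ordering would look like (just a sketch of the variant discussed above, not
the posted patch; it assumes the raw_spin_rq_lock*() helpers from earlier in
this series):

void double_rq_lock(struct rq *rq1, struct rq *rq2)
{
        lockdep_assert_irqs_disabled();

        /* Ordering decision based on a value that can change under us. */
        if (__rq_lockp(rq1) > __rq_lockp(rq2))
                swap(rq1, rq2);

        raw_spin_rq_lock(rq1);
        if (__rq_lockp(rq1) == __rq_lockp(rq2))
                return;         /* both rqs share one lock */

        /*
         * If sched_core was enabled or disabled between the comparison
         * above and this point, __rq_lockp() may now name different
         * locks, so the order we computed no longer matches the locks
         * we actually take -> ABBA as in the example above.
         */
        raw_spin_rq_lock_nested(rq2, SINGLE_DEPTH_NESTING);
}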

>
> > I'd propose an alternative but similar idea: order by core, then break ties
> > by ordering on cpu.
> >
> > +#ifdef CONFIG_SCHED_CORE
> > + if (rq1->core->cpu > rq2->core->cpu)
> > + swap(rq1, rq2);
> > + else if (rq1->core->cpu == rq2->core->cpu && rq1->cpu > rq2->cpu)
> > + swap(rq1, rq2);
>
> That is, why the "else if" branch is needed?

It ensures that siblings of the same core always take their per-rq locks
in the same order when coresched is disabled.

>
> > +#else
> > if (rq1->cpu > rq2->cpu)
> > swap(rq1, rq2);
> > +#endif
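
Putting it together, the full ordering I have in mind would be something
like the below (untested sketch; the helper is only there to keep
double_rq_lock() readable and its name is illustrative):

static inline bool rq_order_less(struct rq *rq1, struct rq *rq2)
{
#ifdef CONFIG_SCHED_CORE
        /*
         * Order by core first; rq->core->cpu does not change when
         * sched_core is enabled/disabled, unlike __rq_lockp().
         */
        if (rq1->core->cpu < rq2->core->cpu)
                return true;
        if (rq1->core->cpu > rq2->core->cpu)
                return false;
        /*
         * Same core: fall through and break the tie by cpu, so that
         * SMT siblings always take their per-rq locks in the same
         * order when coresched is disabled.
         */
#endif
        return rq1->cpu < rq2->cpu;
}

void double_rq_lock(struct rq *rq1, struct rq *rq2)
{
        lockdep_assert_irqs_disabled();

        if (rq_order_less(rq2, rq1))
                swap(rq1, rq2);

        raw_spin_rq_lock(rq1);
        if (__rq_lockp(rq1) == __rq_lockp(rq2))
                return;

        raw_spin_rq_lock_nested(rq2, SINGLE_DEPTH_NESTING);
}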

Best,
Josh