Re: [RFC][PATCH 09/13] hotplug: Replace hotplug lock with percpu-rwsem

From: Oleg Nesterov
Date: Wed Jun 24 2015 - 11:14:57 EST


On 06/24, Peter Zijlstra wrote:
>
> On Wed, Jun 24, 2015 at 03:50:49PM +0200, Oleg Nesterov wrote:
> > On 06/23, Peter Zijlstra wrote:
> > > If in one callchain we do:
> > >
> > > get_online_cpus();
> > > lock(A);
> > >
> > > in another we do:
> > >
> > > lock(A);
> > > get_online_cpus();
> > >
> > > lockdep will complain about the inverted lock order, however this is not
> > > a problem at all for recursive locks.
> >
> > Ah, but in this case lockdep is right. This is deadlockable because
> > with the new implementation percpu_down_write() blocks the new readers.
> > So this change just hides the valid warning.
> >
> > Just suppose that the 3rd CPU does percpu_down_write()->down_write()
> > right after the 2nd CPU (above) takes lock(A).
> >
> > I have to admit that I didn't realize that the code above is currently
> > correct... but it is.
> >
> > So we need percpu_down_write_dont_block_readers(). I already thought
> > about this before, I'll try to make the patch tomorrow on top of your
> > changes.
> >
> > This means that we do not need task_struct->cpuhp_ref, but we can't
> > avoid livelock we currently have: cpu_hotplug_begin() can never succeed
> > if the new readers come fast enough.
>
> I'm confused.. why isn't the read-in-read recursion good enough?

Because the code above can actually deadlock if two CPUs run those call
chains at the same time?

task_struct->cpuhp_ref only makes read-in-read work, but
percpu_down_write() blocks the new readers.
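
Just to make that concrete, a minimal user-space model of these semantics
(not the kernel code; cpuhp_ref stands in for task_struct->cpuhp_ref, and
the locking is only an ordinary mutex/condvar):

	/* Model only: a writer that blocks *new* readers, plus a per-task
	 * counter that only helps read-in-read recursion. */
	#include <pthread.h>
	#include <stdbool.h>

	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
	static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
	static bool readers_block;	/* set by the "writer" */
	static int  active_readers;
	static __thread int cpuhp_ref;	/* per-task recursion count */

	static void get_online_cpus(void)
	{
		if (cpuhp_ref++)		/* read-in-read: always succeeds */
			return;
		pthread_mutex_lock(&lock);
		while (readers_block)		/* a *new* reader waits for the writer */
			pthread_cond_wait(&cond, &lock);
		active_readers++;
		pthread_mutex_unlock(&lock);
	}

	static void put_online_cpus(void)
	{
		if (--cpuhp_ref)
			return;
		pthread_mutex_lock(&lock);
		if (!--active_readers)
			pthread_cond_broadcast(&cond);	/* wake the waiting writer */
		pthread_mutex_unlock(&lock);
	}

	static void percpu_down_write(void)
	{
		pthread_mutex_lock(&lock);
		readers_block = true;		/* new readers block from now on... */
		while (active_readers)		/* ...and we wait for the old ones */
			pthread_cond_wait(&cond, &lock);
		pthread_mutex_unlock(&lock);
	}

	static void percpu_up_write(void)
	{
		pthread_mutex_lock(&lock);
		readers_block = false;
		pthread_cond_broadcast(&cond);	/* let the blocked new readers in */
		pthread_mutex_unlock(&lock);
	}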

Suppose that ->cpuhp_ref == 0 on CPUs 0 and 1, and suppose that CPU 2
does percpu_down_write() and "sem->state = readers_block" is already
visible to CPU 1 when it calls get_online_cpus().

CPU_0                           CPU_1                           CPU_2

get_online_cpus();              lock(A);

// waits for CPU_1
lock(A);

                                                                // waits for CPU_0
                                                                percpu_down_write();

                                // waits for CPU_2
                                get_online_cpus();
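
In terms of the sketch above (with "A" standing for any ordinary mutex),
the three CPUs are effectively running:

	static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;

	static void cpu0(void)
	{
		get_online_cpus();
		pthread_mutex_lock(&A);	/* waits for CPU_1 */
	}

	static void cpu1(void)
	{
		pthread_mutex_lock(&A);
		get_online_cpus();	/* waits for CPU_2: readers_block is set */
	}

	static void cpu2(void)
	{
		percpu_down_write();	/* waits for CPU_0: active_readers != 0 */
	}

No CPU can make progress, which is exactly the cycle lockdep would report.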


Oleg.
