Re: [PATCH 7/9] irq: remove needless lock in takedown_cpu()

From: Pingfan Liu
Date: Wed Apr 27 2022 - 02:02:07 EST


On Mon, Apr 25, 2022 at 11:43:03AM +0200, Thomas Gleixner wrote:
> On Mon, Apr 25 2022 at 10:57, Pingfan Liu wrote:
> > On Thu, Apr 21, 2022 at 06:11:56PM +0200, Thomas Gleixner wrote:
> >> > - irq_lock_sparse();
> >>
> >> Not everything is about RCU here. You really need to look at all moving
> >> parts:
> >>
> >> irq_migrate_all_off_this_cpu() relies on the allocated_irqs bitmap and
> >> the sparse tree to be in consistent state, which is only guaranteed when
> >> the sparse lock is held.
> >>
> >
> > For an irq which transitions from active to inactive (disappearing)
> > after being fetched, desc->lock can serve the synchronization purpose,
> > so irq_lock_sparse() is not needed in that case. For an emerging irq,
> > I am not sure.
>
> No, it's required for the free case. The alloc case is
> uninteresting. Care to look into the code?
>

Yes, it is a good exercise. Thanks for the enlightenment.

> irq_free_descs()
>   lock(sparse);
>   free_descs();
>   bitmap_clear(allocated_irqs, from, cnt);
>   unlock(sparse);
>
> As free_descs() sets the sparse tree entry to NULL, up to the point
> where bitmap_clear() finishes the state is inconsistent.
>
> Now look at irq_migrate_all_off_this_cpu() and figure out what happens
> when stop_machine() hits into the inconsistent state.
>

So the following code should fix the inconsistency between the bitmap and
the sparse tree.
diff --git a/kernel/irq/cpuhotplug.c b/kernel/irq/cpuhotplug.c
index 1ed2b1739363..cd0d180f082d 100644
--- a/kernel/irq/cpuhotplug.c
+++ b/kernel/irq/cpuhotplug.c
@@ -161,6 +161,8 @@ void irq_migrate_all_off_this_cpu(void)
 		bool affinity_broken;
 
 		desc = irq_to_desc(irq);
+		if (!desc)
+			continue;
 		raw_spin_lock(&desc->lock);
 		affinity_broken = migrate_one_irq(desc);
 		raw_spin_unlock(&desc->lock);

> This can be fixed, but not by making mysterious claims about RCU and
> desc->lock.
>

But I still think that desc->lock is critical to the consistency of the
irq _affinity_ if the sparse lock is removed from takedown_cpu().

For the free case, after applying the above patch, it should work.
void irq_migrate_all_off_this_cpu(void)
{
	for_each_active_irq(irq) {

		desc = irq_to_desc(irq);
		if (!desc)
			continue;	---> if a free breaks in here,
					     migrate_one_irq() will skip the
					     irq, since it is no longer
					     activated
		raw_spin_lock(&desc->lock);
		affinity_broken = migrate_one_irq(desc);
		raw_spin_unlock(&desc->lock);
		...
	}
}

But for the alloc case, it could be a problem.
void irq_migrate_all_off_this_cpu(void)
{
	for_each_active_irq(irq) {

		desc = irq_to_desc(irq);
		if (!desc)
			continue;
		raw_spin_lock(&desc->lock);
		affinity_broken = migrate_one_irq(desc);
		raw_spin_unlock(&desc->lock);
		...
	}	---> any newly allocated irq will not be detected here,
	             but alloc_descs(start, cnt, node, affinity) still
	             associates the irq with this cpu. There is _no_
	             opportunity to clear this cpu from
	             desc->irq_common_data.affinity. This is the
	             affinity inconsistency problem.
}


Thanks,

Pingfan