Re: 4.19.106-rt44 -- boot problems with irqwork: push most work into softirq context

From: Pavel Machek
Date: Sat Mar 21 2020 - 19:00:41 EST


Hi!

> > > > Does this patch help?
> > >
> > > I don't think so. It also failed, and the failure seems to be
> > > identical to me.
> > >
> > > https://gitlab.com/cip-project/cip-kernel/linux-cip/tree/ci/pavel/linux-cip
> > > https://lava.ciplatform.org/scheduler/job/13110
> > >
> >
> > Can you send me a patch that shows the difference between the revert that
> > you say works, and the upstream v4.19-rt tree (let me know which version
> > of v4.19-rt you are basing it on).
>
> I was using -rt44, and yes, I can probably generate better diffs.
>
> But I guess I found it with code review: how does this look to you? I
> applied it on top of your fix, and am testing. 2 successes so far.

And I'd recommend some kind of cleanup on top. The code is really
"interesting", and we don't want to have two copies of the list-selection
logic. The cleanup below is totally untested.

Looking at the code, it could probably be cleaned up further.
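For example (just a sketch of one possible direction, not even
compile-tested): the helper could take the CPU and hand back the per-CPU
list directly, so both callers collapse to a single line:

/* Pick the per-CPU list @work should be queued on (sketch only). */
static struct llist_head *irq_work_get_list(struct irq_work *work, int cpu)
{
	if ((IS_ENABLED(CONFIG_PREEMPT_RT_FULL) && !(work->flags & IRQ_WORK_HARD_IRQ))
	    || (work->flags & IRQ_WORK_LAZY))
		return per_cpu_ptr(&lazy_list, cpu);

	return per_cpu_ptr(&raised_list, cpu);
}

irq_work_queue() would then call irq_work_get_list(work, smp_processor_id())
under preempt_disable(), and irq_work_queue_on() would just pass its cpu
argument. But that can be a follow-up; the minimal version is below.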

Signed-off-by: Pavel Machek <pavel@xxxxxxx>

Best regards,
Pavel

diff --git a/kernel/irq_work.c b/kernel/irq_work.c
index dd654865c219..88211b87d4e3 100644
--- a/kernel/irq_work.c
+++ b/kernel/irq_work.c
@@ -70,10 +70,19 @@ static void __irq_work_queue_local(struct irq_work *work, struct llist_head *lis
 		arch_irq_work_raise();
 }
 
+static struct llist_head *irq_work_get_list(struct irq_work *work)
+{
+	if ((IS_ENABLED(CONFIG_PREEMPT_RT_FULL) && !(work->flags & IRQ_WORK_HARD_IRQ))
+	    || (work->flags & IRQ_WORK_LAZY))
+		return &lazy_list;
+	else
+		return &raised_list;
+}
+
 /* Enqueue the irq work @work on the current CPU */
 bool irq_work_queue(struct irq_work *work)
 {
-	struct llist_head *list;
+	struct llist_head *list;
 
 	/* Only queue if not already pending */
 	if (!irq_work_claim(work))
@@ -81,12 +90,7 @@ bool irq_work_queue(struct irq_work *work)

 	/* Queue the entry and raise the IPI if needed. */
 	preempt_disable();
-	if ((IS_ENABLED(CONFIG_PREEMPT_RT_FULL) && !(work->flags & IRQ_WORK_HARD_IRQ))
-	    || (work->flags & IRQ_WORK_LAZY))
-		list = this_cpu_ptr(&lazy_list);
-	else
-		list = this_cpu_ptr(&raised_list);
-
+	list = this_cpu_ptr(irq_work_get_list(work));
 	__irq_work_queue_local(work, list);
 	preempt_enable();

@@ -107,7 +111,6 @@ bool irq_work_queue_on(struct irq_work *work, int cpu)

 #else /* CONFIG_SMP: */
 	struct llist_head *list;
-	bool lazy_work, realtime = IS_ENABLED(CONFIG_PREEMPT_RT_FULL);
 
 	/* All work should have been flushed before going offline */
 	WARN_ON_ONCE(cpu_is_offline(cpu));
@@ -117,13 +120,8 @@ bool irq_work_queue_on(struct irq_work *work, int cpu)
 		return false;
 
 	preempt_disable();
-
-	lazy_work = work->flags & IRQ_WORK_LAZY;
-
-	if (lazy_work || (realtime && !(work->flags & IRQ_WORK_HARD_IRQ)))
-		list = &per_cpu(lazy_list, cpu);
-	else
-		list = &per_cpu(raised_list, cpu);
+	list = irq_work_get_list(work);
+	list = per_cpu_ptr(list, cpu);
 
 	if (cpu != smp_processor_id()) {
 		/* Arch remote IPI send/receive backend aren't NMI safe */

--
DENX Software Engineering GmbH, Managing Director: Wolfgang Denk
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
