[PATCH] smp, ipi: Speed up IPI handling by invoking the callbacks in reverse order

From: Srivatsa S. Bhat
Date: Wed Jun 04 2014 - 15:41:04 EST


The current implementation of the lockless list (llist) has a drawback:
entries are added at the head, so if we want to traverse the list in FIFO
order (oldest to newest), we need to reverse it first (and this can be
expensive if the list is large, since reversal is an O(n) operation).
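To illustrate the cost being avoided, here is a minimal userspace sketch of
the single-pass, O(n) pointer-flipping walk that the kernel's
llist_reverse_order() performs (the struct and function here are simplified
stand-ins, not the kernel's actual definitions):

```c
#include <stddef.h>

/* Simplified stand-in for the kernel's llist node. */
struct llist_node {
	struct llist_node *next;
};

/*
 * Sketch of what llist_reverse_order() does: walk the singly linked
 * list once, relinking each node to point at its predecessor.  One
 * pointer update per node, hence O(n) in the list length.
 */
static struct llist_node *reverse_order(struct llist_node *head)
{
	struct llist_node *new_head = NULL;

	while (head) {
		struct llist_node *next = head->next;

		head->next = new_head;	/* flip the link backwards */
		new_head = head;	/* new_head trails behind */
		head = next;		/* advance down the old list */
	}
	return new_head;
}
```

Skipping this pass on every IPI is the entire saving the patch is after.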

However, for callbacks that are queued using smp-call-function IPIs, the
only requirements are that:
a. we invoke all of them, without missing any.
b. we invoke them as soon as possible.

In other words, we don't actually need to guarantee that the callbacks
will be invoked in FIFO order. So don't bother reversing the list; just
invoke the callbacks as they are (i.e., in reverse order, newest first).
This should speed up the smp-call-function interrupt handler a bit when
flushing multiple pending callbacks upon receiving a single IPI.
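Since llist adds entries at the head, consuming the list as-is yields the
entries newest-first. A small userspace sketch (push() here is a hypothetical
stand-in for llist_add(), minus the atomics) shows the order the handler now
sees without the reversal pass:

```c
#include <stddef.h>

struct node {
	int id;
	struct node *next;
};

/*
 * Hypothetical stand-in for llist_add(): new entries go at the
 * head, just like the kernel's lockless list (atomics omitted).
 */
static void push(struct node **head, struct node *n)
{
	n->next = *head;
	*head = n;
}
```

Pushing callbacks 1, 2, 3 and then walking from the head visits 3, 2, 1:
every entry is still invoked exactly once, just newest-first, which is all
that requirements (a) and (b) above demand.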

But for debugging purposes, reverse the list in the WARN case, so that the
pending callbacks are printed in the original (FIFO) order.

Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@xxxxxxxxxxxxxxxxxx>
---

kernel/smp.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index 5295388..be55094 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -229,7 +229,6 @@ static void flush_smp_call_function_queue(bool warn_cpu_offline)

head = &__get_cpu_var(call_single_queue);
entry = llist_del_all(head);
- entry = llist_reverse_order(entry);

/* There shouldn't be any pending callbacks on an offline CPU. */
if (unlikely(warn_cpu_offline && !cpu_online(smp_processor_id()) &&
@@ -237,6 +236,8 @@ static void flush_smp_call_function_queue(bool warn_cpu_offline)
warned = true;
WARN(1, "IPI on offline CPU %d\n", smp_processor_id());

+ entry = llist_reverse_order(entry);
+
/*
* We don't have to use the _safe() variant here
* because we are not invoking the IPI handlers yet.
