[RFC PATCH 1/2] PCI: correctly flush workqueue when destroying PCIe hotplug controller

From: Jiang Liu
Date: Sat Apr 07 2012 - 14:22:08 EST


When destroying a PCIe hotplug controller, all work items associated with
that controller should be flushed. pcie_cleanup_slot() calls
cancel_delayed_work() and flush_workqueue() to achieve that.
However, flush_workqueue() only waits for work items that were already
queued when it was called; work items queued by those handlers while the
flush is in progress may still be pending or running when
flush_workqueue() returns.

In the worst case, the pciehp driver may go through the following call path:
1) pcie_isr() -> pciehp_handle_xxx() -> queue_interrupt_event() -> queue_work()
2) interrupt_event_handler() -> handle_button_press_event() ->
   queue_delayed_work()
3) pciehp_queue_pushbutton_work() -> queue_work()

So enhance pcie_cleanup_slot() to correctly flush the workqueue when
destroying a PCIe hotplug controller.
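
The pattern is illustrated by the minimal sketch below. All names in it
(demo_wq, event_fn, button_fn, pushbutton_fn, demo_cleanup) are hypothetical
stand-ins for pciehp_wq and the pciehp handlers, not code from the driver;
it only shows why teardown needs a flush both before and after the
synchronous cancel when work items queue further work items:

#include <linux/workqueue.h>
#include <linux/jiffies.h>
#include <linux/init.h>
#include <linux/errno.h>

/* Hypothetical analogues of pciehp_wq and the pciehp work items. */
static struct workqueue_struct *demo_wq;
static struct work_struct event_work;		/* stage 1 */
static struct delayed_work button_work;		/* stage 2 */
static struct work_struct pushbutton_work;	/* stage 3 */

static void pushbutton_fn(struct work_struct *work)
{
	/* final stage, e.g. power the slot on or off */
}

static void button_fn(struct work_struct *work)
{
	/* stage 2 handler queues yet another item on the same workqueue */
	queue_work(demo_wq, &pushbutton_work);
}

static void event_fn(struct work_struct *work)
{
	/* stage 1 handler queues a delayed work item */
	queue_delayed_work(demo_wq, &button_work, 5 * HZ);
}

static int __init demo_init(void)
{
	demo_wq = alloc_workqueue("demo_wq", 0, 0);
	if (!demo_wq)
		return -ENOMEM;
	INIT_WORK(&event_work, event_fn);
	INIT_DELAYED_WORK(&button_work, button_fn);
	INIT_WORK(&pushbutton_work, pushbutton_fn);
	return 0;
}

static void demo_cleanup(void)
{
	/*
	 * A single flush_workqueue() only waits for items queued before
	 * the call, so pushbutton_work could still be queued afterwards.
	 */
	flush_workqueue(demo_wq);		/* drain stage 1 (and any stage 3 already queued) */
	cancel_delayed_work_sync(&button_work);	/* stop stage 2, or wait for a running instance */
	flush_workqueue(demo_wq);		/* drain whatever stage 2 queued */
	destroy_workqueue(demo_wq);
}

Using cancel_delayed_work_sync() instead of cancel_delayed_work() also
guarantees that a button_fn instance which is already running has finished,
and has therefore queued whatever it is going to queue, before the second
flush_workqueue() runs.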

Signed-off-by: Jiang Liu <jiang.liu@xxxxxxxxxx>
---
drivers/pci/hotplug/pciehp_hpc.c | 13 ++++++++++++-
1 files changed, 12 insertions(+), 1 deletions(-)

diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
index d4fa705..9a48a51 100644
--- a/drivers/pci/hotplug/pciehp_hpc.c
+++ b/drivers/pci/hotplug/pciehp_hpc.c
@@ -890,8 +890,19 @@ static int pcie_init_slot(struct controller *ctrl)
 static void pcie_cleanup_slot(struct controller *ctrl)
 {
 	struct slot *slot = ctrl->slot;
-	cancel_delayed_work(&slot->work);
+
+	/*
+	 * The flush sequence below deals with this special call path:
+	 * 1) pcie_isr() -> pciehp_handle_xxx() ->
+	 *    queue_interrupt_event(pciehp_wq_event) -> queue_work(pciehp_wq)
+	 * 2) interrupt_event_handler() -> handle_button_press_event() ->
+	 *    queue_delayed_work(pciehp_wq)
+	 * 3) pciehp_queue_pushbutton_work() -> queue_work(pciehp_wq)
+	 */
 	flush_workqueue(pciehp_wq);
+	cancel_delayed_work_sync(&slot->work);
+	flush_workqueue(pciehp_wq);
+
 	kfree(slot);
 }

--
1.7.5.4
