Re: [PATCH v3] driver core: Cancel scheduled pm_runtime_idle() on device removal

From: Rafael J. Wysocki
Date: Mon Mar 04 2024 - 09:39:18 EST


On Thu, Feb 29, 2024 at 7:23 AM Kai-Heng Feng
<kai.heng.feng@xxxxxxxxxxxxx> wrote:
>
> When inserting an SD7.0 card into a Realtek card reader, the card reader
> unplugs itself and morphs into an NVMe device. The slot Link Down event
> on hot unplug can cause the following error:
>
> pcieport 0000:00:1c.0: pciehp: Slot(8): Link Down
> BUG: unable to handle page fault for address: ffffb24d403e5010
> PGD 100000067 P4D 100000067 PUD 1001fe067 PMD 100d97067 PTE 0
> Oops: 0000 [#1] PREEMPT SMP PTI
> CPU: 3 PID: 534 Comm: kworker/3:10 Not tainted 6.4.0 #6
> Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./H370M Pro4, BIOS P3.40 10/25/2018
> Workqueue: pm pm_runtime_work
> RIP: 0010:ioread32+0x2e/0x70
> Code: ff 03 00 77 25 48 81 ff 00 00 01 00 77 14 8b 15 08 d9 54 01 b8 ff ff ff ff 85 d2 75 14 c3 cc cc cc cc 89 fa ed c3 cc cc cc cc <8b> 07 c3 cc cc cc cc 55 83 ea 01 48 89 fe 48 c7 c7 98 6f 15 99 48
> RSP: 0018:ffffb24d40a5bd78 EFLAGS: 00010296
> RAX: ffffb24d403e5000 RBX: 0000000000000152 RCX: 000000000000007f
> RDX: 000000000000ff00 RSI: ffffb24d403e5010 RDI: ffffb24d403e5010
> RBP: ffffb24d40a5bd98 R08: ffffb24d403e5010 R09: 0000000000000000
> R10: ffff9074cd95e7f4 R11: 0000000000000003 R12: 000000000000007f
> R13: ffff9074e1a68c00 R14: ffff9074e1a68d00 R15: 0000000000009003
> FS: 0000000000000000(0000) GS:ffff90752a180000(0000) knlGS:0000000000000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: ffffb24d403e5010 CR3: 0000000152832006 CR4: 00000000003706e0
> Call Trace:
> <TASK>
> ? show_regs+0x68/0x70
> ? __die_body+0x20/0x70
> ? __die+0x2b/0x40
> ? page_fault_oops+0x160/0x480
> ? search_bpf_extables+0x63/0x90
> ? ioread32+0x2e/0x70
> ? search_exception_tables+0x5f/0x70
> ? kernelmode_fixup_or_oops+0xa2/0x120
> ? __bad_area_nosemaphore+0x179/0x230
> ? bad_area_nosemaphore+0x16/0x20
> ? do_kern_addr_fault+0x8b/0xa0
> ? exc_page_fault+0xe5/0x180
> ? asm_exc_page_fault+0x27/0x30
> ? ioread32+0x2e/0x70
> ? rtsx_pci_write_register+0x5b/0x90 [rtsx_pci]
> rtsx_set_l1off_sub+0x1c/0x30 [rtsx_pci]
> rts5261_set_l1off_cfg_sub_d0+0x36/0x40 [rtsx_pci]
> rtsx_pci_runtime_idle+0xc7/0x160 [rtsx_pci]
> ? __pfx_pci_pm_runtime_idle+0x10/0x10
> pci_pm_runtime_idle+0x34/0x70
> rpm_idle+0xc4/0x2b0
> pm_runtime_work+0x93/0xc0
> process_one_work+0x21a/0x430
> worker_thread+0x4a/0x3c0
> ? __pfx_worker_thread+0x10/0x10
> kthread+0x106/0x140
> ? __pfx_kthread+0x10/0x10
> ret_from_fork+0x29/0x50
> </TASK>
> Modules linked in: nvme nvme_core snd_hda_codec_hdmi snd_sof_pci_intel_cnl snd_sof_intel_hda_common snd_hda_codec_realtek snd_hda_codec_generic snd_soc_hdac_hda soundwire_intel ledtrig_audio nls_iso8859_1 soundwire_generic_allocation soundwire_cadence snd_sof_intel_hda_mlink snd_sof_intel_hda snd_sof_pci snd_sof_xtensa_dsp snd_sof snd_sof_utils snd_hda_ext_core snd_soc_acpi_intel_match snd_soc_acpi soundwire_bus snd_soc_core snd_compress ac97_bus snd_pcm_dmaengine snd_hda_intel i915 snd_intel_dspcfg snd_intel_sdw_acpi intel_rapl_msr snd_hda_codec intel_rapl_common snd_hda_core x86_pkg_temp_thermal intel_powerclamp snd_hwdep coretemp snd_pcm kvm_intel drm_buddy ttm mei_hdcp kvm drm_display_helper snd_seq_midi snd_seq_midi_event cec crct10dif_pclmul ghash_clmulni_intel sha512_ssse3 aesni_intel crypto_simd rc_core cryptd rapl snd_rawmidi drm_kms_helper binfmt_misc intel_cstate i2c_algo_bit joydev snd_seq snd_seq_device syscopyarea wmi_bmof snd_timer sysfillrect input_leds snd ee1004 sysimgblt mei_me soundcore
> mei intel_pch_thermal mac_hid acpi_tad acpi_pad sch_fq_codel msr parport_pc ppdev lp ramoops drm parport reed_solomon efi_pstore ip_tables x_tables autofs4 hid_generic usbhid hid rtsx_pci_sdmmc crc32_pclmul ahci e1000e i2c_i801 i2c_smbus rtsx_pci xhci_pci libahci xhci_pci_renesas video wmi
> CR2: ffffb24d403e5010
> ---[ end trace 0000000000000000 ]---
>
> This happens because the scheduled pm_runtime_idle() work is not cancelled.

But rpm_resume() changes dev->power.request to RPM_REQ_NONE, and if
pm_runtime_work() sees that, it will not run rpm_idle().

However, rpm_resume() doesn't deactivate the autosuspend timer if it
is running (see the comment in rpm_resume() regarding this), so it may
queue up a runtime PM work item later.

If this is not desirable, you need to stop the autosuspend timer
explicitly in addition to calling pm_runtime_get_sync().
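
For illustration, the driver's remove path could do that along these
lines (a sketch only; foo_remove() is a placeholder, and where exactly
this belongs in rtsx_pci is up to the driver):

	static void foo_remove(struct pci_dev *pdev)
	{
		struct device *dev = &pdev->dev;

		/* Resume the device and hold a usage count ... */
		pm_runtime_get_sync(dev);
		/*
		 * ... and also stop the autosuspend timer, which
		 * pm_runtime_get_sync() alone does not deactivate.
		 */
		pm_runtime_dont_use_autosuspend(dev);

		/* device teardown goes here */
	}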

> So before releasing the device, stop all runtime PM activity by
> using pm_runtime_barrier() to fix the issue.
>
> Link: https://lore.kernel.org/all/2ce258f371234b1f8a1a470d5488d00e@xxxxxxxxxxx/
> Cc: Ricky Wu <ricky_wu@xxxxxxxxxxx>
> Signed-off-by: Kai-Heng Feng <kai.heng.feng@xxxxxxxxxxxxx>
> ---
> v3:
> Move the change to the driver core.
>
> v2:
> Cover more cases than just pciehp.
>
> drivers/base/dd.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/drivers/base/dd.c b/drivers/base/dd.c
> index 85152537dbf1..38c815e2b3a2 100644
> --- a/drivers/base/dd.c
> +++ b/drivers/base/dd.c
> @@ -1244,6 +1244,7 @@ static void __device_release_driver(struct device *dev, struct device *parent)
>
> drv = dev->driver;
> if (drv) {
> + pm_runtime_barrier(dev);

This prevents the crash from occurring because pm_runtime_barrier()
calls pm_runtime_deactivate_timer() unconditionally AFAICS.
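
For reference, the relevant path looks roughly like this (paraphrased
from drivers/base/power/runtime.c, so treat it as a sketch rather than
exact source):

	/*
	 * pm_runtime_barrier() takes a usage count, runs a pending
	 * RPM_REQ_RESUME request if there is one, and then calls
	 * __pm_runtime_barrier(), which does (among other things):
	 */
	static void __pm_runtime_barrier(struct device *dev)
	{
		pm_runtime_deactivate_timer(dev);	/* stops the autosuspend timer */

		if (dev->power.request_pending) {
			dev->power.request = RPM_REQ_NONE;
			spin_unlock_irq(&dev->power.lock);

			cancel_work_sync(&dev->power.work);	/* flushes a queued rpm_idle() */

			spin_lock_irq(&dev->power.lock);
			dev->power.request_pending = false;
		}
		/* it also waits for a runtime status change in progress (omitted) */
	}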

>
> while (device_links_busy(dev)) {
> --

Overall, the issue appears to be in the driver, which forgets to
deactivate the autosuspend timer in its remove callback.