"Convert the AMD iommu driver to the dma-iommu api" is buggy

From: Qian Cai
Date: Wed Oct 16 2019 - 10:55:14 EST


Today's linux-next generates a lot of warnings on multiple servers during boot
due to the series "iommu/amd: Convert the AMD iommu driver to the dma-iommu api"
[1]. Reverting the whole series fixed them.

[1] https://lore.kernel.org/lkml/20190908165642.22253-1-murphyt7@xxxxxx/
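
Both splats below come down to the same pattern: something that may sleep is
reached while the caller is atomic. In the first trace, amd_iommu_map() does a
sleeping get_zeroed_page() allocation for a page table from the atomic
scsi_queue_rq() path; in the second, the might_sleep() check in iommu_map()
appears to fire while iommu_dma_prepare_msi() holds the MSI cookie spinlock.
A minimal sketch of that pattern (demo_lock and demo_alloc_in_atomic() are
made-up names for illustration, not code from the series):

/*
 * Minimal sketch of the pattern both traces flag: a sleeping allocation
 * performed while the caller holds a spinlock, i.e. while in_atomic()
 * is true.
 */
#include <linux/gfp.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);              /* hypothetical lock */

static unsigned long demo_alloc_in_atomic(void)
{
        unsigned long flags, page;

        spin_lock_irqsave(&demo_lock, flags);   /* enter atomic context */
        /*
         * GFP_KERNEL may sleep, so __might_sleep() fires here just as it
         * does for the get_zeroed_page() call inside amd_iommu_map() in
         * the first trace below.  An atomic caller needs GFP_ATOMIC, or
         * the allocation has to move outside the lock.
         */
        page = get_zeroed_page(GFP_KERNEL);
        spin_unlock_irqrestore(&demo_lock, flags);

        return page;
}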

[  257.785749][ T6184] BUG: sleeping function called from invalid context at mm/page_alloc.c:4692
[  257.794886][ T6184] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 6184, name: readelf
[  257.803574][ T6184] INFO: lockdep is turned off.
[  257.808233][ T6184] CPU: 86 PID: 6184 Comm: readelf Tainted: G        W         5.4.0-rc3-next-20191016+ #7
[  257.818035][ T6184] Hardware name: HPE ProLiant DL385 Gen10/ProLiant DL385 Gen10, BIOS A40 07/10/2019
[  257.827313][ T6184] Call Trace:
[  257.830487][ T6184]  dump_stack+0x86/0xca
[  257.834530][ T6184]  ___might_sleep.cold.92+0xd2/0x122
[  257.839708][ T6184]  __might_sleep+0x73/0xe0
[  257.844011][ T6184]  __alloc_pages_nodemask+0x442/0x720
[  257.849274][ T6184]  ? __alloc_pages_slowpath+0x18d0/0x18d0
[  257.854886][ T6184]  ? debug_lockdep_rcu_enabled+0x27/0x60
[  257.860415][ T6184]  ? lock_downgrade+0x3c0/0x3c0
[  257.865156][ T6184]  alloc_pages_current+0x9c/0x110
[  257.870071][ T6184]  __get_free_pages+0x12/0x60
[  257.874634][ T6184]  get_zeroed_page+0x16/0x20
[  257.879112][ T6184]  amd_iommu_map+0x504/0x850
[  257.883588][ T6184]  ? amd_iommu_domain_direct_map+0x60/0x60
[  257.889286][ T6184]  ? lockdep_hardirqs_on+0x16/0x2a0
[  257.894373][ T6184]  ? alloc_iova+0x189/0x210
[  257.898765][ T6184]  ? trace_hardirqs_on+0x3a/0x160
[  257.903677][ T6184]  iommu_map+0x1b3/0x4d0
[  257.907802][ T6184]  ? iommu_unmap+0xf0/0xf0
[  257.912104][ T6184]  ? alloc_iova_fast+0x258/0x3d1
[  257.916929][ T6184]  ? create_object+0x4a2/0x540
[  257.921579][ T6184]  iommu_map_sg+0x9d/0x120
[  257.925882][ T6184]  iommu_dma_map_sg+0x2c3/0x450
[  257.930627][ T6184]  scsi_dma_map+0xd7/0x160
[  257.934936][ T6184]  pqi_raid_submit_scsi_cmd_with_io_request+0x392/0x420 [smartpqi]
[  257.942735][ T6184]  ? pqi_alloc_io_request+0x127/0x140 [smartpqi]
[  257.948962][ T6184]  pqi_scsi_queue_command+0x8ab/0xe00 [smartpqi]
[  257.955192][ T6184]  ? pqi_eh_device_reset_handler+0x9c0/0x9c0 [smartpqi]
[  257.962029][ T6184]  ? sd_init_command+0xa25/0x1346 [sd_mod]
[  257.967730][ T6184]  scsi_queue_rq+0xd19/0x1360
[  257.972298][ T6184]  __blk_mq_try_issue_directly+0x295/0x3f0
[  257.977999][ T6184]  ? blk_mq_request_bypass_insert+0xd0/0xd0
[  257.983787][ T6184]  ? debug_lockdep_rcu_enabled+0x27/0x60
[  257.989312][ T6184]  blk_mq_request_issue_directly+0xb5/0x100
[  257.995098][ T6184]  ? blk_mq_flush_plug_list+0x7e0/0x7e0
[  258.000537][ T6184]  ? blk_mq_sched_insert_requests+0xd6/0x380
[  258.006409][ T6184]  ? lock_downgrade+0x3c0/0x3c0
[  258.011147][ T6184]  blk_mq_try_issue_list_directly+0xa9/0x160
[  258.017023][ T6184]  blk_mq_sched_insert_requests+0x228/0x380
[  258.022810][ T6184]  blk_mq_flush_plug_list+0x448/0x7e0
[  258.028073][ T6184]  ? blk_mq_insert_requests+0x380/0x380
[  258.033516][ T6184]  blk_flush_plug_list+0x1eb/0x230
[  258.038515][ T6184]  ? blk_insert_cloned_request+0x1b0/0x1b0
[  258.044215][ T6184]  blk_finish_plug+0x43/0x5d
[  258.048695][ T6184]  read_pages+0xf6/0x3b0
[  258.052823][ T6184]  ? read_cache_pages+0x350/0x350
[  258.057737][ T6184]  ? __page_cache_alloc+0x12c/0x230
[  258.062826][ T6184]  __do_page_cache_readahead+0x346/0x3a0
[  258.068348][ T6184]  ? read_pages+0x3b0/0x3b0
[  258.072738][ T6184]  ? lockdep_hardirqs_on+0x16/0x2a0
[  258.077928][ T6184]  ? __xfs_filemap_fault+0x167/0x4a0 [xfs]
[  258.083625][ T6184]  filemap_fault+0xa13/0xe70
[  258.088201][ T6184]  __xfs_filemap_fault+0x167/0x4a0 [xfs]
[  258.093731][ T6184]  ? kmemleak_alloc+0x57/0x90
[  258.098397][ T6184]  ? xfs_file_read_iter+0x3c0/0x3c0 [xfs]
[  258.104009][ T6184]  ? debug_check_no_locks_freed+0x2c/0xe0
[  258.109618][ T6184]  ? lockdep_init_map+0x8b/0x2b0
[  258.114543][ T6184]  xfs_filemap_fault+0x68/0x70 [xfs]
[  258.119720][ T6184]  __do_fault+0x83/0x220
[  258.123847][ T6184]  __handle_mm_fault+0xd76/0x1f40
[  258.128757][ T6184]  ? __pmd_alloc+0x280/0x280
[  258.133231][ T6184]  ? debug_lockdep_rcu_enabled+0x27/0x60
[  258.138755][ T6184]  ? handle_mm_fault+0x178/0x4c0
[  258.143581][ T6184]  ? lockdep_hardirqs_on+0x16/0x2a0
[  258.148674][ T6184]  ? __do_page_fault+0x29c/0x640
[  258.153501][ T6184]  handle_mm_fault+0x205/0x4c0
[  258.158151][ T6184]  __do_page_fault+0x29c/0x640
[  258.162800][ T6184]  do_page_fault+0x50/0x37f
[  258.167189][ T6184]  page_fault+0x34/0x40
[  258.171228][ T6184] RIP: 0010:__clear_user+0x3b/0x70

[  183.553150] BUG: sleeping function called from invalid context at drivers/iommu/iommu.c:1950
[  183.562306] in_atomic(): 1, irqs_disabled(): 128, non_block: 0, pid: 1486, name: kworker/0:3
[  183.571450] 5 locks held by kworker/0:3/1486:
[  183.576510]  #0: 44ff0008000ce128 ((wq_completion)events){+.+.}, at: process_one_work+0x25c/0x948
[  183.586110]  #1: 43ff00081fb2fcf8 ((work_completion)(&wfc.work)){+.+.}, at: process_one_work+0x280/0x948
[  183.596310]  #2: ffff000a2c661a08 (&dev->intf_state_mutex){+.+.}, at: mlx5_load_one+0x68/0x12e0 [mlx5_core]
[  183.606916]  #3: ffff9000127e4560 (irq_domain_mutex){+.+.}, at: __irq_domain_alloc_irqs+0x1f8/0x430
[  183.616683]  #4: 02ff0095ca0ed8f0 (&(&cookie->msi_lock)->rlock){....}, at: iommu_dma_prepare_msi+0x70/0x210
[  183.627146] irq event stamp: 378872
[  183.631345] hardirqs last  enabled at (378871): [<ffff9000109d0230>] _raw_write_unlock_irqrestore+0x4c/0x84
[  183.641791] hardirqs last disabled at (378872): [<ffff9000109cf7a0>] _raw_spin_lock_irqsave+0x38/0x9c
[  183.651717] softirqs last  enabled at (377854): [<ffff9000100824f4>] __do_softirq+0x864/0x900
[  183.660951] softirqs last disabled at (377841): [<ffff900010118768>] irq_exit+0x1c8/0x238
[  183.669836] CPU: 0 PID: 1486 Comm: kworker/0:3 Tainted: G        W    L    5.4.0-rc3-next-20191016+ #8
[  183.679845] Hardware name: HPE Apollo 70             /C01_APACHE_MB         , BIOS L50_5.13_1.11 06/18/2019
[  183.690292] Workqueue: events work_for_cpu_fn
[  183.695357] Call trace:
[  183.698510]  dump_backtrace+0x0/0x248
[  183.702878]  show_stack+0x20/0x2c
[  183.706900]  dump_stack+0xc8/0x130
[  183.711009]  ___might_sleep+0x314/0x328
[  183.715551]  __might_sleep+0x7c/0xe0
[  183.719832]  iommu_map+0x40/0x70
[  183.723766]  iommu_dma_prepare_msi+0x16c/0x210
[  183.728916]  its_irq_domain_alloc+0x100/0x254
[  183.733979]  irq_domain_alloc_irqs_parent+0x74/0x90
[  183.739562]  msi_domain_alloc+0xa0/0x170
[  183.744190]  __irq_domain_alloc_irqs+0x228/0x430
[  183.749512]  msi_domain_alloc_irqs+0x130/0x548
[  183.754663]  pci_msi_setup_msi_irqs+0x64/0x74
[  183.759726]  __pci_enable_msix_range+0x52c/0x878
[  183.765049]  pci_alloc_irq_vectors_affinity+0x94/0x168
[  183.771028]  mlx5_irq_table_create+0x178/0x748 [mlx5_core]
[  183.777353]  mlx5_load_one+0x710/0x12e0 [mlx5_core]
[  183.783069]  init_one+0x514/0x898 [mlx5_core]
[  183.788134]  local_pci_probe+0x74/0xcc
[  183.792589]  work_for_cpu_fn+0x30/0x4c
[  183.797045]  process_one_work+0x4f4/0x948
[  183.801760]  process_scheduled_works+0x34/0x54
[  183.806909]  worker_thread+0x348/0x4bc
[  183.811364]  kthread+0x1cc/0x1e8
[  183.815299]  ret_from_fork+0x10/0x18
[  184.621631] mlx5_core 0000:0b:00.1: Port module event: module 1, Cable unplugged
[  184.867367] mlx5_core 0000:0b:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
[  186.181802] mlx5_core 0000:0b:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)