Re: [PATCH v2] mm/page_isolation: fix a deadlock with printk()

From: Qian Cai
Date: Mon Oct 07 2019 - 10:59:17 EST


On Mon, 2019-10-07 at 16:30 +0200, Petr Mladek wrote:
> On Fri 2019-10-04 18:26:45, Qian Cai wrote:
> > It is unsafe to call printk() while zone->lock is held, i.e.,
> >
> > zone->lock --> console_lock
> >
> > because the console could always allocate some memory in various code
> > paths and thus form a locking chain in the opposite order,
> >
> > console_lock --> * --> zone->lock
> >
> > As a result, it triggers lockdep splats like the one below, as well as
> > in other code paths in this thread [1]. has_unmovable_pages() is only
> > used by set_migratetype_isolate() and is_pageblock_removable_nolock(),
> > and only the former sets the REPORT_FAILURE flag that leads to the
> > printk(). Hence, unlock zone->lock just before the dump_page() call
> > there; once has_unmovable_pages() returns true, there is no need to
> > hold the lock in the rest of set_migratetype_isolate() anyway.
> >
> > While at it, also remove a problematic debugging-only printk() from
> > __offline_isolated_pages(); it triggers the same splat and therefore
> > always ends up disabling lockdep on debug kernels.
> >
> > The problem has probably existed forever, but few developers run memory
> > offline with lockdep enabled, and admins in the field have so far been
> > lucky enough not to hit the precise timing required to trigger a real
> > deadlock. In addition, there aren't many places that call printk()
> > while zone->lock is held.
> >
> > WARNING: possible circular locking dependency detected
> > ------------------------------------------------------
> > test.sh/1724 is trying to acquire lock:
> > 0000000052059ec0 (console_owner){-...}, at: console_unlock+0x328/0xa30
> >
> > but task is already holding lock:
> > 000000006ffd89c8 (&(&zone->lock)->rlock){-.-.}, at: start_isolate_page_range+0x216/0x538
> >
> > which lock already depends on the new lock.
> >
> > the existing dependency chain (in reverse order) is:
> >
> > -> #2 (&(&zone->lock)->rlock){-.-.}:
> > lock_acquire+0x21a/0x468
> > _raw_spin_lock+0x54/0x68
> > get_page_from_freelist+0x8b6/0x2d28
> > __alloc_pages_nodemask+0x246/0x658
> > __get_free_pages+0x34/0x78
> > sclp_init+0x106/0x690
> > sclp_register+0x2e/0x248
> > sclp_rw_init+0x4a/0x70
> > sclp_console_init+0x4a/0x1b8
> > console_init+0x2c8/0x410
> > start_kernel+0x530/0x6a0
> > startup_continue+0x70/0xd0
>
> This code takes locks: sclp_lock --> &(&zone->lock)->rlock
>
> > -> #1 (sclp_lock){-.-.}:
> > lock_acquire+0x21a/0x468
> > _raw_spin_lock_irqsave+0xcc/0xe8
> > sclp_add_request+0x34/0x308
> > sclp_conbuf_emit+0x100/0x138
> > sclp_console_write+0x96/0x3b8
> > console_unlock+0x6dc/0xa30
> > vprintk_emit+0x184/0x3c8
> > vprintk_default+0x44/0x50
> > printk+0xa8/0xc0
> > iommu_debugfs_setup+0xf2/0x108
> > iommu_init+0x6c/0x78
> > do_one_initcall+0x162/0x680
> > kernel_init_freeable+0x4e8/0x5a8
> > kernel_init+0x2a/0x188
> > ret_from_fork+0x30/0x34
> > kernel_thread_starter+0x0/0xc
>
> This code path takes: console_owner --> sclp_lock
>
> > -> #0 (console_owner){-...}:
> > check_noncircular+0x338/0x3e0
> > __lock_acquire+0x1e66/0x2d88
> > lock_acquire+0x21a/0x468
> > console_unlock+0x3a6/0xa30
> > vprintk_emit+0x184/0x3c8
> > vprintk_default+0x44/0x50
> > printk+0xa8/0xc0
> > __dump_page+0x1dc/0x710
> > dump_page+0x2e/0x58
> > has_unmovable_pages+0x2e8/0x470
> > start_isolate_page_range+0x404/0x538
> > __offline_pages+0x22c/0x1338
> > memory_subsys_offline+0xa6/0xe8
> > device_offline+0xe6/0x118
> > state_store+0xf0/0x110
> > kernfs_fop_write+0x1bc/0x270
> > vfs_write+0xce/0x220
> > ksys_write+0xea/0x190
> > system_call+0xd8/0x2b4
>
> And this code path takes: &(&zone->lock)->rlock --> console_owner
>
> > other info that might help us debug this:
> >
> > Chain exists of:
> > console_owner --> sclp_lock --> &(&zone->lock)->rlock
>
> All three code paths together create a cyclic dependency. This
> is why lockdep reports a possible deadlock.
>
> I believe that it cannot really happen because:
>
> static int __init
> sclp_console_init(void)
> {
> [...]
> rc = sclp_rw_init();
> [...]
> register_console(&sclp_console);
> return 0;
> }
>
> sclp_rw_init() is called before register_console(). And
> console_unlock() will never call sclp_console_write() before
> the console is registered.

It could be really hard to tell for sure unless someone fully audits every
place in the code that could do,

console_owner_lock --> sclp_lock

Lockdep saves only the earliest trace recorded after it first sees a lock
order, so those early-boot traces will always show up in the report.

>
> AFAIK, lockdep only compares existing chain of locks. It does
> not know about console registration that would make some
> code paths mutually exclusive.

Yes.

>
> I believe that it is a false positive. I do not know how to
> avoid this lockdep report. I hope that it will disappear
> by deferring all printk() calls rather soon.

However, the similar splat below is for console_owner_lock --> port_lock. I
have even seen another one before, a 4-way lockdep splat (which was shot down
separately),

https://lore.kernel.org/lkml/1568817579.5576.172.camel@xxxxxx/

console_sem --> pi_lock --> rq_lock --> zone_lock
zone_lock --> console_sem

It is almost impossible to eliminate all the indirect call chains from
console_sem/console_owner_lock to zone->lock, because it is entirely normal
for something down the line to allocate memory dynamically. So any code that
calls printk() directly while holding zone->lock is asking for trouble.
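
To make the lock juggling concrete, here is a rough sketch of the unsafe
shape and of the workaround the patch takes in set_migratetype_isolate()
(argument lists elided and local names illustrative; this is not the literal
diff):

	/* Deadlock-prone: dump_page() ends up in printk() under zone->lock. */
	spin_lock_irqsave(&zone->lock, flags);
	if (has_unmovable_pages(zone, page, ...))
		dump_page(page, "unmovable");	/* printk() -> console locks */
	spin_unlock_irqrestore(&zone->lock, flags);

	/*
	 * Workaround: drop zone->lock before printing. Once
	 * has_unmovable_pages() fails the isolation, nothing else in
	 * set_migratetype_isolate() needs the lock anyway.
	 */
	spin_lock_irqsave(&zone->lock, flags);
	unmovable = has_unmovable_pages(zone, page, ...);
	if (unmovable) {
		spin_unlock_irqrestore(&zone->lock, flags);
		dump_page(page, "unmovable");	/* no zone->lock held here */
		return -EBUSY;
	}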

I really hope the new printk() will solve this class of problem, but that is
essentially up in the air until a patchset is posted. There are just too many
open questions to be answered.
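
(Today, the closest existing tool is printk_deferred(), which queues the
message and flushes the consoles later from irq_work instead of taking the
console locks at the call site, roughly:

	spin_lock_irqsave(&zone->lock, flags);
	/* ... */
	printk_deferred(KERN_WARNING "page %px is unmovable\n", page);
	spin_unlock_irqrestore(&zone->lock, flags);

but dump_page() and friends do not go through the deferred path, so that
does not help the splats here.)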

[  297.425908] WARNING: possible circular locking dependency detected
[  297.425908] 5.3.0-next-20190917 #8 Not tainted
[  297.425909] ------------------------------------------------------
[  297.425910] test.sh/8653 is trying to acquire lock:
[  297.425911] ffffffff865a4460 (console_owner){-.-.}, at: console_unlock+0x207/0x750

[  297.425914] but task is already holding lock:
[  297.425915] ffff88883fff3c58 (&(&zone->lock)->rlock){-.-.}, at: __offline_isolated_pages+0x179/0x3e0

[  297.425919] which lock already depends on the new lock.


[  297.425920] the existing dependency chain (in reverse order) is:

[  297.425922] -> #3 (&(&zone->lock)->rlock){-.-.}:
[  297.425925]        __lock_acquire+0x5b3/0xb40
[  297.425925]        lock_acquire+0x126/0x280
[  297.425926]        _raw_spin_lock+0x2f/0x40
[  297.425927]        rmqueue_bulk.constprop.21+0xb6/0x1160
[  297.425928]        get_page_from_freelist+0x898/0x22c0
[  297.425928]        __alloc_pages_nodemask+0x2f3/0x1cd0
[  297.425929]        alloc_pages_current+0x9c/0x110
[  297.425930]        allocate_slab+0x4c6/0x19c0
[  297.425931]        new_slab+0x46/0x70
[  297.425931]        ___slab_alloc+0x58b/0x960
[  297.425932]        __slab_alloc+0x43/0x70
[  297.425933]        __kmalloc+0x3ad/0x4b0
[  297.425933]        __tty_buffer_request_room+0x100/0x250
[  297.425934]        tty_insert_flip_string_fixed_flag+0x67/0x110
[  297.425935]        pty_write+0xa2/0xf0
[  297.425936]        n_tty_write+0x36b/0x7b0
[  297.425936]        tty_write+0x284/0x4c0
[  297.425937]        __vfs_write+0x50/0xa0
[  297.425938]        vfs_write+0x105/0x290
[  297.425939]        redirected_tty_write+0x6a/0xc0
[  297.425939]        do_iter_write+0x248/0x2a0
[  297.425940]        vfs_writev+0x106/0x1e0
[  297.425941]        do_writev+0xd4/0x180
[  297.425941]        __x64_sys_writev+0x45/0x50
[  297.425942]        do_syscall_64+0xcc/0x76c
[  297.425943]        entry_SYSCALL_64_after_hwframe+0x49/0xbe

[  297.425944] -> #2 (&(&port->lock)->rlock){-.-.}:
[  297.425946]        __lock_acquire+0x5b3/0xb40
[  297.425947]        lock_acquire+0x126/0x280
[  297.425948]        _raw_spin_lock_irqsave+0x3a/0x50
[  297.425949]        tty_port_tty_get+0x20/0x60
[  297.425949]        tty_port_default_wakeup+0xf/0x30
[  297.425950]        tty_port_tty_wakeup+0x39/0x40
[  297.425951]        uart_write_wakeup+0x2a/0x40
[  297.425952]        serial8250_tx_chars+0x22e/0x440
[  297.425952]        serial8250_handle_irq.part.8+0x14a/0x170
[  297.425953]        serial8250_default_handle_irq+0x5c/0x90
[  297.425954]        serial8250_interrupt+0xa6/0x130
[  297.425955]        __handle_irq_event_percpu+0x78/0x4f0
[  297.425955]        handle_irq_event_percpu+0x70/0x100
[  297.425956]        handle_irq_event+0x5a/0x8b
[  297.425957]        handle_edge_irq+0x117/0x370
[  297.425958]        do_IRQ+0x9e/0x1e0
[  297.425958]        ret_from_intr+0x0/0x2a
[  297.425959]        cpuidle_enter_state+0x156/0x8e0
[  297.425960]        cpuidle_enter+0x41/0x70
[  297.425960]        call_cpuidle+0x5e/0x90
[  297.425961]        do_idle+0x333/0x370
[  297.425962]        cpu_startup_entry+0x1d/0x1f
[  297.425962]        start_secondary+0x290/0x330
[  297.425963]        secondary_startup_64+0xb6/0xc0

[  297.425964] -> #1 (&port_lock_key){-.-.}:
[  297.425967]        __lock_acquire+0x5b3/0xb40
[  297.425967]        lock_acquire+0x126/0x280
[  297.425968]        _raw_spin_lock_irqsave+0x3a/0x50
[  297.425969]        serial8250_console_write+0x3e4/0x450
[  297.425970]        univ8250_console_write+0x4b/0x60
[  297.425970]        console_unlock+0x501/0x750
[  297.425971]        vprintk_emit+0x10d/0x340
[  297.425972]        vprintk_default+0x1f/0x30
[  297.425972]        vprintk_func+0x44/0xd4
[  297.425973]        printk+0x9f/0xc5
[  297.425974]        register_console+0x39c/0x520
[  297.425975]        univ8250_console_init+0x23/0x2d
[  297.425975]        console_init+0x338/0x4cd
[  297.425976]        start_kernel+0x534/0x724
[  297.425977]        x86_64_start_reservations+0x24/0x26
[  297.425977]        x86_64_start_kernel+0xf4/0xfb
[  297.425978]        secondary_startup_64+0xb6/0xc0

[  297.425979] -> #0 (console_owner){-.-.}:
[  297.425982]        check_prev_add+0x107/0xea0
[  297.425982]        validate_chain+0x8fc/0x1200
[  297.425983]        __lock_acquire+0x5b3/0xb40
[  297.425984]        lock_acquire+0x126/0x280
[  297.425984]        console_unlock+0x269/0x750
[  297.425985]        vprintk_emit+0x10d/0x340
[  297.425986]        vprintk_default+0x1f/0x30
[  297.425987]        vprintk_func+0x44/0xd4
[  297.425987]        printk+0x9f/0xc5
[  297.425988]        __offline_isolated_pages.cold.52+0x2f/0x30a
[  297.425989]        offline_isolated_pages_cb+0x17/0x30
[  297.425990]        walk_system_ram_range+0xda/0x160
[  297.425990]        __offline_pages+0x79c/0xa10
[  297.425991]        offline_pages+0x11/0x20
[  297.425992]        memory_subsys_offline+0x7e/0xc0
[  297.425992]        device_offline+0xd5/0x110
[  297.425993]        state_store+0xc6/0xe0
[  297.425994]        dev_attr_store+0x3f/0x60
[  297.425995]        sysfs_kf_write+0x89/0xb0
[  297.425995]        kernfs_fop_write+0x188/0x240
[  297.425996]        __vfs_write+0x50/0xa0
[  297.425997]        vfs_write+0x105/0x290
[  297.425997]        ksys_write+0xc6/0x160
[  297.425998]        __x64_sys_write+0x43/0x50
[  297.425999]        do_syscall_64+0xcc/0x76c
[  297.426000]        entry_SYSCALL_64_after_hwframe+0x49/0xbe

[  297.426001] other info that might help us debug this:

[  297.426002] Chain exists of:
[  297.426002]   console_owner --> &(&port->lock)->rlock --> &(&zone->lock)->rlock

[  297.426007]  Possible unsafe locking scenario:

[  297.426008]        CPU0                    CPU1
[  297.426009]        ----                    ----
[  297.426009]   lock(&(&zone->lock)->rlock);
[  297.426011]                                lock(&(&port->lock)->rlock);
[  297.426013]                                lock(&(&zone->lock)->rlock);
[  297.426014]   lock(console_owner);

[  297.426016]  *** DEADLOCK ***

[  297.426017] 9 locks held by test.sh/8653:
[  297.426018]  #0: ffff88839ba7d408 (sb_writers#4){.+.+}, at: vfs_write+0x25f/0x290
[  297.426021]  #1: ffff888277618880 (&of->mutex){+.+.}, at: kernfs_fop_write+0x128/0x240
[  297.426024]  #2: ffff8898131fc218 (kn->count#115){.+.+}, at: kernfs_fop_write+0x138/0x240
[  297.426028]  #3: ffffffff86962a80 (device_hotplug_lock){+.+.}, at: lock_device_hotplug_sysfs+0x16/0x50
[  297.426031]  #4: ffff8884374f4990 (&dev->mutex){....}, at: device_offline+0x70/0x110
[  297.426034]  #5: ffffffff86515250 (cpu_hotplug_lock.rw_sem){++++}, at: __offline_pages+0xbf/0xa10
[  297.426037]  #6: ffffffff867405f0 (mem_hotplug_lock.rw_sem){++++}, at: percpu_down_write+0x87/0x2f0
[  297.426040]  #7: ffff88883fff3c58 (&(&zone->lock)->rlock){-.-.}, at: __offline_isolated_pages+0x179/0x3e0
[  297.426043]  #8: ffffffff865a4920 (console_lock){+.+.}, at: vprintk_emit+0x100/0x340

[  297.426047] stack backtrace:
[  297.426048] CPU: 1 PID: 8653 Comm: test.sh Not tainted 5.3.0-next-20190917 #8
[  297.426049] Hardware name: HPE ProLiant DL560 Gen10/ProLiant DL560 Gen10, BIOS U34 05/21/2019
[  297.426049] Call Trace:
[  297.426050]  dump_stack+0x86/0xca
[  297.426051]  print_circular_bug.cold.31+0x243/0x26e
[  297.426051]  check_noncircular+0x29e/0x2e0
[  297.426052]  ? stack_trace_save+0x87/0xb0
[  297.426053]  ? print_circular_bug+0x120/0x120
[  297.426053]  check_prev_add+0x107/0xea0
[  297.426054]  validate_chain+0x8fc/0x1200
[  297.426055]  ? check_prev_add+0xea0/0xea0
[  297.426055]  __lock_acquire+0x5b3/0xb40
[  297.426056]  lock_acquire+0x126/0x280
[  297.426057]  ? console_unlock+0x207/0x750
[  297.426057]  ? __kasan_check_read+0x11/0x20
[  297.426058]  console_unlock+0x269/0x750
[  297.426059]  ? console_unlock+0x207/0x750
[  297.426059]  vprintk_emit+0x10d/0x340
[  297.426060]  vprintk_default+0x1f/0x30
[  297.426061]  vprintk_func+0x44/0xd4
[  297.426061]  ? do_raw_spin_lock+0x118/0x1d0
[  297.426062]  printk+0x9f/0xc5
[  297.426063]  ? kmsg_dump_rewind_nolock+0x64/0x64
[  297.426064]  ? __offline_isolated_pages+0x179/0x3e0
[  297.426064]  __offline_isolated_pages.cold.52+0x2f/0x30a
[  297.426065]  ? online_memory_block+0x20/0x20
[  297.426066]  offline_isolated_pages_cb+0x17/0x30
[  297.426067]  walk_system_ram_range+0xda/0x160
[  297.426067]  ? walk_mem_res+0x30/0x30
[  297.426068]  ? dissolve_free_huge_page+0x1e/0x2b0
[  297.426069]  __offline_pages+0x79c/0xa10
[  297.426069]  ? __add_memory+0xc0/0xc0
[  297.426070]  ? __kasan_check_write+0x14/0x20
[  297.426071]  ? __mutex_lock+0x344/0xcd0
[  297.426071]  ? _raw_spin_unlock_irqrestore+0x49/0x50
[  297.426072]  ? device_offline+0x70/0x110
[  297.426073]  ? klist_next+0x1c1/0x1e0
[  297.426073]  ? __mutex_add_waiter+0xc0/0xc0
[  297.426074]  ? klist_next+0x10b/0x1e0
[  297.426075]  ? klist_iter_exit+0x16/0x40
[  297.426076]  ? device_for_each_child+0xd0/0x110
[  297.426076]  offline_pages+0x11/0x20
[  297.426077]  memory_subsys_offline+0x7e/0xc0
[  297.426078]  device_offline+0xd5/0x110
[  297.426078]  ? auto_online_blocks_show+0x70/0x70
[  297.426079]  state_store+0xc6/0xe0
[  297.426080]  dev_attr_store+0x3f/0x60
[  297.426080]  ? device_match_name+0x40/0x40
[  297.426081]  sysfs_kf_write+0x89/0xb0
[  297.426082]  ? sysfs_file_ops+0xa0/0xa0
[  297.426082]  kernfs_fop_write+0x188/0x240
[  297.426083]  __vfs_write+0x50/0xa0
[  297.426084]  vfs_write+0x105/0x290
[  297.426084]  ksys_write+0xc6/0x160
[  297.426085]  ? __x64_sys_read+0x50/0x50
[  297.426086]  ? do_syscall_64+0x79/0x76c
[  297.426086]  ? do_syscall_64+0x79/0x76c
[  297.426087]  __x64_sys_write+0x43/0x50
[  297.426088]  do_syscall_64+0xcc/0x76c
[  297.426088]  ? trace_hardirqs_on_thunk+0x1a/0x20
[  297.426089]  ? syscall_return_slowpath+0x210/0x210
[  297.426090]  ? entry_SYSCALL_64_after_hwframe+0x3e/0xbe
[  297.426091]  ? trace_hardirqs_off_caller+0x3a/0x150
[  297.426092]  ? trace_hardirqs_off_thunk+0x1a/0x20
[  297.426092]  entry_SYSCALL_64_after_hwframe+0x49/0xbe