Re: [PATCH 4.9 75/95] random: set up the NUMA crng instances after the CRNG is fully initialized

From: Tetsuo Handa
Date: Thu Apr 26 2018 - 02:47:07 EST


Ingo Molnar wrote:
> What's the resolution here? It's still triggering upstream as well, as of
> 69bfd470f462:
>
> [ 8.881634] dracut: Switching root
>
> [ 8.994899] ================================
> [ 8.999338] WARNING: inconsistent lock state
> [ 9.003760] 4.17.0-rc2-00151-g43ae031-dirty #1 Not tainted
> [ 9.009389] --------------------------------
> [ 9.013803] inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-W} usage.
> [ 9.019956] swapper/2/0 [HC1[1]:SC0[0]:HE0:SE1] takes:
> [ 9.025244] (ptrval) (fs_reclaim){?.+.}, at: fs_reclaim_acquire.part.87+0x5/0x30
> [ 9.033598] {HARDIRQ-ON-W} state was registered at:
> [ 9.038628] fs_reclaim_acquire.part.87+0x29/0x30
> [ 9.043568] kmem_cache_alloc_trace+0x2c/0x240
> [ 9.048248] alloc_workqueue_attrs+0x29/0x60
> [ 9.052755] workqueue_init+0x4a/0x2e4
> [ 9.056741] kernel_init_freeable+0x108/0x286
> [ 9.061335] kernel_init+0xa/0x110
> [ 9.064974] ret_from_fork+0x27/0x50
>
> ....
>
> Is there a fix or a revert that can be tested?

Yes, you can test a fix at http://lkml.kernel.org/r/20180424034138.23994-1-tytso@xxxxxxx .

I initially thought that fix still caused a problem, but that was a hasty
judgment on my part; I can no longer reproduce the problem with that fix applied.
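For readers unfamiliar with this class of lockdep report: the checker records,
per lock, the contexts in which that lock has ever been acquired, and complains
when a lock previously taken with hardirqs enabled (HARDIRQ-ON-W) is later
taken from hardirq context (IN-HARDIRQ-W), since an interrupt arriving while
the lock is already held could deadlock. A toy sketch of that state-tracking
idea (a deliberate simplification, not the kernel's actual implementation):

```python
# Toy model of lockdep's inconsistent-lock-state detection.
# Hypothetical simplification: real lockdep tracks many more usage bits
# (softirq, read vs. write, enabled vs. used-in, etc.).

class LockState:
    def __init__(self, name):
        self.name = name
        self.hardirq_on_w = False  # ever write-acquired with hardirqs enabled
        self.in_hardirq_w = False  # ever write-acquired in hardirq context

    def acquire(self, in_hardirq):
        """Record an acquisition; return a warning string on inconsistency."""
        if in_hardirq:
            self.in_hardirq_w = True
        else:
            self.hardirq_on_w = True
        if self.hardirq_on_w and self.in_hardirq_w:
            return (f"inconsistent {{HARDIRQ-ON-W}} -> {{IN-HARDIRQ-W}} "
                    f"usage: {self.name}")
        return None

# The sequence from the report above: fs_reclaim is first taken during boot
# with hardirqs enabled, then later reached from hardirq context.
lock = LockState("fs_reclaim")
assert lock.acquire(in_hardirq=False) is None
warning = lock.acquire(in_hardirq=True)
print(warning)
```

The fix referenced above avoids the second half of that pattern by not doing
the NUMA crng allocations from the interrupt-time initialization path.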