Re: Infinite systemd loop when powering off a machine with multiple MD RAIDs

From: AceLan Kao
Date: Tue Aug 22 2023 - 04:14:08 EST


Mariusz Tkaczyk <mariusz.tkaczyk@xxxxxxxxxxxxxxx> wrote on Tue, Aug 22, 2023 at 2:39 PM:
>
> On Mon, 21 Aug 2023 23:17:54 -0700
> Song Liu <song@xxxxxxxxxx> wrote:
>
> > On Mon, Aug 21, 2023 at 8:51 PM Guoqing Jiang <guoqing.jiang@xxxxxxxxx> wrote:
> > >
> > >
> > >
> > > On 8/18/23 16:16, Mariusz Tkaczyk wrote:
> > > > On Wed, 16 Aug 2023 16:37:26 +0700
> > > > Bagas Sanjaya<bagasdotme@xxxxxxxxx> wrote:
> > > >
> > > >> Hi,
> > > >>
> > > >> I notice a regression report on Bugzilla [1]. Quoting from it:
> > > >>
> > > >>> You need to build at least 2 different RAIDs (e.g. RAID0 and RAID10, or
> > > >>> RAID5 and RAID10), and then you will see the error below repeatedly
> > > >>> (you need a serial console to see it):
> > > >>>
> > > >>> [ 205.360738] systemd-shutdown[1]: Stopping MD devices.
> > > >>> [ 205.366384] systemd-shutdown[1]: sd-device-enumerator: Scan all dirs
> > > >>> [ 205.373327] systemd-shutdown[1]: sd-device-enumerator: Scanning /sys/bus
> > > >>> [ 205.380427] systemd-shutdown[1]: sd-device-enumerator: Scanning /sys/class
> > > >>> [ 205.388257] systemd-shutdown[1]: Stopping MD /dev/md127 (9:127).
> > > >>> [ 205.394880] systemd-shutdown[1]: Failed to sync MD block device /dev/md127, ignoring: Input/output error
> > > >>> [ 205.404975] md: md127 stopped.
> > > >>> [ 205.470491] systemd-shutdown[1]: Stopping MD /dev/md126 (9:126).
> > > >>> [ 205.770179] md: md126: resync interrupted.
> > > >>> [ 205.776258] md126: detected capacity change from 1900396544 to 0
> > > >>> [ 205.783349] md: md126 stopped.
> > > >>> [ 205.862258] systemd-shutdown[1]: Stopping MD /dev/md125 (9:125).
> > > >>> [ 205.862435] md: md126 stopped.
> > > >>> [ 205.868376] systemd-shutdown[1]: Failed to sync MD block device /dev/md125, ignoring: Input/output error
> > > >>> [ 205.872845] block device autoloading is deprecated and will be removed.
> > > >>> [ 205.880955] md: md125 stopped.
> > > >>> [ 205.934349] systemd-shutdown[1]: Stopping MD /dev/md124p2 (259:7).
> > > >>> [ 205.947707] systemd-shutdown[1]: Could not stop MD /dev/md124p2: Device or resource busy
> > > >>> [ 205.957004] systemd-shutdown[1]: Stopping MD /dev/md124p1 (259:6).
> > > >>> [ 205.964177] systemd-shutdown[1]: Could not stop MD /dev/md124p1: Device or resource busy
> > > >>> [ 205.973155] systemd-shutdown[1]: Stopping MD /dev/md124 (9:124).
> > > >>> [ 205.979789] systemd-shutdown[1]: Could not stop MD /dev/md124: Device or resource busy
> > > >>> [ 205.988475] systemd-shutdown[1]: Not all MD devices stopped, 4 left.
> > > >> See Bugzilla for the full thread and attached full journalctl log.
> > > >>
> > > >> Anyway, I'm adding this regression to be tracked by regzbot:
> > > >>
> > > >> #regzbot introduced: 12a6caf273240a https://bugzilla.kernel.org/show_bug.cgi?id=217798
> > > >> #regzbot title: systemd shutdown hang on machine with different RAID levels
> > > >>
> > > >> Thanks.
> > > >>
> > > >> [1]:https://bugzilla.kernel.org/show_bug.cgi?id=217798
> > > >>
> > > > Hello,
> > > > The issue is reproducible with IMSM metadata too; around 20% of reboots
> > > > hang. I will try to raise the priority in the bug because it is valid
> > > > high: the base functionality of the system is affected.
> > >
> > > Since it is reproducible on your side, is it possible to turn the
> > > reproduction steps into a test case, given the importance?
>
> I haven't tried to reproduce it locally yet because the customer was able to
> bisect the regression and it pointed them to the same patch, so I connected
> the two and asked the author to take a look first. At first glance, I wanted
> to get the community's view, in case it is something obvious.
>
> As far as I know, the customer creates 3 IMSM RAID arrays, one of which is
> the system volume, then reboots, and the reboot sporadically fails (around
> 20% of the time). That is all.
>
> > >
> > > I guess that if all arrays have the MD_DELETED flag set, then reboot might
> > > hang. Not sure whether the change below (maybe we need to flush the wq as
> > > well before the list_del) helps or not, just FYI.
> > >
> > > @@ -9566,8 +9566,10 @@ static int md_notify_reboot(struct notifier_block *this,
> > >
> > >         spin_lock(&all_mddevs_lock);
> > >         list_for_each_entry_safe(mddev, n, &all_mddevs, all_mddevs) {
> > > -               if (!mddev_get(mddev))
> > > +               if (!mddev_get(mddev)) {
> > > +                       list_del(&mddev->all_mddevs);
> > >                         continue;
> > > +               }
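
For the "maybe need to flush wq as well before list_del" part, my (possibly
wrong) reading is that it means draining md_misc_wq, where mddev_put() queues
mddev_delayed_delete(), before walking the list, so that arrays whose last
reference is already gone have left all_mddevs before the notifier iterates
it. An untested sketch of that idea, assuming del_work is still queued on
md_misc_wq as in mddev_put():

@@ static int md_notify_reboot(struct notifier_block *this,
        struct mddev *mddev, *n;
        int need_delay = 0;

+       /*
+        * Untested: let any pending mddev_delayed_delete() work finish, so
+        * already-released arrays are off all_mddevs before we iterate it.
+        */
+       flush_workqueue(md_misc_wq);
+
        spin_lock(&all_mddevs_lock);
        list_for_each_entry_safe(mddev, n, &all_mddevs, all_mddevs) {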
> >
> > I am still not able to reproduce this, probably due to differences in the
> > timing. Maybe we only need something like:
> >
> > diff --git i/drivers/md/md.c w/drivers/md/md.c
> > index 5c3c19b8d509..ebb529b0faf8 100644
> > --- i/drivers/md/md.c
> > +++ w/drivers/md/md.c
> > @@ -9619,8 +9619,10 @@ static int md_notify_reboot(struct notifier_block *this,
> >
> >         spin_lock(&all_mddevs_lock);
> >         list_for_each_entry_safe(mddev, n, &all_mddevs, all_mddevs) {
> > -               if (!mddev_get(mddev))
> > +               if (!mddev_get(mddev)) {
> > +                       need_delay = 1;
> >                         continue;
> > +               }
> >                 spin_unlock(&all_mddevs_lock);
> >                 if (mddev_trylock(mddev)) {
> >                         if (mddev->pers)
> >
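
As far as I can tell, need_delay is only consumed at the very end of
md_notify_reboot(), where it inserts a fixed one-second delay before the
notifier returns, roughly:

        }
        spin_unlock(&all_mddevs_lock);

        /* existing tail of md_notify_reboot(): the only user of need_delay */
        if (need_delay)
                mdelay(1000*1);

        return NOTIFY_DONE;

So, if my reading is right, setting need_delay in the !mddev_get() case would
only buy the deferred teardown some time rather than wait for it to complete.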
> >
> > Thanks,
> > Song
>
> I will try to reproduce the issue in the Intel lab to check this.
>
> Thanks,
> Mariusz
Hi Guoqing,

Here is the command I use to trigger the issue; I have to run it around 10
times to make sure it reproduces:

echo "repair" | sudo tee /sys/class/block/md12?/md/sync_action && sudo
grub-reboot "Advanced options for Ubuntu>Ubuntu, with Linux 6.5.0-rc77
06a74159504-dirty" && head -c 1G < /dev/urandom > myfile1 && sleep 180
&& head -c 1G < /dev/urandom > myfile2 && sleep 1 && cat /proc/mdstat
&& sleep 1 && rm myfile1 &&
sudo reboot

And the patch that adds need_delay doesn't work.



--
Chia-Lin Kao(AceLan)
http://blog.acelan.idv.tw/
E-Mail: acelan.kaoATcanonical.com (s/AT/@/)