Re: [RFC PATCH v2 0/2] block: fix backing_dev_info lifetime

From: Jan Kara
Date: Thu Jan 26 2017 - 05:09:45 EST


On Wed 25-01-17 13:43:58, Dan Williams wrote:
> On Mon, Jan 23, 2017 at 1:17 PM, Thiago Jung Bauermann
> <bauerman@xxxxxxxxxxxxxxxxxx> wrote:
> > Hello Dan,
> >
> > Am Freitag, 6. Januar 2017, 17:02:51 BRST schrieb Dan Williams:
> >> v1 of these changes [1] was a one line change to bdev_get_queue() to
> >> prevent a shutdown crash when del_gendisk() races the final
> >> __blkdev_put().
> >>
> >> While it is known at del_gendisk() time that the queue is still alive,
> >> Jan Kara points to other paths [2] that are racing __blkdev_put() where
> >> the assumption that ->bd_queue, or inode->i_wb is valid does not hold.
> >>
> >> Fix that broken assumption: make it the case that if you have a live
> >> block_device or block_device inode, the corresponding queue and
> >> inode writeback data is still valid.
> >>
> >> These changes survive a run of the libnvdimm unit test suite which puts
> >> some stress on the block_device shutdown path.
> >
> > I realize that the kernel test robot found problems with this series, but FWIW
> > it fixes the bug mentioned in [2].
> >
>
> Thanks for the test result. I might take a look at cleaning up the
> test robot reports and resubmitting this approach unless Jan beats me
> to the punch with his backing_dev_info lifetime change patches.

Yeah, so my patches (and I suspect yours as well) have a problem when the
backing_dev_info stays around because the blkdev inode still exists: the
device gets removed (e.g. a USB disk gets unplugged) but the blkdev inode
stays around (there doesn't seem to be anything forcing the blkdev inode
out of the inode cache on device removal, and there cannot be, because
other processes may still hold inode references), and then some other
device gets plugged in and reuses the same MAJOR:MINOR combination. Things
get awkward there; I think we need to unhash the blkdev inode on device
removal, but so far I haven't made that work...
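
For illustration, a rough, untested sketch of what I mean by "unhash on
device removal" - the function unhash_bdev_inodes() below is made up, but
bdget_disk(), remove_inode_hash() and bdput() are existing helpers;
something along these lines could be called from del_gendisk():

    /*
     * Hypothetical sketch only: drop the bdev inodes of a disk from the
     * inode hash when the gendisk goes away, so that a later device
     * reusing the same MAJOR:MINOR allocates a fresh bdev inode instead
     * of finding the stale one.
     */
    #include <linux/fs.h>
    #include <linux/genhd.h>

    static void unhash_bdev_inodes(struct gendisk *disk)
    {
            struct block_device *bdev;
            int partno;

            for (partno = 0; partno < disk->minors; partno++) {
                    bdev = bdget_disk(disk, partno);
                    if (!bdev)
                            continue;
                    /*
                     * Existing holders keep their inode reference; new
                     * blkdev lookups just no longer find this inode.
                     */
                    remove_inode_hash(bdev->bd_inode);
                    bdput(bdev);
            }
    }

The open question is making sure everything that still holds the old
inode behaves sanely after the unhash, which is the part I haven't sorted
out yet.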

Honza
--
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR