Re: New warning in nvme_setup_discard

From: Ming Lei
Date: Fri Jul 16 2021 - 06:42:18 EST


On Fri, Jul 16, 2021 at 12:03:43PM +0200, Oleksandr Natalenko wrote:
> Hello.
>
> On pátek 16. července 2021 11:33:05 CEST Ming Lei wrote:
> > Can you test the following patch?
>
> Sure, I'm building it at the moment and will give it a try. Please also see my
> comments and questions below.
>
> >
> > diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
> > index 727955918563..673a634eadd9 100644
> > --- a/block/bfq-iosched.c
> > +++ b/block/bfq-iosched.c
> > @@ -2361,6 +2361,9 @@ static int bfq_request_merge(struct request_queue *q, struct request **req,
> >  	__rq = bfq_find_rq_fmerge(bfqd, bio, q);
> >  	if (__rq && elv_bio_merge_ok(__rq, bio)) {
> >  		*req = __rq;
> > +
> > +		if (blk_discard_mergable(__rq))
> > +			return ELEVATOR_DISCARD_MERGE;
> >  		return ELEVATOR_FRONT_MERGE;
> >  	}
> >
> > diff --git a/block/blk-merge.c b/block/blk-merge.c
> > index a11b3b53717e..f8707ff7e2fc 100644
> > --- a/block/blk-merge.c
> > +++ b/block/blk-merge.c
> > @@ -705,22 +705,6 @@ static void blk_account_io_merge_request(struct request *req)
> >  	}
> >  }
> >
> > -/*
> > - * Two cases of handling DISCARD merge:
> > - * If max_discard_segments > 1, the driver takes every bio
> > - * as a range and send them to controller together. The ranges
> > - * needn't to be contiguous.
> > - * Otherwise, the bios/requests will be handled as same as
> > - * others which should be contiguous.
> > - */
> > -static inline bool blk_discard_mergable(struct request *req)
> > -{
> > -	if (req_op(req) == REQ_OP_DISCARD &&
> > -	    queue_max_discard_segments(req->q) > 1)
> > -		return true;
> > -	return false;
> > -}
> > -
> >  static enum elv_merge blk_try_req_merge(struct request *req,
> >  					struct request *next)
> >  {
> > diff --git a/block/elevator.c b/block/elevator.c
> > index 52ada14cfe45..a5fe2615ec0f 100644
> > --- a/block/elevator.c
> > +++ b/block/elevator.c
> > @@ -336,6 +336,9 @@ enum elv_merge elv_merge(struct request_queue *q, struct request **req,
> >  	__rq = elv_rqhash_find(q, bio->bi_iter.bi_sector);
> >  	if (__rq && elv_bio_merge_ok(__rq, bio)) {
> >  		*req = __rq;
> > +
> > +		if (blk_discard_mergable(__rq))
> > +			return ELEVATOR_DISCARD_MERGE;
> >  		return ELEVATOR_BACK_MERGE;
> >  	}
> >
> > diff --git a/block/mq-deadline-main.c b/block/mq-deadline-main.c
> > index 6f612e6dc82b..294be0c0db65 100644
> > --- a/block/mq-deadline-main.c
> > +++ b/block/mq-deadline-main.c
>
> I had to adjust this against v5.13 because there's no mq-deadline-main.c, only
> mq-deadline.c (due to Bart's series, I assume). I hope this is fine, as the
> patch applies cleanly.
>
> > @@ -677,6 +677,8 @@ static int dd_request_merge(struct request_queue *q, struct request **rq,
> >
> >  		if (elv_bio_merge_ok(__rq, bio)) {
> >  			*rq = __rq;
> > +			if (blk_discard_mergable(__rq))
> > +				return ELEVATOR_DISCARD_MERGE;
> >  			return ELEVATOR_FRONT_MERGE;
> >  		}
> >  	}
> > diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> > index 3177181c4326..87f00292fd7a 100644
> > --- a/include/linux/blkdev.h
> > +++ b/include/linux/blkdev.h
> > @@ -1521,6 +1521,22 @@ static inline int queue_limit_discard_alignment(struct queue_limits *lim, sector
> >  	return offset << SECTOR_SHIFT;
> >  }
> >
> > +/*
> > + * Two cases of handling DISCARD merge:
> > + * If max_discard_segments > 1, the driver takes every bio
> > + * as a range and send them to controller together. The ranges
> > + * needn't to be contiguous.
> > + * Otherwise, the bios/requests will be handled as same as
> > + * others which should be contiguous.
> > + */
> > +static inline bool blk_discard_mergable(struct request *req)
> > +{
> > +	if (req_op(req) == REQ_OP_DISCARD &&
> > +	    queue_max_discard_segments(req->q) > 1)
> > +		return true;
> > +	return false;
> > +}
> > +
> >  static inline int bdev_discard_alignment(struct block_device *bdev)
> >  {
> >  	struct request_queue *q = bdev_get_queue(bdev);
>
> Do I understand correctly that this will be something like:
>
> Fixes: 2705dfb209 ("block: fix discard request merge")
>
> ?
>
> I ask because, as the bisection progressed, this is the only commit I've
> bumped into. Without it the issue is not reproducible, at least so far.

It could be.

So can you just test v5.14-rc1?
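
As an aside for anyone following the thread: below is a minimal, stand-alone
user-space sketch of the rule that blk_discard_mergable() encodes, namely that
a DISCARD bio may merge into a request without being contiguous whenever the
queue advertises more than one discard segment. The struct layouts and helper
names here are simplified stand-ins for illustration only, not the kernel's
own types or API.

/*
 * Toy model of the discard-merge decision.  Not kernel code: all types,
 * fields and helpers are made up for illustration.
 */
#include <stdbool.h>
#include <stdio.h>

enum io_op { OP_READ, OP_WRITE, OP_DISCARD };

struct toy_request {
	enum io_op op;
	unsigned long long sector;		/* start of the request */
	unsigned long long nr_sectors;		/* length of the request */
	unsigned int max_discard_segments;	/* queue limit, simplified */
};

struct toy_bio {
	enum io_op op;
	unsigned long long sector;
	unsigned long long nr_sectors;
};

/* Multi-segment discard queues may take non-contiguous ranges in one request. */
static bool discard_mergable(const struct toy_request *req)
{
	return req->op == OP_DISCARD && req->max_discard_segments > 1;
}

/* Decide whether @bio can be merged into @req at all. */
static bool bio_mergable(const struct toy_request *req, const struct toy_bio *bio)
{
	if (req->op != bio->op)
		return false;
	if (discard_mergable(req))
		return true;	/* contiguity not required for ranged discard */
	/* otherwise only a contiguous back merge is allowed in this toy model */
	return req->sector + req->nr_sectors == bio->sector;
}

int main(void)
{
	struct toy_request discard_req = { OP_DISCARD, 0, 8, 256 };
	struct toy_bio far_discard = { OP_DISCARD, 4096, 8 };

	printf("non-contiguous discard merges: %s\n",
	       bio_mergable(&discard_req, &far_discard) ? "yes" : "no");
	return 0;
}

Compiled as plain C this prints "yes" for the non-contiguous discard above;
with max_discard_segments set to 1 the same bio would have to satisfy the
ordinary contiguous-merge rule instead.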


Thanks,
Ming