[PATCH v2] block: limit the extract size to align queue limit

From: ed.tsai
Date: Fri Nov 10 2023 - 13:29:08 EST


From: Ed Tsai <ed.tsai@xxxxxxxxxxxx>

When an application performs a large IO, it fills and submits multiple
full bios to the block layer. Since commit 07173c3ec276
("block: enable multipage bvecs"), a full bio is no longer fixed at
1MB; its size varies with the physical memory layout.

The size of the full bio no longer aligns with the maximum IO size of
the queue. Therefore, in a 64MB read, you may see many unaligned bios
being submitted.

Executing the command to perform a 64MB read:

dd if=/data/test_file of=/dev/null bs=64m count=1 iflag=direct

The trace shows numerous unaligned bios being submitted:

block_bio_queue: 254,52 R 2933336 + 2136
block_bio_queue: 254,52 R 2935472 + 2152
block_bio_queue: 254,52 R 2937624 + 2128
block_bio_queue: 254,52 R 2939752 + 2160

This patch limits the number of pages extracted so that the bio is
submitted once it fills up to the queue's maximum IO size boundary,
preventing the block layer from splitting off small I/Os in between.

I performed the Antutu V10 Storage Test on a UFS 4.0 device, which
resulted in a significant improvement in the Sequential test:

Sequential Read (average of 5 rounds):
Original: 3033.7 MB/sec
Patched: 3520.9 MB/sec

Sequential Write (average of 5 rounds):
Original: 2225.4 MB/sec
Patched: 2800.3 MB/sec

Link: https://lore.kernel.org/linux-arm-kernel/20231025092255.27930-1-ed.tsai@xxxxxxxxxxxx/
Signed-off-by: Ed Tsai <ed.tsai@xxxxxxxxxxxx>

---
block/bio.c | 17 ++++++++++++-----
1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 816d412c06e9..8d3a112e68da 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1227,8 +1227,10 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
iov_iter_extraction_t extraction_flags = 0;
unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
+ struct block_device *bdev = bio->bi_bdev;
struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
struct page **pages = (struct page **)bv;
+ ssize_t max_extract = UINT_MAX - bio->bi_iter.bi_size;
ssize_t size, left;
unsigned len, i = 0;
size_t offset;
@@ -1242,7 +1244,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
BUILD_BUG_ON(PAGE_PTRS_PER_BVEC < 2);
pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);

- if (bio->bi_bdev && blk_queue_pci_p2pdma(bio->bi_bdev->bd_disk->queue))
+ if (bdev && blk_queue_pci_p2pdma(bdev->bd_disk->queue))
extraction_flags |= ITER_ALLOW_P2PDMA;

/*
@@ -1252,16 +1254,21 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
* result to ensure the bio's total size is correct. The remainder of
* the iov data will be picked up in the next bio iteration.
*/
- size = iov_iter_extract_pages(iter, &pages,
- UINT_MAX - bio->bi_iter.bi_size,
+ if (bdev && bio_op(bio) != REQ_OP_ZONE_APPEND) {
+ unsigned int max = queue_max_bytes(bdev_get_queue(bdev));
+
+ max_extract = bio->bi_iter.bi_size ?
+ max - (bio->bi_iter.bi_size & (max - 1)) : max;
+ }
+ size = iov_iter_extract_pages(iter, &pages, max_extract,
nr_pages, extraction_flags, &offset);
if (unlikely(size <= 0))
return size ? size : -EFAULT;

nr_pages = DIV_ROUND_UP(offset + size, PAGE_SIZE);

- if (bio->bi_bdev) {
- size_t trim = size & (bdev_logical_block_size(bio->bi_bdev) - 1);
+ if (bdev) {
+ size_t trim = size & (bdev_logical_block_size(bdev) - 1);
iov_iter_revert(iter, trim);
size -= trim;
}
--
2.18.0