Subject: Re: [RESEND] [PATCH] readahead:add blk_run_backing_dev
From: Wu Fengguang
Date: Tue Sep 22 2009 - 21:48:18 EST
In-Reply-To: <20090922135838.33ebe36b.akpm@xxxxxxxxxxxxxxxxxxxx>

On Wed, Sep 23, 2009 at 04:58:38AM +0800, Andrew Morton wrote:
> On Fri, 29 May 2009 14:35:55 +0900
> Hisashi Hifumi <hifumi.hisashi@xxxxxxxxxxxxx> wrote:
>
> > I added blk_run_backing_dev on page_cache_async_readahead
> > so readahead I/O is unplugged to improve throughput,
> > especially in RAID environments.
>
> I still haven't sent this upstream. It's unclear to me whether we've
> decided that it merits merging.

Yes, if I remember it right, the performance gain was later confirmed
by Ronald's independent testing on his RAID setup. (Ronald CC-ed)

Thanks,
Fengguang

>
>
> From: Hisashi Hifumi <hifumi.hisashi@xxxxxxxxxxxxx>
>
> I added blk_run_backing_dev on page_cache_async_readahead so readahead I/O
> is unplugged to improve throughput, especially in RAID environments.
>
> In the normal case, if page N becomes uptodate at time T(N), then
> T(N) <= T(N+1) holds.  With RAID (and NFS to some degree) there is no
> such strict ordering: data arrival time depends on the runtime status
> of the individual disks, which breaks that assumption.  So in
> do_generic_file_read(), just after submitting the async readahead IO
> request, the current page may well be uptodate already; the page won't
> be locked, and the block device won't be implicitly unplugged:
>
>         if (PageReadahead(page))
>                 page_cache_async_readahead()
>         if (!PageUptodate(page))
>                 goto page_not_up_to_date;
>         //...
> page_not_up_to_date:
>         lock_page_killable(page);
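>
> (For readers new to the plugging machinery: the block layer "plugs" a
> queue to batch small requests and unplugs it to dispatch them to the
> driver.  The implicit unplug mentioned above normally happens while
> waiting on the page lock -- sync_page() invokes the mapping's
> sync_page aop, which for block-backed mappings is block_sync_page().
> Quoted from the 2.6.30 fs/buffer.c for reference, as I read it; not
> part of this patch:
>
>         void block_sync_page(struct page *page)
>         {
>                 struct address_space *mapping;
>
>                 smp_mb();
>                 mapping = page_mapping(page);
>                 if (mapping)
>                         blk_run_backing_dev(mapping->backing_dev_info, page);
>         }
>
> When PageUptodate(page) already holds, lock_page_killable() is never
> reached, this path never runs, and the freshly submitted readahead IO
> can linger in the plugged queue.)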
>
> Therefore explicit unplugging can help.
>
> Following are the test results with dd.
>
> #dd if=testdir/testfile of=/dev/null bs=16384
>
> -2.6.30-rc6
> 1048576+0 records in
> 1048576+0 records out
> 17179869184 bytes (17 GB) copied, 224.182 seconds, 76.6 MB/s
>
> -2.6.30-rc6-patched
> 1048576+0 records in
> 1048576+0 records out
> 17179869184 bytes (17 GB) copied, 206.465 seconds, 83.2 MB/s
>
> (7Disks RAID-0 Array)
>
> -2.6.30-rc6
> 1054976+0 records in
> 1054976+0 records out
> 17284726784 bytes (17 GB) copied, 212.233 seconds, 81.4 MB/s
>
> -2.6.30-rc6-patched
> 1054976+0 records in
> 1054976+0 records out
> 17284726784 bytes (17 GB) copied, 198.878 seconds, 86.9 MB/s
>
> (7Disks RAID-5 Array)
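>
> (dd reports bytes/seconds in decimal megabytes, e.g. 17179869184 /
> 224.182 = 76.6 MB/s.  The patched runs are about 8.6% faster on
> RAID-0 and 6.8% faster on RAID-5.)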
>
> The patch was found to improve performance with the SCST scsi target
> driver. See
> http://sourceforge.net/mailarchive/forum.php?thread_name=a0272b440906030714g67eabc5k8f847fb1e538cc62%40mail.gmail.com&forum_name=scst-devel
>
> [akpm@xxxxxxxxxxxxxxxxxxxx: unbust comment layout]
> [akpm@xxxxxxxxxxxxxxxxxxxx: "fix" CONFIG_BLOCK=n]
> Signed-off-by: Hisashi Hifumi <hifumi.hisashi@xxxxxxxxxxxxx>
> Acked-by: Wu Fengguang <fengguang.wu@xxxxxxxxx>
> Cc: Jens Axboe <jens.axboe@xxxxxxxxxx>
> Cc: KOSAKI Motohiro <kosaki.motohiro@xxxxxxxxxxxxxx>
> Tested-by: Ronald <intercommit@xxxxxxxxx>
> Cc: Bart Van Assche <bart.vanassche@xxxxxxxxx>
> Cc: Vladislav Bolkhovitin <vst@xxxxxxxx>
> Cc: Randy Dunlap <randy.dunlap@xxxxxxxxxx>
> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> ---
>
> mm/readahead.c | 12 ++++++++++++
> 1 file changed, 12 insertions(+)
>
> diff -puN mm/readahead.c~readahead-add-blk_run_backing_dev mm/readahead.c
> --- a/mm/readahead.c~readahead-add-blk_run_backing_dev
> +++ a/mm/readahead.c
> @@ -547,5 +547,17 @@ page_cache_async_readahead(struct addres
>  
>  	/* do read-ahead */
>  	ondemand_readahead(mapping, ra, filp, true, offset, req_size);
> +
> +#ifdef CONFIG_BLOCK
> +	/*
> +	 * Normally the current page is !uptodate and lock_page() will be
> +	 * immediately called to implicitly unplug the device. However this
> +	 * is not always true for RAID configurations, where data does not
> +	 * arrive strictly in submission order. In this case we need to
> +	 * explicitly kick off the IO.
> +	 */
> +	if (PageUptodate(page))
> +		blk_run_backing_dev(mapping->backing_dev_info, NULL);
> +#endif
>  }
>  EXPORT_SYMBOL_GPL(page_cache_async_readahead);
> _
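>
> For context, blk_run_backing_dev() is a small inline helper in
> include/linux/blkdev.h (quoted from 2.6.30 for reference, not part of
> the patch), which also explains the #ifdef CONFIG_BLOCK above -- the
> helper is only available when the block layer is built in:
>
>         static inline void blk_run_backing_dev(struct backing_dev_info *bdi,
>                                                struct page *page)
>         {
>                 if (bdi && bdi->unplug_io_fn)
>                         bdi->unplug_io_fn(bdi, page);
>         }
>
> For an ordinary request-based device the unplug_io_fn dispatches any
> plugged requests immediately; stacked drivers such as md and dm
> install their own unplug callbacks, which is why the explicit kick
> also helps RAID arrays.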
>