[PATCH v2] f2fs: enhance multithread dio write performance

From: Chao Yu
Date: Wed Oct 21 2015 - 03:12:07 EST


When dio writes run concurrently, performance suffers because Thread A's
allocation of multiple contiguous blocks can be broken up by Thread B.
There are two cases:
- Thread B may switch the current segment to a new segment for LFS
allocation if it dio writes at the beginning of a file.
- Thread B may allocate blocks in the middle of Thread A's allocation,
making the blocks allocated by Thread A non-consecutive.

To avoid these issues, this patch takes the writepages mutex lock to make
block allocation in the dio write path atomic.

Test environment:
Ubuntu with Linux kernel 4.4-rc4, Intel i7-3770, 16GB memory,
32GB Kingston SD card.

fio --name seqw --ioengine=sync --invalidate=1 --rw=write --directory=/mnt/f2fs --filesize=256m --size=16m --bs=2m --direct=1 --numjobs=10

before:
WRITE: io=163840KB, aggrb=5125KB/s, minb=512KB/s, maxb=776KB/s, mint=21105msec, maxt=31967msec
patched:
WRITE: io=163840KB, aggrb=10424KB/s, minb=1042KB/s, maxb=1172KB/s, mint=13975msec, maxt=15717msec

Signed-off-by: Chao Yu <chao2.yu@xxxxxxxxxxx>
---
v2:
- only serialize block allocation.
- do not serialize small dio.
fs/f2fs/data.c | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 90a2ffe..c01d113 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -1566,7 +1566,10 @@ static ssize_t f2fs_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
struct file *file = iocb->ki_filp;
struct address_space *mapping = file->f_mapping;
struct inode *inode = mapping->host;
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
size_t count = iov_iter_count(iter);
+ int rw = iov_iter_rw(iter);
+ bool serialized = (F2FS_BYTES_TO_BLK(count) >= 64);
int err;

/* we don't need to use inline_data strictly */
@@ -1583,10 +1586,14 @@ static ssize_t f2fs_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
if (err)
return err;

- trace_f2fs_direct_IO_enter(inode, offset, count, iov_iter_rw(iter));
+ trace_f2fs_direct_IO_enter(inode, offset, count, rw);

- if (iov_iter_rw(iter) == WRITE) {
+ if (rw == WRITE) {
+ if (serialized)
+ mutex_lock(&sbi->writepages);
__allocate_data_blocks(inode, offset, count);
+ if (serialized)
+ mutex_unlock(&sbi->writepages);
if (unlikely(f2fs_cp_error(F2FS_I_SB(inode)))) {
err = -EIO;
goto out;
@@ -1595,10 +1602,10 @@ static ssize_t f2fs_direct_IO(struct kiocb *iocb, struct iov_iter *iter,

err = blockdev_direct_IO(iocb, inode, iter, offset, get_data_block_dio);
out:
- if (err < 0 && iov_iter_rw(iter) == WRITE)
+ if (err < 0 && rw == WRITE)
f2fs_write_failed(mapping, offset + count);

- trace_f2fs_direct_IO_exit(inode, offset, count, iov_iter_rw(iter), err);
+ trace_f2fs_direct_IO_exit(inode, offset, count, rw, err);

return err;
}
--
2.6.3



--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/