Re: [PATCH 0/1] vhost: add vhost_blk driver

From: Jason Wang
Date: Mon Nov 05 2018 - 21:45:22 EST



On 2018/11/5 11:23 PM, Vitaly Mayatskih wrote:
On Sun, Nov 4, 2018 at 10:00 PM Jason Wang <jasowang@xxxxxxxxxx> wrote:

# fio num-jobs
# A: bare metal over block
# B: bare metal over file
# C: virtio-blk over block
# D: virtio-blk over file
# E: vhost-blk bio over block
# F: vhost-blk kiocb over block
# G: vhost-blk kiocb over file
#
#          A      B      C      D      E      F      G
16     1480k  1506k   101k   102k  1346k  1202k   566k
Hi:

Thanks for the patches.

This is not the first attempt for having vhost-blk:

- Badari's version: https://lwn.net/Articles/379864/

- Asias' version: https://lwn.net/Articles/519880/

It's better to describe the differences (kiocb vs. bio? performance?).
E.g., if my memory is correct, Asias said it didn't give much improvement
compared with userspace qemu.

And what's more important, I believe we tend to use virtio-scsi nowadays.
So what are the advantages of vhost-blk over vhost-scsi?
Hi,

Yes, I saw both. Frankly, my implementation is not that different;
the whole thing is only about twice the LOC of vhost/test.c.
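
For a sense of scale: a vhost backend that small is essentially a
virtqueue kick handler plus the usual vhost ioctl plumbing. Below is
a minimal sketch of such a handler, modeled on drivers/vhost/test.c
and assuming an in-tree build against drivers/vhost/vhost.h (~4.19
API); the vhost_blk_* names are illustrative, not the actual patch
code:

/* Hypothetical sketch modeled on drivers/vhost/test.c; the names are
 * illustrative, not taken from the posted patch. */
static void vhost_blk_handle_vq(struct vhost_work *work)
{
        struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue,
                                                  poll.work);
        unsigned int out, in;
        int head;

        mutex_lock(&vq->mutex);
        vhost_disable_notify(vq->dev, vq);

        for (;;) {
                /* Pull the next descriptor chain off the ring. */
                head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
                                         &out, &in, NULL, NULL);
                if (head < 0)
                        break;
                if (head == vq->num) {
                        /* Ring is empty: re-enable guest notifications,
                         * then re-check for a racing kick. */
                        if (unlikely(vhost_enable_notify(vq->dev, vq))) {
                                vhost_disable_notify(vq->dev, vq);
                                continue;
                        }
                        break;
                }
                /* A real driver would parse the virtio_blk_req header
                 * from vq->iov here and submit the I/O; the completion
                 * path would then call
                 * vhost_add_used_and_signal(vq->dev, vq, head, len). */
        }

        mutex_unlock(&vq->mutex);
}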

I posted my numbers (see the 16-job case in the quoted text above):
IOPS goes from ~100k to 1.2M and almost reaches the physical limit of
the backend.

submit_bio() is a bit faster, but it can't be used for disk images
placed on a file system. I have that submit_bio() implementation too.
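
To make the trade-off concrete, here is a rough sketch of the two
submission paths, assuming a ~4.19 block/VFS API; the demo_* helpers
and the trimmed error handling are illustrative, not the actual patch
code. The bio path talks to the block layer directly, which is why it
only works against raw block devices; the kiocb path goes through the
file's read_iter, so it also works for images on a file system
(roughly what the loop driver does in lo_rw_aio()):

#include <linux/bio.h>
#include <linux/fs.h>
#include <linux/uio.h>

/* Path 1: bio straight to a block device (faster, block devices only). */
static void demo_submit_bio(struct block_device *bdev, struct page *page,
                            sector_t sector, bio_end_io_t *done)
{
        struct bio *bio = bio_alloc(GFP_KERNEL, 1); /* 4.19-era signature */

        bio_set_dev(bio, bdev);
        bio->bi_iter.bi_sector = sector;
        bio->bi_opf = REQ_OP_READ;
        bio->bi_end_io = done;          /* completion callback, irq context */
        bio_add_page(bio, page, PAGE_SIZE, 0);
        submit_bio(bio);
}

/* Path 2: async kiocb through the VFS (works for regular files too). */
static ssize_t demo_submit_kiocb(struct file *file, struct kiocb *iocb,
                                 struct iov_iter *iter, loff_t pos,
                                 void (*done)(struct kiocb *, long, long))
{
        init_sync_kiocb(iocb, file);
        iocb->ki_pos = pos;
        iocb->ki_complete = done;       /* non-NULL ki_complete = async */

        return call_read_iter(file, iocb, iter); /* -EIOCBQUEUED if queued */
}

The gap between columns E and F/G in the table above is essentially
the cost of that extra VFS layer.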

The storage industry is shifting away from SCSI, which has a scaling
problem.


I know little about storage. By scaling, do you mean the SCSI protocol itself? If not, it's probably not a real issue for virtio-scsi.


I can compare vhost-scsi vs vhost-blk if you are curious.


It would be very helpful to see the performance comparison.


Thanks


Thanks!