Re: Userspace Block Device

From: Bill Speirs
Date: Tue May 19 2015 - 10:43:05 EST


On Tue, May 19, 2015 at 1:34 AM, Rob Landley <rob@xxxxxxxxxxx> wrote:
> On Mon, May 18, 2015 at 2:01 PM, Bill Speirs <bill.speirs@xxxxxxxxx> wrote:
>> My goal is to provide Amazon S3 or Google Cloud Storage as a block
>> device. I would like to leverage the libraries that exist for both
>> systems by servicing requests via a user space program.
>> ... nbd seems like a bit of a Rube Goldberg solution.
>
> I wrote the busybox and toybox nbd clients, and have a todo list item
> to write an nbd server for toybox. I believe there's also an nbd
> server in qemu. I haven't found any decent documentation on the
> protocol yet, but what specifically makes you describe it as rube
> goldberg?

My understanding of using nbd is:
- Write an nbd server that is essentially a gateway between nbd and
S3/Google: for each nbd request it receives, it issues the
corresponding S3/Google request and sends back the nbd reply (rough
sketch of that request loop below).
- Run that server on the local machine, listening on some port.
- Run nbd-client on the same machine against 127.0.0.1 and that port,
which provides the nbd block device.
- Go drink a beer as I rack up a huge bill with Amazon or Google
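
To make that first step concrete, here's roughly what I had in mind
for the request loop, using the structs and constants from
<linux/nbd.h>. Negotiation is left out, and s3_get_range() /
s3_put_range() are made-up placeholders for whatever the S3/GCS
library calls end up being:

#include <arpa/inet.h>
#include <endian.h>
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <linux/nbd.h>

/* Hypothetical S3/GCS wrappers: read/write `len` bytes at `offset`. */
int s3_get_range(uint64_t offset, uint32_t len, void *buf);
int s3_put_range(uint64_t offset, uint32_t len, const void *buf);

/* Loop until the full length is read/written (stream socket). */
static int readall(int fd, void *buf, size_t len)
{
	char *p = buf;
	while (len) {
		ssize_t n = read(fd, p, len);
		if (n <= 0) return -1;
		p += n; len -= n;
	}
	return 0;
}

static int writeall(int fd, const void *buf, size_t len)
{
	const char *p = buf;
	while (len) {
		ssize_t n = write(fd, p, len);
		if (n <= 0) return -1;
		p += n; len -= n;
	}
	return 0;
}

/* Serve nbd requests on `sock`, translating them into S3/GCS calls.
 * (Sanity checks on req.magic omitted for brevity.) */
void serve(int sock)
{
	struct nbd_request req;
	struct nbd_reply reply = { .magic = htonl(NBD_REPLY_MAGIC) };

	while (!readall(sock, &req, sizeof(req))) {
		uint64_t from = be64toh(req.from);
		uint32_t len = ntohl(req.len);
		uint32_t type = ntohl(req.type);
		void *buf = malloc(len);

		memcpy(reply.handle, req.handle, sizeof(reply.handle));
		reply.error = 0;

		if (type == NBD_CMD_READ) {
			if (s3_get_range(from, len, buf)) reply.error = htonl(EIO);
			writeall(sock, &reply, sizeof(reply));
			if (!reply.error) writeall(sock, buf, len);
		} else if (type == NBD_CMD_WRITE) {
			readall(sock, buf, len);
			if (s3_put_range(from, len, buf)) reply.error = htonl(EIO);
			writeall(sock, &reply, sizeof(reply));
		} else if (type == NBD_CMD_DISC) {
			free(buf);
			break;
		}
		free(buf);
	}
}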

It seems a bit much to run a client and a server on the same machine,
with the loopback socket overhead, etc. Looking at the code for your
nbd-client
(https://github.com/landley/toybox/blob/master/toys/other/nbd_client.c),
I'm wondering if I couldn't just pass a pipe instead of a socket at
the ioctl(nbd, NBD_SET_SOCK, sock) step, then have the same process
(or a fork) servicing requests on the other end, so it's all a single
process/codebase. Thoughts on this approach?
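
For what it's worth, my reading of the nbd driver is that
NBD_SET_SOCK wants an actual socket fd (it looks the fd up as a
socket), so a plain pipe probably won't work, but an AF_UNIX
socketpair(2) should give the same single-process effect. Something
along these lines, reusing the serve() loop sketched above and a
made-up DEV_SIZE for the export size:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>
#include <linux/nbd.h>

#define DEV_SIZE (1ULL << 30)	/* hypothetical 1 GiB export */

void serve(int sock);		/* the request loop sketched above */

int main(void)
{
	int sv[2];
	int nbd = open("/dev/nbd0", O_RDWR);

	socketpair(AF_UNIX, SOCK_STREAM, 0, sv);

	ioctl(nbd, NBD_SET_BLKSIZE, 4096UL);
	ioctl(nbd, NBD_SET_SIZE_BLOCKS, (unsigned long)(DEV_SIZE / 4096));
	ioctl(nbd, NBD_CLEAR_SOCK);

	if (!fork()) {
		/* Child: hand one end of the socketpair to the kernel and
		 * block in NBD_DO_IT until the device is disconnected. */
		close(sv[1]);
		ioctl(nbd, NBD_SET_SOCK, sv[0]);
		ioctl(nbd, NBD_DO_IT);
		ioctl(nbd, NBD_CLEAR_SOCK);
		_exit(0);
	}

	/* Parent: answer the kernel's nbd requests on the other end,
	 * translating them into S3/GCS calls. */
	close(sv[0]);
	serve(sv[1]);
	return 0;
}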

That said, the bottleneck in all of this will clearly be the round
trips to S3/Google, and putting something like dm-cache in front of
the device would hide that latency for most requests. So maybe my
Rube Goldberg comment was a bit over-the-top.

Thank you for the pointers and the feedback!

Bill-