Re: ublk-qcow2: ublk-qcow2 is available

From: Ming Lei
Date: Fri Oct 07 2022 - 07:22:02 EST


On Thu, Oct 06, 2022 at 02:29:55PM -0400, Stefan Hajnoczi wrote:
> On Thu, Oct 06, 2022 at 11:09:48PM +0800, Ming Lei wrote:
> > On Thu, Oct 06, 2022 at 09:59:40AM -0400, Stefan Hajnoczi wrote:
> > > On Thu, Oct 06, 2022 at 06:26:15PM +0800, Ming Lei wrote:
> > > > On Wed, Oct 05, 2022 at 11:11:32AM -0400, Stefan Hajnoczi wrote:
> > > > > On Tue, Oct 04, 2022 at 01:57:50AM +0200, Denis V. Lunev wrote:
> > > > > > On 10/3/22 21:53, Stefan Hajnoczi wrote:
> > > > > > > On Fri, Sep 30, 2022 at 05:24:11PM +0800, Ming Lei wrote:
> > > > > > > > ublk-qcow2 is available now.
> > > > > > > Cool, thanks for sharing!
> > > > > > yep
> > > > > >
> > > > > > > > So far it provides basic read/write functions; compression and snapshots
> > > > > > > > aren't supported yet. The target/backend implementation is completely
> > > > > > > > based on io_uring, and shares the same io_uring with the ublk IO command
> > > > > > > > handler, just as ublk-loop does.
> > > > > > > >
> > > > > > > > The main motivations of ublk-qcow2 follow:
> > > > > > > >
> > > > > > > > - building one complicated target from scratch helps libublksrv APIs/functions
> > > > > > > > become mature/stable more quickly, since qcow2 is complicated and places more
> > > > > > > > requirements on libublksrv compared with the simpler targets (loop, null)
> > > > > > > >
> > > > > > > > - there have been several attempts at implementing a qcow2 driver in the
> > > > > > > > kernel, such as ``qloop`` [2], ``dm-qcow2`` [3] and ``in kernel qcow2(ro)`` [4],
> > > > > > > > so ublk-qcow2 might be useful for covering the requirements in this field
> > > > > > There is one important thing to keep in mind about all partly-userspace
> > > > > > implementations though:
> > > > > > * any allocation made in the context of the
> > > > > >    userspace daemon can enter try_to_free_pages() in
> > > > > >    the kernel, and that reclaim can trigger an operation
> > > > > >    which requires action from the very userspace daemon
> > > > > >    that is now blocked inside the kernel
> > > > > > * the probability of this is higher in an overcommitted
> > > > > >    environment
> > > > > >
> > > > > > This was our main motivation for favoring the in-kernel
> > > > > > implementation.
> > > > >
> > > > > CCed Josef Bacik because the Linux NBD driver has dealt with memory
> > > > > reclaim hangs in the past.
> > > > >
> > > > > Josef: Any thoughts on userspace block drivers (whether NBD or ublk) and
> > > > > how to avoid hangs in memory reclaim?
> > > >
> > > > If I remember correctly, there haven't been new reports since the last NBD (TCMU)
> > > > deadlock in memory reclaim was addressed by commit 8d19f1c8e193 ("prctl:
> > > > PR_{G,S}ET_IO_FLUSHER to support controlling memory reclaim").
> > >
> > > Denis: I'm trying to understand the problem you described. Is this
> > > correct:
> > >
> > > Due to memory pressure, the kernel reclaims pages and submits a write to
> > > a ublk block device. The userspace process attempts to allocate memory
> > > in order to service the write request, but it gets stuck because there
> > > is no memory available. As a result, reclaim gets stuck, the system is
> > > unable to free more memory, and therefore it hangs?
> >
> > The process should be killed in this situation if PR_SET_IO_FLUSHER
> > is applied, since the page allocation is done in the VM fault handler.
>
> Thanks for mentioning PR_SET_IO_FLUSHER. There is more info in commit
> 8d19f1c8e1937baf74e1962aae9f90fa3aeab463 ("prctl: PR_{G,S}ET_IO_FLUSHER
> to support controlling memory reclaim").
>
> It requires CAP_SYS_RESOURCE :/. This makes me wonder whether
> unprivileged ublk will ever be possible.

IMO, it shouldn't be a blocker; there are several choices for us:

- unprivileged ublk can simply not call it; if such an IO hang is triggered,
ublksrv is capable of detecting the problem, then killing & recovering the
device.

- set PR_IO_FLUSHER for the current task in ublk_ch_uring_cmd(UBLK_IO_FETCH_REQ),
see the sketch after this list

- ...
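
For reference, here is a minimal userspace sketch of applying
PR_SET_IO_FLUSHER to the daemon task (PR_SET_IO_FLUSHER is in
<linux/prctl.h> since Linux 5.6; the fallback define and the
set_io_flusher() helper name are illustrative only):

    #include <stdio.h>
    #include <sys/prctl.h>

    #ifndef PR_SET_IO_FLUSHER
    #define PR_SET_IO_FLUSHER 57  /* from <linux/prctl.h>, Linux >= 5.6 */
    #endif

    /*
     * Mark the calling task as an IO flusher so that memory reclaim
     * entered from its allocations won't recurse into FS/IO paths
     * (see commit 8d19f1c8e193). Requires CAP_SYS_RESOURCE.
     */
    static int set_io_flusher(void)
    {
            if (prctl(PR_SET_IO_FLUSHER, 1, 0, 0, 0) < 0) {
                    perror("prctl(PR_SET_IO_FLUSHER)");
                    return -1;
            }
            return 0;
    }

The second choice would amount to the ublk driver setting the same task
flags on the daemon's behalf (PF_MEMALLOC_NOIO | PF_LOCAL_THROTTLE, which
is what PR_IO_FLUSHER expands to in kernel/sys.c), sidestepping the
CAP_SYS_RESOURCE check for unprivileged ublk.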

>
> I think this addresses Denis' concern about hangs, but it doesn't fully
> solve the problem because I/O will fail. The real solution is probably what you
> mentioned...

So far, I haven't seen a real report yet, and it may never be an issue if a
proper swap device/file is configured.


Thanks,
Ming