On Wed, Apr 18, 2018 at 09:38:39AM +0300, Oleksandr Andrushchenko wrote:
I mean that we trust the backend that it can prevent Dom0
On 04/17/2018 11:57 PM, Dongwon Kim wrote:
I cannot parse the above sentence:
On Tue, Apr 17, 2018 at 09:59:28AM +0200, Daniel Vetter wrote:
3.2 Backend exports dma-buf to xen-front
On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
In this case Dom0 pages are shared with DomU. As before, DomU can only write
to these pages, not to any other page of Dom0, so it can still be considered
safe.
But the following must be considered (highlighted in xen-front's kernel
documentation):
- If the guest domain dies, then pages/grants received from the backend
  cannot be claimed back - think of it as memory lost to Dom0 (it won't be
  used for any other guest).
- A misbehaving guest may send too many requests to the backend, exhausting
  its grant references and memory (consider this from a security POV). As
  the backend runs in the trusted domain we also assume that it is trusted
  as well, e.g. it must take measures to prevent DDoS attacks.
"As the backend runs in the trusted domain we also assume that it is
trusted as well, e.g. must take measures to prevent DDoS attacks."
What's the relation between being trusted and protecting from DoS
attacks?
In any case, all? PV protocols are implemented with the frontend
sharing pages to the backend, and I think there's a reason why this
model is used, and it should continue to be used.
This is the first use-case above. But there are real-world
Having to add logic in the backend to prevent such attacks means
that:
- We need more code in the backend, which increases complexity and
  chances of bugs.
- Such code/logic could be wrong, thus allowing DoS.
You can live without this code at all, but this is then up to
4. xen-front/backend/xen-zcopy synchronization
So this zcopy thing keeps some kind of track of the memory usage? Why
Because there is no dma-buf UAPI which allows to track the buffer life cycle
4.1. As I already said in 2), all the inter-VM communication happens between
xen-front and the backend; xen-zcopy is NOT involved in that.
When xen-front wants to destroy a display buffer (dumb/dma-buf) it issues a
XENDISPL_OP_DBUF_DESTROY command (the opposite of XENDISPL_OP_DBUF_CREATE).
This call is synchronous, so xen-front expects the backend to free the
buffer pages on return.
4.2. The backend, on XENDISPL_OP_DBUF_DESTROY:
- closes all dumb handles/fd's of the buffer according to [3]
- issues the DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE ioctl to xen-zcopy to make
  sure the buffer is freed (think of it as waiting for the dma-buf->release
  callback)
can't the user-space backend keep track of the buffer usage?
A dma-buf is seen by user-space as a file descriptor and you can
I don't know much about the dma-buf implementation in Linux, but
- replies to xen-front that the buffer can be destroyed.
This way deletion of the buffer happens synchronously on both the Dom0 and
DomU sides. In case DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE returns with a
time-out error (BTW, the wait time is a parameter of this ioctl), Xen will
defer grant reference removal and will retry later until those are free.
Hope this helps understand how buffers are synchronously deleted in the
case of xen-zcopy with a single protocol command.
I think the above logic can also be re-used by the hyper-dmabuf driver with
some additional work:
1. xen-zcopy can be split into 2 parts and extended:
1.1. Xen gntdev driver [4], [5], to allow creating a dma-buf from grefs and
vice versa,
gntdev is a user-space device, and AFAICT user-space applications
don't have any notion of dma buffers. How are such buffers useful for
user-space? Why can't this just be called memory?
At the moment I can only see a Linux implementation and it seems
Also, (with my FreeBSD maintainer hat) how is this going to translate
to other OSes? So far the operations performed by the gntdev device
are mostly OS-agnostic, because they just map/unmap memory, and in fact
they are implemented by both Linux and FreeBSD.
implement a "wait" ioctl (wait for dma-buf->release): currently these are
DRM_XEN_ZCOPY_DUMB_FROM_REFS, DRM_XEN_ZCOPY_DUMB_TO_REFS and
DRM_XEN_ZCOPY_DUMB_WAIT_FREE
1.2. Xen balloon driver [6], to allow allocating contiguous buffers (not
needed by the current hyper-dmabuf, but a must for xen-zcopy use-cases)
I think this needs clarifying. In which memory space do you need those
regions to be contiguous?
Do they need to be contiguous in host physical memory, or guest
physical memory?
Host
Use-case: Dom0 has a HW driver which only works with contig memory
There are drivers/HW which can only work with contig memory and
If it's in guest memory space, isn't there any generic interface that
you can use?
If it's in host physical memory space, why do you need this buffer to
be contiguous in host physical memory space? The IOMMU should hide all
this.
Thanks, Roger.
Thank you,