Re: [PATCH v2 1/4] x86/mm: Export force_dma_unencrypted

From: Andy Lutomirski
Date: Tue Sep 03 2019 - 17:41:20 EST


On Tue, Sep 3, 2019 at 1:46 PM Thomas Hellström (VMware)
<thomas_os@xxxxxxxxxxxx> wrote:
>
> On 9/3/19 6:22 PM, Christoph Hellwig wrote:
> > On Tue, Sep 03, 2019 at 04:32:45PM +0200, Thomas Hellström (VMware) wrote:
> >> Is this a layer violation concern, that is, would you be ok with a similar
> >> helper for TTM, or is it that you want to force the graphics drivers into
> >> adhering strictly to the DMA api, even when it from an engineering
> >> perspective makes no sense?
> > From looking at DRM I strongly believe that making DRM use the DMA
> > mapping API properly makes a lot of sense from the engineering
> > perspective, and this series is a good argument for that position.
>
> What I mean by "from an engineering perspective" is that drivers would
> end up with a non-trivial amount of code supporting purely academic
> cases: setups where software rendering would be faster than GPU
> acceleration, and setups on platforms where the driver would never run
> anyway because the device will never be supported on that platform...
>
> > If DRM were using
> > the DMA API properly we would not need this series to start with;
> > all the SEV handling is hidden behind the DMA API. While we had
> > occasional bugs in that support, fixing them meant the fix covered
> > all drivers properly using that API.
>
> That is not really true. The DMA API can't handle faulting of coherent
> pages, which is what this series is really about supporting, also with
> SEV active: handling the case where we move graphics buffers or send
> them to swap space while user-space has them mapped.
>
> To do that and still be fully DMA API compliant we would ideally need,
> for example, an exported dma_pgprot(). (dma_pgprot(), by the way, is
> still suffering from one of the bugs you mention above.)
>
> Still, I need a way forward and my questions weren't really answered by
> this.
>
>
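
For illustration, a minimal sketch of how an exported dma_pgprot() might
be used on the fault path; ttm_bo_fault_prot() and the
coherent_dma_backed flag are hypothetical names, and this assumes
dma_pgprot() were actually exported:

#include <linux/dma-noncoherent.h>	/* dma_pgprot() */
#include <linux/mm.h>

static pgprot_t ttm_bo_fault_prot(struct device *dev,
                                  struct vm_area_struct *vma,
                                  bool coherent_dma_backed)
{
        pgprot_t prot = vm_get_page_prot(vma->vm_flags);

        /*
         * For buffers backed by coherent DMA memory, the DMA layer
         * (not the driver) knows whether the user mapping must be
         * decrypted and/or uncached, e.g. with SEV active.
         */
        if (coherent_dma_backed)
                prot = dma_pgprot(dev, prot, 0);

        return prot;
}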

I read this patch, I read force_dma_unencrypted(), I read the changelog
again, and I haven't the faintest clue what TTM could possibly be
doing with force_dma_unencrypted().

You're saying that TTM needs to transparently change mappings to
relocate objects between system memory and device memory.
Great, I don't see the problem. Is the issue that you need to
allocate system memory that is addressable by the GPU and that, if the
GPU has insufficient PA bits, you need unencrypted memory? If so,
this sounds like an excellent use for the DMA API. Rather than
kludging direct knowledge of force_dma_unencrypted() into the driver,
can't you at least add, if needed, a new helper specifically to
allocate memory that can be addressed by the device, like
dma_alloc_coherent()? Or, if for some reason dma_alloc_coherent()
doesn't do what you need or your driver isn't ready to use it, then
explain *why* and introduce a new function to solve your problem?
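
Purely as a sketch, with a made-up function name: the simple allocation
case can stay entirely behind the DMA API, which already returns memory
the device can reach without the driver ever touching
force_dma_unencrypted():

#include <linux/dma-mapping.h>
#include <linux/gfp.h>

static void *example_alloc_gpu_visible(struct device *dev, size_t size,
                                       dma_addr_t *dma_handle)
{
        /*
         * The DMA layer decides internally whether the allocation must
         * be decrypted (SEV) and whether it has to sit below the
         * device's addressing limit; the driver only sees a usable
         * buffer and its DMA address.
         */
        return dma_alloc_coherent(dev, size, dma_handle, GFP_KERNEL);
}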

Keep in mind that, depending on just how MKTME ends up being supported
in Linux, it's entirely possible that it will be *backwards* from what
you expect -- high address bits will be needed to ask for
*unencrypted* memory.

--Andy