Re: [PATCH 1/1] PCI: layerscape-ep: set 64-bit DMA mask

From: Frank Li
Date: Thu Sep 21 2023 - 18:10:08 EST


On Thu, Sep 21, 2023 at 10:04:31PM +0200, Christophe JAILLET wrote:
> On 21/09/2023 20:35, Frank Li wrote:
> > On Thu, Sep 21, 2023 at 07:59:51PM +0200, Christophe JAILLET wrote:
> > > On 21/09/2023 17:37, Frank Li wrote:
> > > > From: Guanhua Gao <guanhua.gao@xxxxxxx>
> > > >
> > > > Set DMA mask and coherent DMA mask to enable 64-bit addressing.
> > > >
> > > > Signed-off-by: Guanhua Gao <guanhua.gao@xxxxxxx>
> > > > Signed-off-by: Hou Zhiqiang <Zhiqiang.Hou@xxxxxxx>
> > > > Signed-off-by: Frank Li <Frank.Li@xxxxxxx>
> > > > ---
> > > > drivers/pci/controller/dwc/pci-layerscape-ep.c | 5 +++++
> > > > 1 file changed, 5 insertions(+)
> > > >
> > > > diff --git a/drivers/pci/controller/dwc/pci-layerscape-ep.c b/drivers/pci/controller/dwc/pci-layerscape-ep.c
> > > > index de4c1758a6c33..6fd0dea38a32c 100644
> > > > --- a/drivers/pci/controller/dwc/pci-layerscape-ep.c
> > > > +++ b/drivers/pci/controller/dwc/pci-layerscape-ep.c
> > > > @@ -249,6 +249,11 @@ static int __init ls_pcie_ep_probe(struct platform_device *pdev)
> > > >  	pcie->big_endian = of_property_read_bool(dev->of_node, "big-endian");
> > > > +	/* set 64-bit DMA mask and coherent DMA mask */
> > > > +	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
> > > > +		if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)))
> > >
> > > As stated in [1], dma_set_mask() with a 64-bit mask will never
> > > fail if dev->dma_mask is non-NULL.
> > >
> > > So, if it fails, the 32-bit case will also fail for the same reason.
> > > There is no need for the 2nd test.
> > >
> > >
> > > See [1] for Christoph Hellwig's comment about it.
> >
> > I don't think that is true. Below is dma_set_mask()'s implementation:
>
> I'll try to recollect a more detailed explanation from Christoph.
>
> I also checked all the paths some time ago, and the conclusion was that if
> dma_set_mask(64) failed, dma_set_mask(32) would fail for the exact same
> reasons.
>
> I'll try to find the corresponding mail and come back to you.

I went through the iommu drivers and core code carefully. You are right.
dma_supported() actually checks that the mask covers the iommu's minimum
required DMA capability.

It is quite misleading. Many drivers have code following this pattern.

An example:

static int sba_dma_supported(struct device *dev, u64 mask)
{
...
        /*
         * check if mask is >= than the current max IO Virt Address
         * The max IO Virt address will *always* < 30 bits.
         */
        return((int)(mask >= (ioc->ibase - 1 +
                        (ioc->pdir_size / sizeof(u64) * IOVP_SIZE))));
...
}

A return of 1 means supported, 0 means unsupported. Because the check is
"mask >= some minimum" (and that minimum here is always below 30 bits), a
64-bit mask can only be rejected where a 32-bit mask would be rejected too,
so falling back to 32 bits can never succeed.

So dma_set_mask(64) alone is enough. Let me send a new patch.
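
Something like this (an untested sketch of the v2 change; it assumes the
existing "ret" local in ls_pcie_ep_probe()):

        /* set 64-bit DMA mask and coherent DMA mask */
        ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
        if (ret)
                return ret;

Returning ret also propagates the error code from the DMA API instead of
hardcoding -EIO.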

Frank


>
> I don't think the implementation details have changed since then, so
> the conclusion should still be valid.
>
> Adding Christoph in cc, in case he wants to take another look at it, or if he
> beats me to finding the 1- or 2-year-old mails.
>
> CJ
>
> >
> > int dma_set_mask(struct device *dev, u64 mask)
> > {
> >         /*
> >          * Truncate the mask to the actually supported dma_addr_t width to
> >          * avoid generating unsupportable addresses.
> >          */
> >         mask = (dma_addr_t)mask;
> >
> >         if (!dev->dma_mask || !dma_supported(dev, mask))
> >                                ^^^^^^^^^^^^^
> >                 return -EIO;
> >
> >         arch_dma_set_mask(dev, mask);
> >         *dev->dma_mask = mask;
> >         return 0;
> > }
> >
> > dma_supported() may return failure.
> >
> > static int dma_supported(struct device *dev, u64 mask)
> > {
> >         const struct dma_map_ops *ops = get_dma_ops(dev);
> >
> >         /*
> >          * ->dma_supported sets the bypass flag, so we must always call
> >          * into the method here unless the device is truly direct mapped.
> >          */
> >         if (!ops)
> >                 return dma_direct_supported(dev, mask);
> >         if (!ops->dma_supported)
> >                 return 1;
> >         return ops->dma_supported(dev, mask);
> >                ^^^^^^^^^^^^^^^^^^
> >                The DMA driver or IOMMU driver may return failure here.
> > }
> >
> > Frank
> >
> > >
> > > CJ
> > >
> > >
> > > [1]: https://lkml.org/lkml/2021/6/7/398
> > >
> > > > +			return -EIO;
> > > > +
> > > >  	platform_set_drvdata(pdev, pcie);
> > > >  	ret = dw_pcie_ep_init(&pci->ep);
> > >
> >
>