Re: [RFC PATCH 05/11] iommu/arm-smmu-v3: Merge a span of page to block descriptor

From: Keqian Zhu
Date: Sun Feb 07 2021 - 07:17:59 EST


Hi Robin,

On 2021/2/5 3:52, Robin Murphy wrote:
> On 2021-01-28 15:17, Keqian Zhu wrote:
>> From: jiangkunkun <jiangkunkun@xxxxxxxxxx>
>>
>> When stopping dirty log tracking, we need to recover all block
>> descriptors that were split when dirty log tracking started. This adds
>> a new interface named merge_page to the iommu layer, and arm smmuv3
>> implements it: it reinstalls block mappings and unmaps the span of page
>> mappings they replace.
>>
>> It is the caller's duty to find continuous physical memory.
>>
>> While pages are being merged, other interfaces are not expected to be
>> in use, so no race condition exists. We flush all iotlbs after the
>> merge procedure completes to ease the pressure on the iommu, as we will
>> generally merge a huge range of page mappings.
>
> Again, I think we need better reasoning than "race conditions don't exist because we don't expect them to exist".
Sure. The point is that they can't exist, not merely that we don't expect them to. ;-)

>
>> Co-developed-by: Keqian Zhu <zhukeqian1@xxxxxxxxxx>
>> Signed-off-by: Kunkun Jiang <jiangkunkun@xxxxxxxxxx>
>> ---
>>  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 20 ++++++
>>  drivers/iommu/io-pgtable-arm.c              | 78 +++++++++++++++++++++
>>  drivers/iommu/iommu.c                       | 75 ++++++++++++++++++++
>>  include/linux/io-pgtable.h                  |  2 +
>>  include/linux/iommu.h                       | 10 +++
>>  5 files changed, 185 insertions(+)
>>
>> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
>> index 5469f4fca820..2434519e4bb6 100644
>> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
>> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
>> @@ -2529,6 +2529,25 @@ static size_t arm_smmu_split_block(struct iommu_domain *domain,
>> 	return ops->split_block(ops, iova, size);
>> }
[...]

>> +
>> +size_t iommu_merge_page(struct iommu_domain *domain, unsigned long iova,
>> +			size_t size, int prot)
>> +{
>> +	phys_addr_t phys;
>> +	dma_addr_t p, i;
>> +	size_t cont_size, merged_size;
>> +	size_t merged = 0;
>> +
>> +	while (size) {
>> +		phys = iommu_iova_to_phys(domain, iova);
>> +		cont_size = PAGE_SIZE;
>> +		p = phys + cont_size;
>> +		i = iova + cont_size;
>> +
>> +		while (cont_size < size && p == iommu_iova_to_phys(domain, i)) {
>> +			p += PAGE_SIZE;
>> +			i += PAGE_SIZE;
>> +			cont_size += PAGE_SIZE;
>> +		}
>> +
>> +		merged_size = __iommu_merge_page(domain, iova, phys, cont_size,
>> +						 prot);
>
> This is incredibly silly. The amount of time you'll spend just on walking the tables in all those iova_to_phys() calls is probably significantly more than it would take the low-level pagetable code to do the entire operation for itself. At this level, any knowledge of how mappings are actually constructed is lost once __iommu_map() returns, so we just don't know, and for this operation in particular there seems little point in trying to guess - the driver backend still has to figure out whether something we *think* might be mergeable actually is, so it's better off doing the entire operation in a single pass by itself.
>
> There's especially little point in starting all this work *before* checking that it's even possible...
>
> Robin.

Well, this does look silly indeed. But the iova->phys info is only stored in the pgtable, and there seems to be no other way to find continuous physical addresses :-( (actually, vfio_iommu_replay() has similar logic).

We put the procedure for finding continuous physical addresses in the common iommu layer because this logic is common to all types of iommu driver.

If a vendor iommu driver thinks (iova, phys, cont_size) is not mergeable, it can make its own decision about how to map the range. This is the same as iommu_map(), which provides (iova, paddr, pgsize) to the vendor driver and lets the driver decide how to map it.
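
To make that contract concrete: the core side is meant to mirror __iommu_map(), i.e. split the physically contiguous range by the page sizes the driver advertises and let the driver's ->merge_page() callback merge less (or nothing) if it decides the range is not really mergeable. Roughly like below (a simplified sketch of the idea, not the exact code in the patch; iommu_pgsize() is the existing helper in drivers/iommu/iommu.c):

/* Simplified sketch only; the real __iommu_merge_page() in the patch
 * may differ in detail.
 */
static size_t __iommu_merge_page(struct iommu_domain *domain,
				 unsigned long iova, phys_addr_t phys,
				 size_t size, int prot)
{
	const struct iommu_ops *ops = domain->ops;
	size_t merged = 0;

	if (!ops->merge_page)
		return 0;

	while (size) {
		/* Largest supported page/block size for this step. */
		size_t pgsize = iommu_pgsize(domain, iova | phys, size);
		size_t ret;

		/* The driver may merge less than pgsize, or nothing at all. */
		ret = ops->merge_page(domain, iova, phys, pgsize, prot);
		merged += ret;
		if (ret != pgsize)
			break;

		iova += pgsize;
		phys += pgsize;
		size -= pgsize;
	}

	return merged;
}

So the expensive part is only the contiguity scan; the per-range decision stays entirely in the vendor driver, exactly as it does for map.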

Do I understand your idea correctly?
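
For completeness, the expected use on the caller's side once dirty log tracking stops is just a single call over the tracked range, with the one iotlb flush happening inside iommu_merge_page(). The names dirty_iova, dirty_size and prot below are stand-ins for whatever state the caller keeps:

	size_t done;

	/* Illustrative caller only, not part of this patch. */
	done = iommu_merge_page(domain, dirty_iova, dirty_size, prot);
	if (done != dirty_size)
		pr_warn("iommu: only merged %zu of %zu bytes\n",
			done, dirty_size);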

Thanks,
Keqian
>
>> +		iova += merged_size;
>> +		size -= merged_size;
>> +		merged += merged_size;
>> +
>> +		if (merged_size != cont_size)
>> +			break;
>> +	}
>> +	iommu_flush_iotlb_all(domain);
>> +
>> +	return merged;
>> +}
>> +EXPORT_SYMBOL_GPL(iommu_merge_page);
>> +
>> void iommu_get_resv_regions(struct device *dev, struct list_head *list)
>> {
>> 	const struct iommu_ops *ops = dev->bus->iommu_ops;
>> diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
>> index b87c6f4ecaa2..754b62a1bbaf 100644
>> --- a/include/linux/io-pgtable.h
>> +++ b/include/linux/io-pgtable.h
>> @@ -164,6 +164,8 @@ struct io_pgtable_ops {
>> 				    unsigned long iova);
>> 	size_t (*split_block)(struct io_pgtable_ops *ops, unsigned long iova,
>> 			      size_t size);
>> +	size_t (*merge_page)(struct io_pgtable_ops *ops, unsigned long iova,
>> +			     phys_addr_t phys, size_t size, int prot);
>> };
>>
>> /**
>> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
>> index abeb811098a5..ac2b0b1bce0f 100644
>> --- a/include/linux/iommu.h
>> +++ b/include/linux/iommu.h
>> @@ -260,6 +260,8 @@ struct iommu_ops {
>> 			       enum iommu_attr attr, void *data);
>> 	size_t (*split_block)(struct iommu_domain *domain, unsigned long iova,
>> 			      size_t size);
>> +	size_t (*merge_page)(struct iommu_domain *domain, unsigned long iova,
>> +			     phys_addr_t phys, size_t size, int prot);
>>
>> 	/* Request/Free a list of reserved regions for a device */
>> 	void (*get_resv_regions)(struct device *dev, struct list_head *list);
>> @@ -513,6 +515,8 @@ extern int iommu_domain_set_attr(struct iommu_domain *domain, enum iommu_attr,
>> 				 void *data);
>> extern size_t iommu_split_block(struct iommu_domain *domain, unsigned long iova,
>> 				size_t size);
>> +extern size_t iommu_merge_page(struct iommu_domain *domain, unsigned long iova,
>> +			       size_t size, int prot);
>>
>> /* Window handling function prototypes */
>> extern int iommu_domain_window_enable(struct iommu_domain *domain, u32 wnd_nr,
>> @@ -913,6 +917,12 @@ static inline size_t iommu_split_block(struct iommu_domain *domain,
>> 	return 0;
>> }
>>
>> +static inline size_t iommu_merge_page(struct iommu_domain *domain,
>> +				      unsigned long iova, size_t size, int prot)
>> +{
>> +	return -EINVAL;
>> +}
>> +
>> static inline int iommu_device_register(struct iommu_device *iommu)
>> {
>> 	return -ENODEV;
>>
> .
>