Re: [RFC PATCH v1 3/8] iommu/arm-smmu-v3-sva: Allocate new ASID from installed_smmus

From: Jason Gunthorpe
Date: Mon Aug 21 2023 - 09:50:16 EST


On Mon, Aug 21, 2023 at 09:38:40PM +0800, Michael Shavit wrote:
> On Mon, Aug 21, 2023 at 7:54 PM Jason Gunthorpe <jgg@xxxxxxxxxx> wrote:
> >
> > On Mon, Aug 21, 2023 at 05:31:23PM +0800, Michael Shavit wrote:
> > > On Fri, Aug 18, 2023 at 2:38 AM Jason Gunthorpe <jgg@xxxxxxxxxx> wrote:
> > > >
> > > > On Fri, Aug 18, 2023 at 02:16:25AM +0800, Michael Shavit wrote:
> > > > > Pick an ASID that is within the supported range of all SMMUs that the
> > > > > domain is installed to.
> > > > >
> > > > > Signed-off-by: Michael Shavit <mshavit@xxxxxxxxxx>
> > > > > ---
> > > >
> > > > This seems like a pretty niche scenario, maybe we should just keep a
> > > > global for the max ASID?
> > > >
> > > > Otherwise we need code to change the ASID, even for non-SVA domains,
> > > > when the domain is installed in different devices and the current ASID
> > > > is over the instance max.
> > >
> > > This RFC took the other easy way out for this problem by rejecting
> > > attaching a domain if its currently assigned ASID/VMID
> > > is out of range when attaching to a new SMMU. But I'm not sure
> > > which of the two options is the right trade-off.
> > > Especially if we move VMID to a global allocator (which I plan to add
> > > for v2), setting a global maximum for VMID of 256 sounds small.
> >
> > IMHO the simplest and best thing is to make both the VMID and ASID
> > allocators local. Then a lot of these problems disappear
>
> Well, that does sound like the most flexible option, but IMO it's quite
> a lot more complicated.
>
> I'll post a v2 RFC that removes the `iommu/arm-smmu-v3: Add list of
> installed_smmus` patch and uses a flat master list in smmu_domain as
> suggested by Robin, for comparison with the v1. But at a glance using a
> local allocator would require:

> 1. Keeping that patch so we can track the asid/vmid for a domain on a
> per smmu instance

You'd have to store the cache tag in the per-master struct on that
list and take it out of the domain struct.

I.e. the list of attached masters contains the per-master cache tag
instead of a global cache tag.

The only place you need the cache tag is when iterating over the list
of masters, so it is OK.

If the list of masters is sorted by smmu then the first master of each
smmu can be used to perform the cache tag invalidation, then the rest
of the list is the ATC invalidation.

The looping code will be a bit ugly.

> 2. Keeping a map in the smmu struct so that arm_smmu_share_asid can
> find any arm_smmu_installed_smmu that need to have their asid
> updated

Yes, the global xarray moves into the smmu

> (in a loop over every smmu the domain is attached to in
> arm_smmu_mmu_notifier_get, which at a glance looks headache inducing
> because of sva's piggybacking on the rid domain.)

Not every smmu, just the one you are *currently* attaching to. We
don't care if the *other* smmus have different ASIDs; maybe they are
not using BTM, or won't use SVA.

We care that *our* smmu has the right ASID when we go to attach the
domain.

So the logic is not really any different: you already know the smmu
because it is the one attaching, and you do the same locking/xarray
work as was done globally, just against this local smmu.

Jason