Re: [PATCH v3 0/2] iommu/iova: Make the rcache depot properly flexible

From: Robin Murphy
Date: Wed Jan 10 2024 - 07:48:29 EST


On 2024-01-09 5:21 pm, Ido Schimmel wrote:
> Hi Robin,
>
> Thanks for the reply.
>
> On Mon, Jan 08, 2024 at 05:35:26PM +0000, Robin Murphy wrote:
> > Hmm, we've got what looks to be a set of magazines forming a plausible
> > depot list (or at least the tail end of one):
> >
> > ffff8881411f9000 -> ffff8881261c1000
> > ffff8881261c1000 -> ffff88812be26400
> > ffff88812be26400 -> ffff8188392ec000
> > ffff8188392ec000 -> ffff8881a5301000
> > ffff8881a5301000 -> NULL
> >
> > which I guess has somehow become detached from its rcache->depot without
> > being freed properly? However I'm struggling to see any conceivable way
> > that could happen which wouldn't already be more severely broken in other
> > ways as well (i.e. either general memory corruption or someone somehow
> > still trying to use the IOVA domain while it's being torn down).
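
[ For reference, since the dump above may otherwise look a bit cryptic:
with the flexible depot from patch #1, those addresses are iova_magazines
chained through a "next" pointer which shares storage with the size
field; roughly, from memory:

struct iova_magazine {
	union {
		unsigned long size;
		struct iova_magazine *next;
	};
	unsigned long pfns[IOVA_MAG_SIZE];
};

so each line of the dump is one magazine pointing at the next one down
the depot list. ]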

> The machine is running a debug kernel that among other things has KASAN
> enabled, but there are no traces in the kernel log so there is no memory
> corruption that I'm aware of.

> > Out of curiosity, does reverting just patch #2 alone make a difference?

> Will try and let you know.

> > And is your workload doing anything "interesting" in relation to IOVA
> > domain lifetimes, like creating and destroying SR-IOV virtual
> > functions, changing IOMMU domain types via sysfs, or using that
> > horrible vdpa thing, or are you seeing this purely from regular driver
> > DMA API usage?

> The machine is running networking related tests, but it is not using
> SR-IOV, VMs or VDPA so there shouldn't be anything "interesting" as far
> as IOMMU is concerned.
>
> The two networking drivers on the machine are "igb" for the management
> port and "mlxsw" for the data ports (the machine is a physical switch).
> I believe the DMA API usage in the latter is quite basic and I don't
> recall any DMA related problems with this driver since it was first
> accepted upstream in 2015.

Thanks for the clarifications; that seems to rule out all the most
confusingly impossible scenarios, at least.

The best explanation I've managed to come up with is a false-positive
race that depends on the order in which kmemleak scans the relevant
objects. Say we have the list depot -> A -> B -> C: kmemleak scans the
rcache object and sees the pointer to magazine A, but A is then popped
*before* kmemleak gets to it, so by the time A is scanned its "next"
pointer has already been wiped. Kmemleak therefore never observes any
reference to B, and B and (transitively) C appear to be "leaked".

If that is the case, I'd expect it to be reproducible with patch #1
alone (although patch #2 might make it slightly more likely, if the work
ever does result in additional pops happening), but I'd also expect the
"leaked" objects to be transient rather than persisting through repeated
scans (what I don't know is whether kmemleak automatically un-leaks an
object if it subsequently finds a new reference to it, or whether it
needs clearing manually between scans).

I'm not sure there's a nice way to make that any better... unless maybe
it makes sense to call kmemleak_not_leak(mag->next) in iova_depot_pop()
before that reference disappears?
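
In code terms I mean something like the below; this is just a sketch
from memory of the depot pop path from this series, with the
kmemleak_not_leak() call (and the linux/kmemleak.h include it needs)
being the new, speculative part:

#include <linux/kmemleak.h>

static struct iova_magazine *iova_depot_pop(struct iova_rcache *rcache)
{
	struct iova_magazine *mag = rcache->depot;

	/*
	 * A concurrent kmemleak scan may have already seen "mag" via
	 * rcache->depot, but the mag->size assignment below reuses the
	 * storage of mag->next, so the only reference to the rest of
	 * the chain can vanish before the scan reaches "mag" itself.
	 * Flag the next magazine so a racing scan doesn't report it
	 * (and everything behind it) as leaked.
	 */
	kmemleak_not_leak(mag->next);

	rcache->depot = mag->next;
	mag->size = IOVA_MAG_SIZE;
	rcache->depot_size--;

	return mag;
}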

Thanks,
Robin.