Re: [PATCH net-next] page_pool: disable direct recycling based on pool->cpuid on destroy

From: Alexander Lobakin
Date: Thu Feb 15 2024 - 08:45:46 EST


From: Lorenzo Bianconi <lorenzo@xxxxxxxxxx>
Date: Thu, 15 Feb 2024 14:37:10 +0100

>> From: Toke Høiland-Jørgensen <toke@xxxxxxxxxx>
>> Date: Thu, 15 Feb 2024 13:05:30 +0100
>>
>>> Alexander Lobakin <aleksander.lobakin@xxxxxxxxx> writes:
>>>
>>>> Now that direct recycling is performed based on pool->cpuid when set,
>>>> memory leaks are possible:
>>>>
>>>> 1. A pool is destroyed.
>>>> 2. Alloc cache is emptied (it's done only once).
>>>> 3. pool->cpuid is still set.
>>>> 4. napi_pp_put_page() does direct recycling based on pool->cpuid.
>>>> 5. Now alloc cache is not empty, but it won't ever be freed.
>>>
>>> Did you actually manage to trigger this? pool->cpuid is only set for the
>>> system page pool instance which is never destroyed; so this seems a very
>>> theoretical concern?
>>
>> To both Lorenzo and Toke:
>>
>> Yes, system page pools are never destroyed, but we might later use
>> cpuid in non-persistent PPs. Then there will be memory leaks.
>> I was able to trigger this by creating bpf/test_run page_pools with the
>> cpuid set to test direct recycling of live frames.
>
> what about avoiding the page being destroyed in this case? I do not like the

I think I didn't get what you wanted to say here :s

Rewriting cpuid doesn't introduce any new checks on the hotpath.
Destroying the pool is slowpath, and we shouldn't hurt the hotpath just
to handle it.
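
For reference, the hotpath side looks roughly like this (sketch from
memory, the exact context in napi_pp_put_page() may differ a bit): the
cpuid comparison is already there, the change only annotates the read
so that the destroy-time rewrite is observed.

	/* napi_pp_put_page(), hotpath: the comparison already exists,
	 * READ_ONCE() just pairs with the rewrite done on destroy, so
	 * no new branch is added.
	 */
	allow_direct |= READ_ONCE(pp->cpuid) == smp_processor_id();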

> idea of overwriting the cpuid field for it.

We also overwrite the pp->p.napi field a couple of lines below. It
happens only when destroying the pool; we don't care about these fields
at that point.
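
I.e. the destroy path ends up looking roughly like this (sketch only,
the exact sentinel value and the surrounding NAPI unlink code may
differ from the tree):

	/* Slowpath, pool is being destroyed: shut off both ways of
	 * doing direct recycling. Paired with the READ_ONCE()s on the
	 * hotpath in napi_pp_put_page().
	 */
	WRITE_ONCE(pool->cpuid, -1);

	/* ... NAPI unlink checks elided ... */
	WRITE_ONCE(pool->p.napi, NULL);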

>
> Regards,
> Lorenzo
>
>>
>>>
>>> I guess we could still do this in case we find other uses for setting
>>> the cpuid; I don't think the addition of the READ_ONCE() will have any
>>> measurable overhead on the common arches?
>>
>> READ_ONCE() is cheap, but I thought it was worth mentioning in the
>> commitmsg anyway :)
>>
>>>
>>> -Toke

Thanks,
Olek