Re: [PATCH v7 02/24] x86/resctrl: kfree() rmid_ptrs from rdtgroup_exit()

From: Reinette Chatre
Date: Thu Dec 14 2023 - 14:06:44 EST


Hi James,

On 12/14/2023 10:28 AM, James Morse wrote:
> Hi Reinette,
>
> On 13/12/2023 23:27, Reinette Chatre wrote:
>> Hi James,
>>
>> On 12/13/2023 10:03 AM, James Morse wrote:
>>> On 09/11/2023 17:39, Reinette Chatre wrote:
>>>> On 10/25/2023 11:03 AM, James Morse wrote:
>>
>> ...
>>
>>>>> diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
>>>>> index 19e0681f0435..0056c9962a44 100644
>>>>> --- a/arch/x86/kernel/cpu/resctrl/core.c
>>>>> +++ b/arch/x86/kernel/cpu/resctrl/core.c
>>>>> @@ -992,7 +992,13 @@ late_initcall(resctrl_late_init);
>>>>>
>>>>>  static void __exit resctrl_exit(void)
>>>>>  {
>>>>> +	struct rdt_resource *r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
>>>>> +
>>>>>  	cpuhp_remove_state(rdt_online);
>>>>> +
>>>>> +	if (r->mon_capable)
>>>>> +		rdt_put_mon_l3_config(r);
>>>>> +
>>>>>  	rdtgroup_exit();
>>>>>  }
>>>>
>>>> I expect cleanup to do the inverse of init. I do not know what was the
>>>> motivation for the rdtgroup_exit() to follow cpuhp_remove_state()
>>>
>>> This will invoke the hotplug callbacks, making it look to resctrl like all CPUs are
>>> offline. This means it is then impossible for rdtgroup_exit() to race with the hotplug
>>> notifiers. (if you could run this code...)
>
>> hmmm ... if there is a risk of such a race, would the init code not also be
>> vulnerable to it, with the notifiers up before rdtgroup_init()?
>
> Nope, because this array is allocated behind rdt_get_mon_l3_config(), which ultimately
> comes from get_rdt_resources() in resctrl_late_init() - which calls cpuhp_setup_state()
> after all this init work has been done.
>
> (cpu hp always gives me a headache!)

Right. My comment was specifically about rdtgroup_init(): I was trying to understand your
view of its races with the hotplug notifiers, in response to your comment about the hotplug
notifiers racing with rdtgroup_exit().

The current order you mention, where the state is initialized before the hotplug notifiers
that need it are registered, is sane and implies that teardown should follow the inverse order.
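
To be concrete, the init ordering I have in mind is roughly the sketch below (my reading of
core.c, simplified, not the exact code):

  resctrl_late_init()
    get_rdt_resources()
      rdt_get_mon_l3_config()	/* allocates rmid_ptrs[] */
    cpuhp_setup_state()		/* hotplug callbacks registered, state above already exists */
    rdtgroup_init()		/* filesystem registered last */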

>> The races you mention
>> are not obvious to me. I see the filesystem and hotplug code protected against races via
>> the mutex and static keys. Could you please elaborate on the flows of concern?
>
> Functions like __check_limbo() (calling __rmid_entry()) are called under the
> rdtgroup_mutex, but they don't consider that rmid_ptrs[] may be NULL.
>
> But this could only happen if the limbo work ran after cpuhp_remove_state() - this can't
> happen because the hotplug callbacks cancel the limbo work, and won't reschedule it if the
> domain is going offline.
>
>
> The only other path is via free_rmid(); I've not thought too much about this, as
> resctrl_exit() can't actually be invoked - this code is discarded by the linker.
>
> It could be run on MPAM, but only in response to an 'error interrupt' (which is optional)
> - and all the MPAM error interrupts indicate a software bug.
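
Thanks, that clarifies the limbo path. If I follow, the hazard on that path is essentially
the lookup below (paraphrasing __rmid_entry() from memory, so treat it as a sketch) running
after the array has been freed:

	entry = &rmid_ptrs[rmid];
	WARN_ON(entry->rmid != rmid);	/* dereferences a freed/NULL rmid_ptrs[] */

and, as you say, that cannot happen once cpuhp_remove_state() has cancelled the limbo and
overflow work.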

This still just considers the resctrl state and hotplug notifiers.

I am clearly missing something. It is still not clear to me how this connects to your earlier
comment about races with the rdtgroup_exit() code, that is, how the hotplug notifiers race with
the filesystem register/unregister code.

>
> I've only invoked this path once, and rdtgroup_exit()'s unregister_filesystem() didn't
> remove all the files. I anticipate digging into this teardown code more once the bulk of
> the MPAM driver is upstream.
>
>
>> I am not advocating for cpuhp_remove_state() to be called later. I understand that
>> it simplifies the flows to consider.
>>
>>>> but I
>>>> was expecting this new cleanup to be done after rdtgroup_exit(), to be the inverse
>>>> of init. This cleanup is inserted in the middle of two existing cleanups - could
>>>> you please elaborate on how this location was chosen?
>>>
>>> rdtgroup_exit() does nothing with the resctrl structures, it removes sysfs and debugfs
>>> entries, and unregisters the filesystem.
>>>
>>> Hypothetically, you can't observe any effect of the rmid_ptrs array being freed as
>>> all the CPUs are offline and the overflow/limbo threads should have been cancelled.
>>> Once cpuhp_remove_state() has been called, this really doesn't matter.
>
>> Sounds like nothing prevents this code from following the custom of cleanup being the
>> inverse of init (while keeping cpuhp_remove_state() first).
>
> I'll put the rdt_put_mon_l3_config() call after rdtgroup_exit()...

thank you
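
Just to be sure we mean the same thing, I expect the end result to look something like the
sketch below (using the names from this patch, with rdt_put_mon_l3_config() moved after
rdtgroup_exit() and cpuhp_remove_state() still first):

static void __exit resctrl_exit(void)
{
	struct rdt_resource *r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;

	cpuhp_remove_state(rdt_online);

	rdtgroup_exit();

	if (r->mon_capable)
		rdt_put_mon_l3_config(r);
}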

Reinette