Re: [PATCH 2/5] mm, memcg: narrow the scope of percpu_charge_mutex

From: Miaohe Lin
Date: Mon Aug 02 2021 - 05:54:17 EST


On 2021/8/2 14:49, Michal Hocko wrote:
> On Sat 31-07-21 10:29:52, Miaohe Lin wrote:
>> On 2021/7/30 14:50, Michal Hocko wrote:
>>> On Thu 29-07-21 20:06:45, Roman Gushchin wrote:
>>>> On Thu, Jul 29, 2021 at 08:57:52PM +0800, Miaohe Lin wrote:
>>>>> Since percpu_charge_mutex is only used inside drain_all_stock(), we can
>>>>> narrow its scope by moving the definition into that function.
>>>>>
>>>>> Signed-off-by: Miaohe Lin <linmiaohe@xxxxxxxxxx>
>>>>> ---
>>>>> mm/memcontrol.c | 2 +-
>>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>>>>> index 6580c2381a3e..a03e24e57cd9 100644
>>>>> --- a/mm/memcontrol.c
>>>>> +++ b/mm/memcontrol.c
>>>>> @@ -2050,7 +2050,6 @@ struct memcg_stock_pcp {
>>>>> #define FLUSHING_CACHED_CHARGE 0
>>>>> };
>>>>> static DEFINE_PER_CPU(struct memcg_stock_pcp, memcg_stock);
>>>>> -static DEFINE_MUTEX(percpu_charge_mutex);
>>>>>
>>>>> #ifdef CONFIG_MEMCG_KMEM
>>>>> static void drain_obj_stock(struct obj_stock *stock);
>>>>> @@ -2209,6 +2208,7 @@ static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
>>>>> */
>>>>> static void drain_all_stock(struct mem_cgroup *root_memcg)
>>>>> {
>>>>> + static DEFINE_MUTEX(percpu_charge_mutex);
>>>>> int cpu, curcpu;
>>>>
>>>> It's considered good practice to protect data rather than code paths. After
>>>> the proposed change it becomes obvious that the opposite is done here: the
>>>> mutex is used to prevent simultaneous execution of the drain_all_stock()
>>>> function.
>>>
>>> The purpose of the lock was indeed to orchestrate callers rather than to
>>> ensure the consistency of any data structure.
>>>
>>>> Actually we don't need a mutex here: nobody ever sleeps on it. So I'd replace
>>>> it with a simple atomic variable or even a single bitfield. Then the change will
>>>> be better justified, IMO.
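
For the "single bitfield" variant, I guess it would look something like the
untested sketch below (drain_flags and the bit number are made-up names, and
the drain loop itself is elided):

	static unsigned long drain_flags;

	/* Only one drainer at a time; latecomers just back off. */
	if (test_and_set_bit(0, &drain_flags))
		return;
	/* ... drain the per-cpu stocks as before ... */
	clear_bit(0, &drain_flags);
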
>>>
>>> Yes, mutex can be replaced by an atomic in a follow up patch.
>>>
>>
>> Thanks to both of you. That's a really good suggestion. Do you mean something like the diff below?
>>
>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>> index 616d1a72ece3..508a96e80980 100644
>> --- a/mm/memcontrol.c
>> +++ b/mm/memcontrol.c
>> @@ -2208,11 +2208,11 @@ static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
>> */
>> static void drain_all_stock(struct mem_cgroup *root_memcg)
>> {
>> - static DEFINE_MUTEX(percpu_charge_mutex);
>> int cpu, curcpu;
>> + static atomic_t drain_all_stocks = ATOMIC_INIT(-1);
>> /* If someone's already draining, avoid adding running more workers. */
>> - if (!mutex_trylock(&percpu_charge_mutex))
>> + if (!atomic_inc_not_zero(&drain_all_stocks))
>> return;
>> /*
>> * Notify other cpus that system-wide "drain" is running
>> @@ -2244,7 +2244,7 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
>> }
>> }
>> put_cpu();
>> - mutex_unlock(&percpu_charge_mutex);
>> + atomic_dec(&drain_all_stocks);
>
> Yes, this would work. I would just s@drain_all_stocks@drainers@ or
> something similar to better express the intention.
>

Sounds good. Will do it in v2. Many thanks.
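
For reference, with that rename the check in drain_all_stock() would then
read roughly as below (untested). The counter starts at -1, so the first
caller bumps it to 0 and proceeds; while it is draining, any concurrent
caller fails atomic_inc_not_zero() and backs off, until the drainer drops
the counter back to -1 with atomic_dec():

	static atomic_t drainers = ATOMIC_INIT(-1);

	/* If someone's already draining, back off. */
	if (!atomic_inc_not_zero(&drainers))
		return;
	/* ... drain the per-cpu stocks as before ... */
	atomic_dec(&drainers);
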

>> }
>>
>> static int memcg_hotplug_cpu_dead(unsigned int cpu)
>