Re: [PATCH v5 6/9] slub: Delay freezing of partial slabs

From: Chengming Zhou
Date: Wed Nov 22 2023 - 06:54:36 EST


On 2023/11/22 19:40, Vlastimil Babka wrote:
> On 11/22/23 12:35, Chengming Zhou wrote:
>> On 2023/11/22 17:37, Vlastimil Babka wrote:
>>> On 11/20/23 19:49, Mark Brown wrote:
>>>> On Thu, Nov 02, 2023 at 03:23:27AM +0000, chengming.zhou@xxxxxxxxx wrote:
>>>>> From: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>
>>>>>
>>>>> Now we freeze slabs when moving them from the node partial list to the
>>>>> cpu partial list; this method needs two cmpxchg_double operations:
>>>>>
>>>>> 1. freeze slab (acquire_slab()) under the node list_lock
>>>>> 2. get_freelist() when it is picked for use in ___slab_alloc()
>>>>
>>>> Recently -next has been failing to boot on a Raspberry Pi 3 with an arm
>>>> multi_v7_defconfig and a NFS rootfs, a bisect appears to point to this
>>>> patch (in -next as c8d312e039030edab25836a326bcaeb2a3d4db14) as having
>>>> introduced the issue. I've included the full bisect log below.
>>>>
>>>> When the problem occurs we see RCU stalls while logging in, for example:
>>>
>>> Can you try this, please?
>>>
>>
>> Great! I manually disabled __CMPXCHG_DOUBLE to reproduce the problem,
>> and this patch fixes the machine hang.
>>
>> BTW, I also ran the performance test case on a machine with 128 CPUs.
>>
>> stress-ng --rawpkt 128 --rawpkt-ops 100000000
>>
>> base      patched
>> 2.22s     2.35s
>> 2.21s     3.14s
>> 2.19s     4.75s
>>
>> I found the performance numbers of this atomic version are not stable.
>
> That's weirdly bad. Is that measured also with __CMPXCHG_DOUBLE
> disabled, or just the patch? The PG_workingset flag change should be

The performance test was with just the patch applied, without disabling
__CMPXCHG_DOUBLE.

> uncontended as we are doing it under list_lock, and with __CMPXCHG_DOUBLE
> there should be no interference from PG_locked.
>

Yes, I don't know. Maybe it's related to my kernel config, making the
atomic operations much more expensive? I'll look into it again.
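
For reference, the gap I'd expect is just a locked RMW vs a plain one. A
minimal userspace sketch of the two flavors (my own names, not the actual
kernel helpers):

#include <limits.h>

#define BITS_PER_LONG	(sizeof(unsigned long) * CHAR_BIT)

/* Like __set_bit(): plain load/modify/store, no lock prefix. */
static inline void sketch_set_bit(unsigned long nr, unsigned long *addr)
{
	addr[nr / BITS_PER_LONG] |= 1UL << (nr % BITS_PER_LONG);
}

/* Like set_bit(): atomic RMW ("lock bts"/"lock or" on x86), unordered. */
static inline void sketch_set_bit_atomic(unsigned long nr, unsigned long *addr)
{
	__atomic_fetch_or(&addr[nr / BITS_PER_LONG],
			  1UL << (nr % BITS_PER_LONG), __ATOMIC_RELAXED);
}

The locked variant additionally has to hold the cache line exclusively,
which is where the cost shows up under contention.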

I also tested the atomic-optional version below and found its performance
numbers much more stable.

diff --git a/mm/slub.c b/mm/slub.c
index a307d319e82c..e11d34d51a14 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -531,7 +531,7 @@ static __always_inline void slab_unlock(struct slab *slab)
 	struct page *page = slab_page(slab);
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
-	__bit_spin_unlock(PG_locked, &page->flags);
+	bit_spin_unlock(PG_locked, &page->flags);
 }
 
 static inline bool
@@ -2136,12 +2136,18 @@ static inline bool slab_test_node_partial(const struct slab *slab)
 
 static inline void slab_set_node_partial(struct slab *slab)
 {
-	__set_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
+	if (slab->slab_cache->flags & __CMPXCHG_DOUBLE)
+		__set_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
+	else
+		set_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
 }
 
 static inline void slab_clear_node_partial(struct slab *slab)
 {
-	__clear_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
+	if (slab->slab_cache->flags & __CMPXCHG_DOUBLE)
+		__clear_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
+	else
+		clear_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
 }
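
To spell out why the plain __set_bit()/__clear_bit() are only safe with
__CMPXCHG_DOUBLE, as I understand it: in that case slab_lock()/slab_unlock()
are never used for the cache, so the flags word is only written under
list_lock. Without __CMPXCHG_DOUBLE, the non-atomic RMW hidden in
__bit_spin_unlock() on one CPU can race with a PG_workingset update done
under list_lock on another CPU and lose one of the updates, roughly:

/*
 * Hypothetical lost-update interleaving with both sides non-atomic:
 *
 *   CPU0 (slab_unlock())                  CPU1 (holds list_lock)
 *   old = page->flags;
 *                                         page->flags |= 1UL << PG_workingset;
 *   page->flags = old & ~(1UL << PG_locked);  /* CPU1's bit is lost */
 */

That's why both sides need the atomic variants whenever slab_lock() can be
in use.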