Re: [RFC PATCH v4 7/9] slub: Optimize deactivate_slab()

From: Vlastimil Babka
Date: Wed Nov 01 2023 - 09:21:11 EST

On 10/31/23 15:07, chengming.zhou@xxxxxxxxx wrote:
> From: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>
>
> Since the introduction of unfrozen slabs on the cpu partial list, we
> don't need to synchronize the slab frozen state under the node
> list_lock.
>
> The caller of deactivate_slab() and the caller of __slab_free() won't
> manipulate the slab list concurrently.
>
> So we can take the node list_lock only in the last stage, when we
> really need to manipulate the slab list in this path.
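
The resulting last stage of deactivate_slab() then looks roughly like
this (a condensed sketch, not the verbatim patched code; helper names
as in mm/slub.c, fragment of the function body only):

	/*
	 * Stage three: manipulate the slab list based on the new state
	 * that was computed lock-free above.
	 */
	if (!new.inuse && n->nr_partial >= s->min_partial) {
		/* Slab is empty and the partial list is full: free it. */
		stat(s, DEACTIVATE_EMPTY);
		discard_slab(s, slab);
		stat(s, FREE_SLAB);
	} else if (new.freelist) {
		/*
		 * Only here, when the slab actually goes onto the node
		 * partial list, is the list_lock taken.
		 */
		spin_lock_irqsave(&n->list_lock, flags);
		add_partial(n, slab, tail);
		spin_unlock_irqrestore(&n->list_lock, flags);
		stat(s, tail);
	} else {
		stat(s, DEACTIVATE_FULL);
	}

Note there is no M_* mode bookkeeping left: the placement decision and
the list manipulation happen together, under the lock, at the very end.
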
>
> Signed-off-by: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>

Reviewed-by: Vlastimil Babka <vbabka@xxxxxxx>

> ---
> mm/slub.c | 76 +++++++++++++++++++------------------------------------
> 1 file changed, 26 insertions(+), 50 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index bcb5b2c4e213..c429f8baba5f 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2468,10 +2468,8 @@ static void init_kmem_cache_cpus(struct kmem_cache *s)
> static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
> void *freelist)
> {
> - enum slab_modes { M_NONE, M_PARTIAL, M_FREE, M_FULL_NOLIST };
> struct kmem_cache_node *n = get_node(s, slab_nid(slab));
> int free_delta = 0;
> - enum slab_modes mode = M_NONE;
> void *nextfree, *freelist_iter, *freelist_tail;
> int tail = DEACTIVATE_TO_HEAD;
> unsigned long flags = 0;
> @@ -2512,62 +2510,40 @@ static void deactivate_slab(struct kmem_cache *s, struct slab *slab,
> *
> * Ensure that the slab is unfrozen while the list presence
> * reflects the actual number of objects during unfreeze.

I think we can also delete these two lines. If there's no other reason
for a v5, I can do it when merging the series.
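
For context, those two lines are stale because the unfreeze itself is
now a plain lock-free cmpxchg retry loop, with no list presence left to
keep in sync with it. Roughly (a condensed sketch assuming the
slab_update_freelist() helper from the current tree; debug-only paths
trimmed):

	/*
	 * Stage two: unfreeze the slab while splicing the per-cpu
	 * freelist onto the head of the slab's freelist. No list_lock
	 * is involved; a failed cmpxchg simply retries.
	 */
	do {
		old.freelist = READ_ONCE(slab->freelist);
		old.counters = READ_ONCE(slab->counters);
		VM_BUG_ON(!old.frozen);

		new.counters = old.counters;
		new.frozen = 0;
		if (freelist_tail) {
			/* Splice the detached per-cpu freelist. */
			new.inuse -= free_delta;
			set_freepointer(s, freelist_tail, old.freelist);
			new.freelist = freelist;
		} else {
			new.freelist = old.freelist;
		}
	} while (!slab_update_freelist(s, slab,
		old.freelist, old.counters,
		new.freelist, new.counters,
		"unfreezing slab"));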