Re: [PATCH v1 1/2] zsmalloc: add allocated objects counter for subpage

From: Alexey Romanov
Date: Wed Jun 21 2023 - 09:41:50 EST


Hi!

On Wed, Jun 21, 2023 at 10:17:16PM +0900, Sergey Senozhatsky wrote:
> On (23/06/20 11:16), Alexey Romanov wrote:
> > If unsigned int is at least 32 bits wide, this will be enough for us.
> > Of course, in rare cases this will not be true. But it seems that zram
> > and the kernel already have similar places. For example, if the page
> > size is 256 KB and unsigned int is 16 bits (2 bytes) wide, zram will not
> > work on such a system, because we can't store the offset. But such a
> > case is very rare; most systems have unsigned int of at least 32 bits.
> >
> > Therefore, I think my idea is still applicable; we just need to
> > change the counter type. What do you think?
>
> My gut feeling is that we'd better avoid mixing architecture-specific
> magic into generic code. It works fine until it doesn't. Maybe Minchan
> will have a different opinion, though.
>
> There can be other ways to avoid a linear scan of empty sub-pages. For
> instance, something like the below probably covers fewer cases than your
> patch 0002, but on the other hand it is rather generic, trivial, and
> doesn't make any assumptions about architecture specifics.
>
> (composed/edited in a mail client, so it is likely broken, but it
> outlines the idea)
>
> ====================================================================
>
> mm/zsmalloc: do not scan empty zspages
>
> We already stop zspage migration when we detect that the target
> zspage has no space left for any new objects. There is one more
> thing we can do to avoid useless work: stop scanning for
> allocated objects in sub-pages once we have migrated the last
> inuse object from the zspage in question.
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 02f7f414aade..2875152e6497 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1263,6 +1263,11 @@ static bool zspage_full(struct size_class *class, struct zspage *zspage)
>          return get_zspage_inuse(zspage) == class->objs_per_zspage;
>  }
>
> +static bool zspage_empty(struct zspage *zspage)
> +{
> +        return get_zspage_inuse(zspage) == 0;
> +}
> +
> /**
> * zs_lookup_class_index() - Returns index of the zsmalloc &size_class
> * that hold objects of the provided size.
> @@ -1787,6 +1792,10 @@ static void migrate_zspage(struct zs_pool *pool, struct size_class *class,
>                  obj_idx++;
>                  record_obj(handle, free_obj);
>                  obj_free(class->size, used_obj, NULL);
> +
> +                /* Stop if there are no more objects to migrate */
> +                if (zspage_empty(get_zspage(s_page)))
> +                        break;
>          }

Yes, it seems my version is not as good as I thought; it doesn't hold
up when PAGE_SIZE is architecture-dependent.
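
To make the width issue concrete with the numbers from the example
above (a quick illustrative userspace check, not kernel code): a
256 KB page needs 18 bits to address a byte offset within it, which
a 16-bit unsigned int cannot hold.

#include <stdio.h>

int main(void)
{
        /* Page size from the example above: 256 KB */
        unsigned long page_size = 256UL * 1024;
        unsigned int offset_bits = 0;

        /* Smallest number of bits that can address any byte in the page */
        while ((1UL << offset_bits) < page_size)
                offset_bits++;

        printf("offset bits needed: %u\n", offset_bits);  /* prints 18 */
        printf("fits in a 16-bit unsigned int: %s\n",
               offset_bits <= 16 ? "yes" : "no");          /* prints no */
        return 0;
}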

Your version sounds good. I can implement this option, test it, and
send a patch this week.
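
As a quick sanity check of the predicate itself, a trivial userspace
mock should be enough (the struct below is a simplified stand-in with
just the inuse counter, not the real kernel struct zspage):

#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in: only the inuse counter matters here */
struct zspage {
        unsigned int inuse;
};

static unsigned int get_zspage_inuse(struct zspage *zspage)
{
        return zspage->inuse;
}

static bool zspage_empty(struct zspage *zspage)
{
        return get_zspage_inuse(zspage) == 0;
}

int main(void)
{
        struct zspage z = { .inuse = 0 };

        assert(zspage_empty(&z));   /* nothing left: migration loop stops */
        z.inuse = 1;
        assert(!zspage_empty(&z));  /* still in use: keep scanning */
        return 0;
}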

--
Thank you,
Alexey