Re: [PATCH v3 07/11] mm: vmalloc: Offload free_vmap_area_lock lock

From: Uladzislau Rezki
Date: Thu Jan 18 2024 - 13:15:36 EST


On Wed, Jan 17, 2024 at 09:12:26AM +1100, Dave Chinner wrote:
> On Fri, Jan 12, 2024 at 01:18:27PM +0100, Uladzislau Rezki wrote:
> > On Fri, Jan 12, 2024 at 07:37:36AM +1100, Dave Chinner wrote:
> > > On Thu, Jan 11, 2024 at 04:54:48PM +0100, Uladzislau Rezki wrote:
> > > > On Thu, Jan 11, 2024 at 08:02:16PM +1100, Dave Chinner wrote:
> > > > > On Tue, Jan 02, 2024 at 07:46:29PM +0100, Uladzislau Rezki (Sony) wrote:
> > > > > > Concurrent access to the global vmap space is a bottleneck.
> > > > > > We can simulate high contention by running a vmalloc test
> > > > > > suite.
> > > > > >
> > > > > > To address it, introduce an effective vmap node logic. Each
> > > > > > node behaves as an independent entity. When a node is accessed,
> > > > > > it serves a request directly (if possible) from its pool.
> > > > > >
> > > > > > This model has a size-based pool for requests, i.e. pools are
> > > > > > serialized and populated based on object size and real demand.
> > > > > > The maximum object size a pool can handle is set to 256 pages.
> > > > > >
> > > > > > This technique reduces the pressure on the global vmap lock.
> > > > > >
> > > > > > Signed-off-by: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>
> > > > >
> > > > > Why not use a llist for this? That gets rid of the need for a
> > > > > new pool_lock altogether...
> > > > >
> > > > Initially I used an llist. I changed it because I keep track of
> > > > the number of objects per pool, in order to decay it later. I did
> > > > not find these locks contended, therefore I did not think about it much.
> > >
> > > Ok. I've used llist and an atomic counter to track the list length
> > > in the past.
> > >
> > > But is the list length even necessary? It seems to me that it is
> > > only used by the shrinker to determine how many objects are on the
> > > lists for scanning, and I'm not sure that's entirely necessary given
> > > the way the current global shrinker works (i.e. completely unfair to
> > > low numbered nodes due to scan loop start bias).
> > >
> > I use the length to decay pools by a certain percentage, currently
> > 25%, so I need to know the number of objects. It is done in the purge
> > path. As for the shrinker, once it hits us we drain the pools entirely.
>
> Why does purge need to be different to shrinking?
>
> But, regardless, you can still use llist with an atomic counter to
> do this - there is no need for a spin lock at all.
>
As I pointed out earlier, I will have a look at it.
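
In the meantime, here is a minimal sketch of the llist + atomic counter
idea, assuming a hypothetical per-node pool structure. The names below
are illustrative only, not what mm/vmalloc.c currently uses, and it
assumes the pooled object embeds a struct llist_node:

#include <linux/llist.h>
#include <linux/atomic.h>

struct example_pool {
	struct llist_head head;	/* lock-less list of free objects */
	atomic_t len;		/* approximate number of objects */
};

struct example_obj {
	struct llist_node llnode;	/* hypothetical embedded node */
};

/* Producer: return one object to the pool, no spinlock needed. */
static void pool_put(struct example_pool *p, struct example_obj *obj)
{
	llist_add(&obj->llnode, &p->head);
	atomic_inc(&p->len);
}

/* Consumer: detach the whole list at once, then walk it locklessly. */
static void pool_drain(struct example_pool *p)
{
	struct llist_node *list = llist_del_all(&p->head);
	struct example_obj *obj, *tmp;

	atomic_set(&p->len, 0);
	llist_for_each_entry_safe(obj, tmp, list, llnode) {
		/* free or decay the object here */
	}
}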

> > > > Anyway, I will have a look at this to see whether an llist is easy
> > > > to go with or not. If so, I will send out a separate patch.
> > >
> > > Sounds good, it was just something that crossed my mind given that
> > > the pattern of "producer adds single items, consumer detaches entire
> > > list, processes it and reattaches remainder" is a perfect match for
> > > the llist structure.
> > >
> > The llist_del_first() has to be serialized. For this purpose a per-CPU
> > pool would work, or some kind of "in_use" atomic that protects against
> > concurrent removal.
>
> So don't use llist_del_first().
>
> > If we detach the entire llist, then we need to keep track of the last
> > node in order to add it back later as a "batch" to the already
> > existing/populated list.
>
> Why? I haven't seen any need for ordering these lists which would
> require strict tail-add ordered semantics.
>
I mean the following:

1. first = llist_del_all(&example);
2. last = llist_reverse_order(first);

3. va = __llist_del_first(first);

/*
 * "example" might not be empty by now, so re-attach what is left
 * as a batch. Otherwise we lose the entries "example" points to.
 */
4. llist_add_batch(first, last, &example);
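
The point of step 4 is that llist_add_batch() splices the detached chain
back with a cmpxchg() on the head pointer, so any entries that were added
to "example" concurrently, after the llist_del_all(), are preserved rather
than overwritten. That is why the last node of the detached chain has to
be known, which is what I meant by keeping track of it.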

--
Uladzislau Rezki