Re: [PATCH v3 07/11] mm: vmalloc: Offload free_vmap_area_lock lock

From: Dave Chinner
Date: Thu Jan 11 2024 - 15:37:48 EST


On Thu, Jan 11, 2024 at 04:54:48PM +0100, Uladzislau Rezki wrote:
> On Thu, Jan 11, 2024 at 08:02:16PM +1100, Dave Chinner wrote:
> > On Tue, Jan 02, 2024 at 07:46:29PM +0100, Uladzislau Rezki (Sony) wrote:
> > > Concurrent access to the global vmap space is a bottleneck.
> > > We can simulate high contention by running a vmalloc test
> > > suite.
> > >
> > > To address it, introduce vmap node logic. Each node behaves
> > > as an independent entity. When a node is accessed, it serves
> > > a request directly (if possible) from its pool.
> > >
> > > This model has size-based pools for requests, i.e. pools are
> > > serialized and populated based on object size and real demand.
> > > The maximum object size a pool can handle is set to 256 pages.
> > >
> > > This technique reduces the pressure on the global vmap lock.
> > >
> > > Signed-off-by: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>
> >
> > Why not use a llist for this? That gets rid of the need for a
> > new pool_lock altogether...
> >
> Initially I used an llist. I changed it because I keep track of the
> number of objects per pool in order to decay it later. I do not
> consider these locks contended, so I did not think much about it.

Ok. I've used llist and an atomic counter to track the list length
in the past.
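
Roughly this sort of pattern, as a minimal sketch with made-up names
(i.e. not what your patch actually uses):

#include <linux/llist.h>
#include <linux/atomic.h>

/* Hypothetical per-node pool: lockless list plus a length counter. */
struct vn_pool_sketch {
	struct llist_head	head;
	atomic_t		len;
};

struct vn_obj_sketch {
	struct llist_node	node;
	/* object payload ... */
};

/* Producer: return one object to the pool, no pool_lock needed. */
static void vn_pool_put(struct vn_pool_sketch *p,
			struct vn_obj_sketch *obj)
{
	llist_add(&obj->node, &p->head);
	atomic_inc(&p->len);
}

/*
 * Taker: llist_del_first() requires removers to serialize among
 * themselves, which works out if a pool is only drained by its
 * own node.
 */
static struct vn_obj_sketch *vn_pool_get(struct vn_pool_sketch *p)
{
	struct llist_node *n = llist_del_first(&p->head);

	if (!n)
		return NULL;

	atomic_dec(&p->len);
	return llist_entry(n, struct vn_obj_sketch, node);
}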

But is the list length even necessary? It seems to me that it is
only used by the shrinker to determine how many objects are on the
lists for scanning, and I'm not sure that's entirely necessary given
the way the current global shrinker works (i.e. completely unfair to
low numbered nodes due to scan loop start bias).
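
If the length is kept, its only reader would be something like the
count side of the shrinker, e.g. (again just a sketch, NR_VN_SKETCH
and vn_pools_sketch[] are stand-ins for the real per-node pool array):

#include <linux/shrinker.h>

/* Hypothetical count callback: the only place ->len would be read. */
static unsigned long vn_pool_count(struct shrinker *sh,
				   struct shrink_control *sc)
{
	unsigned long count = 0;
	int i;

	for (i = 0; i < NR_VN_SKETCH; i++)
		count += atomic_read(&vn_pools_sketch[i].len);

	return count ? count : SHRINK_EMPTY;
}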

> Anyway, I will have a look at this to see whether llist is easy to
> switch to. If so, I will send out a separate patch.

Sounds good, it was just something that crossed my mind given the
pattern of "producer adds single items, consumer detaches entire
list, processes it and reattaches remainder" is a perfect match for
the llist structure.
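
i.e. the decay/drain side could look something like this (hypothetical
names again, reusing the sketch above):

/* Consumer: detach the whole list, free some, put the rest back. */
static void vn_pool_decay(struct vn_pool_sketch *p, int nr_to_free)
{
	struct llist_node *list = llist_del_all(&p->head);
	struct vn_obj_sketch *obj, *tmp;

	llist_for_each_entry_safe(obj, tmp, list, node) {
		if (nr_to_free > 0) {
			nr_to_free--;
			atomic_dec(&p->len);
			kfree(obj);	/* or however the objects are freed */
		} else {
			/* reattach the remainder for producers to reuse */
			llist_add(&obj->node, &p->head);
		}
	}
}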

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx