Re: [PATCH v3 07/11] mm: vmalloc: Offload free_vmap_area_lock lock

From: Uladzislau Rezki
Date: Wed Jan 03 2024 - 10:48:09 EST


On Wed, Jan 03, 2024 at 07:08:32PM +0800, Hillf Danton wrote:
> On Tue, 2 Jan 2024 19:46:29 +0100 Uladzislau Rezki <urezki@xxxxxxxxx>
> > +static void
> > +decay_va_pool_node(struct vmap_node *vn, bool full_decay)
> > +{
> > + struct vmap_area *va, *nva;
> > + struct list_head decay_list;
> > + struct rb_root decay_root;
> > + unsigned long n_decay;
> > + int i;
> > +
> > + decay_root = RB_ROOT;
> > + INIT_LIST_HEAD(&decay_list);
> > +
> > + for (i = 0; i < MAX_VA_SIZE_PAGES; i++) {
> > + struct list_head tmp_list;
> > +
> > + if (list_empty(&vn->pool[i].head))
> > + continue;
> > +
> > + INIT_LIST_HEAD(&tmp_list);
> > +
> > + /* Detach the pool, so no-one can access it. */
> > + spin_lock(&vn->pool_lock);
> > + list_replace_init(&vn->pool[i].head, &tmp_list);
> > + spin_unlock(&vn->pool_lock);
> > +
> > + if (full_decay)
> > + WRITE_ONCE(vn->pool[i].len, 0);
> > +
> > + /* Decay a pool by ~25% out of left objects. */
> > + n_decay = vn->pool[i].len >> 2;
> > +
> > + list_for_each_entry_safe(va, nva, &tmp_list, list) {
> > + list_del_init(&va->list);
> > + merge_or_add_vmap_area(va, &decay_root, &decay_list);
> > +
> > + if (!full_decay) {
> > + WRITE_ONCE(vn->pool[i].len, vn->pool[i].len - 1);
> > +
> > + if (!--n_decay)
> > + break;
> > + }
> > + }
> > +
> > + /* Attach the pool back if it has been partly decayed. */
> > + if (!full_decay && !list_empty(&tmp_list)) {
> > + spin_lock(&vn->pool_lock);
> > + list_replace_init(&tmp_list, &vn->pool[i].head);
> > + spin_unlock(&vn->pool_lock);
> > + }
>
> After staring at this for ten minutes, I failed to work out why
> list_splice() was not used here in the case of a non-empty vn->pool[i].head.
>
The vn->pool[i].head is always empty here, because we detached it above and
re-initialized it. Concurrent decay and populate are also not possible,
because both are done by only one context.
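
To make that concrete, here is a minimal userspace sketch (not the kernel
code itself) that mirrors the <linux/list.h> semantics of list_replace_init().
The names pool_head, tmp_list and node are purely illustrative. The point it
demonstrates: after the detach the pool head is re-initialized to empty, so
re-attaching with list_replace_init() cannot drop anything, and list_splice()
would only matter if the head could have gained entries in between.

#include <stdio.h>

struct list_head {
	struct list_head *next, *prev;
};

static void INIT_LIST_HEAD(struct list_head *h)
{
	h->next = h;
	h->prev = h;
}

static int list_empty(const struct list_head *h)
{
	return h->next == h;
}

static void list_add_tail(struct list_head *new, struct list_head *head)
{
	new->prev = head->prev;
	new->next = head;
	head->prev->next = new;
	head->prev = new;
}

/* Take over @old's entries with @new and leave @old empty again. */
static void list_replace_init(struct list_head *old, struct list_head *new)
{
	new->next = old->next;
	new->next->prev = new;
	new->prev = old->prev;
	new->prev->next = new;
	INIT_LIST_HEAD(old);
}

int main(void)
{
	struct list_head pool_head, tmp_list, node;

	INIT_LIST_HEAD(&pool_head);
	INIT_LIST_HEAD(&tmp_list);
	INIT_LIST_HEAD(&node);
	list_add_tail(&node, &pool_head);

	/* Detach: tmp_list takes the entries, pool_head is left empty. */
	list_replace_init(&pool_head, &tmp_list);
	printf("after detach:    pool empty=%d tmp empty=%d\n",
	       list_empty(&pool_head), list_empty(&tmp_list));

	/*
	 * Re-attach: pool_head is guaranteed empty at this point (only one
	 * context decays/populates the pool), so replacing it is safe.
	 */
	list_replace_init(&tmp_list, &pool_head);
	printf("after re-attach: pool empty=%d tmp empty=%d\n",
	       list_empty(&pool_head), list_empty(&tmp_list));

	return 0;
}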

--
Uladzislau Rezki