Re: [PATCH 20/21] binder: reverse locking order in shrinker callback

From: Alice Ryhl
Date: Tue Nov 07 2023 - 04:09:54 EST


Carlos Llamas <cmllamas@xxxxxxxxxx> writes:
> +	trace_binder_unmap_kernel_start(alloc, index);
> +
> +	__free_page(page->page_ptr);
> +	page->page_ptr = NULL;
> +
> +	trace_binder_unmap_kernel_end(alloc, index);
> +
> 	list_lru_isolate(lru, item);
> +	mutex_unlock(&alloc->mutex);
> 	spin_unlock(lock);
>
> 	if (vma) {
> 		trace_binder_unmap_user_start(alloc, index);
>
> 		zap_page_range_single(vma, page_addr, PAGE_SIZE, NULL);
>
> 		trace_binder_unmap_user_end(alloc, index);
> 	}
> +
> 	mmap_read_unlock(mm);
> 	mmput_async(mm);
>
> -	trace_binder_unmap_kernel_start(alloc, index);
> -
> -	__free_page(page->page_ptr);
> -	page->page_ptr = NULL;
> -
> -	trace_binder_unmap_kernel_end(alloc, index);

This reverses the order so that __free_page() is called before
zap_page_range_single(). Is that okay? It seems to open a window where
the page has already been freed but is still mapped into userspace.
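
To spell that out, my (possibly wrong) reading of the resulting order
is roughly the following, with the tracepoints and unlocking omitted
and the comments being mine:

	/* The page is handed back to the page allocator first... */
	__free_page(page->page_ptr);
	page->page_ptr = NULL;

	/* ...and only later is the userspace mapping torn down: */
	if (vma)
		zap_page_range_single(vma, page_addr, PAGE_SIZE, NULL);

In between, the freed page could be reallocated while userspace can
still reach it through the old mapping.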

Another option would be to do something along these lines:

/* Detach the page while the locks are still held. */
page_to_free = page->page_ptr;
page->page_ptr = NULL;

[snip]

mmap_read_unlock(mm);
mmput_async(mm);
/* Free the page only after every lock has been dropped. */
__free_page(page_to_free);

This way you wouldn't reverse the order. It would also reduce the time
spent holding the mmap read lock, since the page would be freed only
after the lock is dropped.
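
For concreteness, a sketch of how the tail of the callback might look
with that change, pieced together from the hunk quoted above, with
page_to_free declared as a struct page * near the top of the function
(I've left out the trace_binder_unmap_kernel_* events, since where they
should go is a separate question):

	/* Detach the page while alloc->mutex and the lru lock are held. */
	page_to_free = page->page_ptr;
	page->page_ptr = NULL;

	list_lru_isolate(lru, item);
	mutex_unlock(&alloc->mutex);
	spin_unlock(lock);

	if (vma) {
		trace_binder_unmap_user_start(alloc, index);

		zap_page_range_single(vma, page_addr, PAGE_SIZE, NULL);

		trace_binder_unmap_user_end(alloc, index);
	}

	mmap_read_unlock(mm);
	mmput_async(mm);

	/* Free the page last, with no locks held. */
	__free_page(page_to_free);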

Other than the above, this LGTM.

Alice