Re: [PATCH v3 04/11] mm: vmalloc: Remove global vmap_area_root rb-tree

From: Wen Gu
Date: Mon Jan 08 2024 - 02:45:38 EST




On 2024/1/7 14:59, Hillf Danton wrote:
On Sat, 6 Jan 2024 17:36:23 +0100 Uladzislau Rezki <urezki@xxxxxxxxx>

Thank you! I tried the patch, and it seems that the wait for the rwlock_t
still exists, much as it does with the spinlock_t. (The flamegraph is
attached. I am not sure why read_lock waits so long, given that there is
no frequent write_lock contention.)

              vzalloced shmem (spinlock_t)   vzalloced shmem (rwlock_t)
Requests/sec  583729.93                      460007.44

So I guess the overhead of finding the vmap area is inevitable here, and
the original spin_lock is fine in this series.
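
(For context, the lookup that both lock types protect is essentially the
same rb-tree walk, so the critical section does not change; a minimal
sketch of the pattern, using illustrative names such as lookup_va(),
vn->busy_lock and vn->busy_root rather than the exact identifiers of this
series:)

struct vmap_area *lookup_va(struct vmap_node *vn, unsigned long addr)
{
	struct vmap_area *va;

	/* read_lock()/read_unlock() here in the rwlock_t variant */
	spin_lock(&vn->busy_lock);
	va = __find_vmap_area(addr, &vn->busy_root);	/* rb-tree walk */
	spin_unlock(&vn->busy_lock);

	return va;
}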

I have also noticed a performance difference between rwlock and spinlock.
So, yes. This is the extra work we need to do if CONFIG_HARDENED_USERCOPY
is set, i.e. find a VA.
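
(For reference, that extra work is the hardened-usercopy check on vmalloc
addresses; below is a simplified sketch of the vmalloc branch of
check_heap_object() in mm/usercopy.c, not the exact upstream code, and the
helper name check_vmalloc_object() is only illustrative:)

static void check_vmalloc_object(const void *ptr, unsigned long n, bool to_user)
{
	unsigned long addr = (unsigned long)ptr;
	struct vmap_area *area;

	if (!is_vmalloc_addr(ptr))
		return;

	/*
	 * This lookup takes the vmap lock and walks the rb-tree on every
	 * copy_to_user()/copy_from_user() of a vmalloc'ed buffer, which
	 * is where the contention shows up.
	 */
	area = find_vmap_area(addr);
	if (!area)
		usercopy_abort("vmalloc", "no area", to_user, 0, n);

	if (n > area->va_end - addr)
		usercopy_abort("vmalloc", NULL, to_user,
			       addr - area->va_start, n);
}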

See if read bias helps to understand the gap between spinlock and rwlock.

--- x/kernel/locking/qrwlock.c
+++ y/kernel/locking/qrwlock.c
@@ -23,7 +23,7 @@ void __lockfunc queued_read_lock_slowpat
 	/*
 	 * Readers come here when they cannot get the lock without waiting
 	 */
-	if (unlikely(in_interrupt())) {
+	if (1) {
 		/*
 		 * Readers in interrupt context will get the lock immediately
 		 * if the writer is just waiting (not holding the lock yet),

Thank you for the idea, Hillf!

IIUC, the change makes read operations more likely to acquire the lock
and modifies the fairness to favor readers.
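
(If I read it right, with the change every reader takes the path that was
previously reserved for readers in interrupt context, so the slowpath
roughly degenerates into the sketch below: a reader only spins until no
writer actually holds the lock and never queues behind a waiting writer.)

void __lockfunc queued_read_lock_slowpath(struct qrwlock *lock)
{
	/*
	 * Rough sketch only. The reader count was already added by the
	 * queued_read_lock() fastpath, so the reader just spins until no
	 * writer holds the lock; a writer that is merely waiting
	 * (_QW_WAITING) no longer stops readers, i.e. readers bypass the
	 * wait queue.
	 */
	atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));
}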

The test in my scenario shows:

vzalloced shmem with   spinlock_t   rwlock_t    rwlock_t (with above change)
Requests/sec           564961.29    442534.33   439733.31

In addition to the read bias, there seem to be other factors that cause
the gap, but I haven't figured them out yet.

Thanks,
Wen Gu