Re: [PATCH] locking: Add __lockfunc to slow path functions

From: Namhyung Kim
Date: Tue Aug 09 2022 - 17:13:52 EST


Hello,

On Tue, Aug 9, 2022 at 8:05 AM Waiman Long <longman@xxxxxxxxxx> wrote:
>
> On 8/8/22 13:59, Namhyung Kim wrote:
> > So that we can skip the functions in perf lock contention and other
> > places like /proc/PID/wchan.
> >
> > Signed-off-by: Namhyung Kim <namhyung@xxxxxxxxxx>
> > ---
> > kernel/locking/qrwlock.c | 4 ++--
> > kernel/locking/qspinlock.c | 2 +-
> > 2 files changed, 3 insertions(+), 3 deletions(-)
> >
> > diff --git a/kernel/locking/qrwlock.c b/kernel/locking/qrwlock.c
> > index 2e1600906c9f..d2ef312a8611 100644
> > --- a/kernel/locking/qrwlock.c
> > +++ b/kernel/locking/qrwlock.c
> > @@ -18,7 +18,7 @@
> > * queued_read_lock_slowpath - acquire read lock of a queued rwlock
> > * @lock: Pointer to queued rwlock structure
> > */
> > -void queued_read_lock_slowpath(struct qrwlock *lock)
> > +void __lockfunc queued_read_lock_slowpath(struct qrwlock *lock)
> > {
> > /*
> > * Readers come here when they cannot get the lock without waiting
> > @@ -63,7 +63,7 @@ EXPORT_SYMBOL(queued_read_lock_slowpath);
> > * queued_write_lock_slowpath - acquire write lock of a queued rwlock
> > * @lock : Pointer to queued rwlock structure
> > */
> > -void queued_write_lock_slowpath(struct qrwlock *lock)
> > +void __lockfunc queued_write_lock_slowpath(struct qrwlock *lock)
> > {
> > int cnts;
> >
> > diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> > index 65a9a10caa6f..2b23378775fe 100644
> > --- a/kernel/locking/qspinlock.c
> > +++ b/kernel/locking/qspinlock.c
> > @@ -313,7 +313,7 @@ static __always_inline u32 __pv_wait_head_or_lock(struct qspinlock *lock,
> > * contended : (*,x,y) +--> (*,0,0) ---> (*,0,1) -' :
> > * queue : ^--' :
> > */
> > -void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> > +void __lockfunc queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> > {
> > struct mcs_spinlock *prev, *next, *node;
> > u32 old, tail;
>
>
> For completeness, I think you should also add it to the
> __pv_queued_spin_unlock() and __pv_queued_spin_unlock_slowpath()
> functions in kernel/locking/qspinlock_paravirt.h. Perhaps even to the
> assembly code in arch/x86/include/asm/qspinlock_paravirt.h.

Thanks for your comment. I'm not sure about the asm part; will this be enough?

--- a/arch/x86/include/asm/qspinlock_paravirt.h
+++ b/arch/x86/include/asm/qspinlock_paravirt.h
@@ -36,7 +36,7 @@ PV_CALLEE_SAVE_REGS_THUNK(__pv_queued_spin_unlock_slowpath);
* rsi = lockval (second argument)
* rdx = internal variable (set to 0)
*/
-asm (".pushsection .text;"
+asm (".pushsection .spinlock.text;"
".globl " PV_UNLOCK ";"
".type " PV_UNLOCK ", @function;"
".align 4,0x90;"