Re: [PATCH] kernel: Introduce a write lock/unlock wrapper for tasklist_lock

From: Hillf Danton
Date: Wed Dec 27 2023 - 06:10:24 EST


On Tue, 26 Dec 2023 20:49:38 +0000 Matthew Wilcox <willy@xxxxxxxxxxxxx>
> On Tue, Dec 26, 2023 at 06:46:52PM +0800, Hillf Danton wrote:
> > On Wed, 13 Dec 2023 12:27:05 -0600 Eric W. Biederman <ebiederm@xxxxxxxxxxxx>
> > > Matthew Wilcox <willy@xxxxxxxxxxxxx> writes:
> > > > I think the right way to fix this is to pass a boolean flag to
> > > > queued_write_lock_slowpath() to let it know whether it can re-enable
> > > > interrupts while checking whether _QW_WAITING is set.
> >
> >	lock(&lock->wait_lock)
> >	enable irq
> >	int
> >		lock(&lock->wait_lock)
> >
> > You are adding a chance of recursive locking.
>
> Did you bother to read queued_read_lock_slowpath() before writing this email?

Nope, but it does not matter in this case.
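
For reference, the write slowpath spins with wait_lock held; trimmed from
kernel/locking/qrwlock.c it looks roughly like this, so a nested write acquirer
in irq context would hit wait_lock again, which is the recursion sketched above.

	void queued_write_lock_slowpath(struct qrwlock *lock)
	{
		int cnts;

		/* Put the writer into the wait queue */
		arch_spin_lock(&lock->wait_lock);

		/* Try to acquire the lock directly if no reader is present */
		if (!(cnts = atomic_read(&lock->cnts)) &&
		    atomic_try_cmpxchg_acquire(&lock->cnts, &cnts, _QW_LOCKED))
			goto unlock;

		/* Set the waiting flag to notify readers that a writer is pending */
		atomic_or(_QW_WAITING, &lock->cnts);

		/* When no more readers or writers, set the locked flag */
		do {
			cnts = atomic_cond_read_relaxed(&lock->cnts, VAL == _QW_WAITING);
		} while (!atomic_try_cmpxchg_acquire(&lock->cnts, &cnts, _QW_LOCKED));
	unlock:
		arch_spin_unlock(&lock->wait_lock);
	}
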
>
> 	if (unlikely(in_interrupt())) {
> 		/*
> 		 * Readers in interrupt context will get the lock immediately
> 		 * if the writer is just waiting (not holding the lock yet),
> 		 * so spin with ACQUIRE semantics until the lock is available
> 		 * without waiting in the queue.
> 		 */
> 		atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));
> 		return;

This is the lock acquirer for a read in irq context, and it rolls out the red
carpet for a write acquirer in irq, right Willy?
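
For context, the reader fast path that leads into this slowpath has already
added its reader count before the irq check; roughly, from
include/asm-generic/qrwlock.h,

	static inline void queued_read_lock(struct qrwlock *lock)
	{
		int cnts;

		cnts = atomic_add_return_acquire(_QR_BIAS, &lock->cnts);
		if (likely(!(cnts & _QW_WMASK)))
			return;

		/* The slowpath will decrement the reader count, if necessary. */
		queued_read_lock_slowpath(lock);
	}

so an in-irq reader only has to wait for _QW_LOCKED to clear before it owns the
lock for read.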

Feel free to ignore the following legwork.

	/* Set the waiting flag to notify readers that a writer is pending */
	atomic_or(_QW_WAITING, &lock->cnts);

	enable irq;

	/* When no more readers or writers, set the locked flag */
	do {
		cnts = atomic_cond_read_relaxed(&lock->cnts, VAL == _QW_WAITING);
	} while (!atomic_try_cmpxchg_acquire(&lock->cnts, &cnts, _QW_LOCKED));

	/* irq fires after _QW_LOCKED is set, before irq is disabled again */
	int
		/* the in-irq reader spins on a lock held by the task it interrupted */
		atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));
		deadlock

	disable irq;

The case below is safe, though it looks clumsy rather than pretty.

	/* Set the waiting flag to notify readers that a writer is pending */
	atomic_or(_QW_WAITING, &lock->cnts);

	/* When no more readers or writers, set the locked flag */
	do {
		enable irq;

		cnts = atomic_cond_read_relaxed(&lock->cnts, VAL == _QW_WAITING);

		disable irq;

		/*
		 * irq stays disabled across the cmpxchg below, so an in-irq
		 * reader can never see _QW_LOCKED set by the task it interrupted.
		 */
	} while (!atomic_try_cmpxchg_acquire(&lock->cnts, &cnts, _QW_LOCKED));
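
Folding Willy's boolean flag into the slowpath quoted earlier would look roughly
like the sketch below; the flag name and how it is plumbed down from
write_lock_irq() are only assumptions here. Note irq is still enabled with
wait_lock held, so the concern above about a write acquirer in irq context
remains.

	void queued_write_lock_slowpath(struct qrwlock *lock, bool irq)
	{
		int cnts;

		/* Put the writer into the wait queue */
		arch_spin_lock(&lock->wait_lock);

		/* Try to acquire the lock directly if no reader is present */
		if (!(cnts = atomic_read(&lock->cnts)) &&
		    atomic_try_cmpxchg_acquire(&lock->cnts, &cnts, _QW_LOCKED))
			goto unlock;

		/* Set the waiting flag to notify readers that a writer is pending */
		atomic_or(_QW_WAITING, &lock->cnts);

		/* When no more readers or writers, set the locked flag */
		do {
			/* open the irq window only while waiting for readers to drain */
			if (irq)
				local_irq_enable();

			cnts = atomic_cond_read_relaxed(&lock->cnts, VAL == _QW_WAITING);

			/* close the window before trying to take the lock for write */
			if (irq)
				local_irq_disable();
		} while (!atomic_try_cmpxchg_acquire(&lock->cnts, &cnts, _QW_LOCKED));
	unlock:
		arch_spin_unlock(&lock->wait_lock);
	}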