Re: deadlock avoidance?

Johannes Erdfelt (jerdfelt@sventech.com)
Wed, 8 Dec 1999 11:39:22 -0500


On Wed, Dec 08, 1999, Davide Libenzi <dlibenzi@maticad.it> wrote:
> On Wednesday, December 08, 1999 1:53 AM, Johannes Erdfelt <jerdfelt@sventech.com> wrote:
> > > All pid and count modifications happen either 1) inside a nested
> > > lock (i.e. the task already owns the lock: ++lock->count), or
> > > 2) with the lock acquired: lock->pid = getpid() and ++lock->count
> >
> > The setting is, but not all of the reading.
>
> There are no test-and-set operations other than spin_... in my code.
> Look at it more closely and try to find a sequence of operations that stalls it.
>
> struct s_nested_lock {
>         spinlock_t lock;
>         short int pid;
>         short int count;
> };
>
> #define nested_lock(lock, flags) \
>         if (lock->pid == getpid()) { \
>                 ++lock->count; \
>         } else { \
>                 spin_lock_irqsave(&lock->lock, flags); \
>                 ++lock->count; \
>                 lock->pid = getpid(); \
>         }
>
> #define nested_unlock(lock, flags) \
>         if (--lock->count == 0) { \
>                 lock->pid = 0; \
>                 spin_unlock_irqrestore(&lock->lock, flags); \
>         }

Process A on CPU #0 acquires lock->lock and then goes to set lock->pid.
Process B on CPU #1 simultaneously reads lock->pid in the unlocked fast
path.

AFAIK you can't guarantee that CPU #0's write to lock->pid completes
before CPU #1 tries to read it.
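To make the race concrete, here is one possible interleaving (a sketch;
the exact timing is hypothetical, but nothing in the code as posted
prevents it):

    CPU #0 (process A)                    CPU #1 (process B)
    ------------------                    ------------------
    spin_lock_irqsave(&lock->lock, flags)
    ++lock->count
    lock->pid = getpid()  /* store in
                             flight */    if (lock->pid == getpid())
                                          /* unlocked read races with
                                             A's store */

B's fast-path read is done without taking lock->lock, so nothing orders
it against A's store.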

Basically there's no locking or atomicity around lock->pid.

Supposedly this is not a problem on x86, since aligned 16-bit reads and
writes are atomic there, but I don't think that holds for all
architectures.
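For example, suppose (hypothetically) an architecture that implements a
16-bit store as two one-byte writes. If lock->pid is changing from
0x1234 to 0x5678, a reader on another CPU could observe 0x5634 or
0x1278, a pid that never belonged to any task. If that garbage value
happens to match the reader's own pid, the fast path succeeds and the
task proceeds as "owner" without ever touching the spinlock.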

Isn't this the reason atomic_t was created?
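A minimal sketch of how that might look here (the _atomic names are
mine, not Davide's, and I'm using current->pid as the in-kernel
stand-in for getpid()):

    struct s_nested_lock_atomic {
            spinlock_t lock;
            atomic_t pid;     /* owner pid, accessed atomically */
            short int count;  /* only modified by the owner */
    };

    #define nested_lock_atomic(nl, flags)                          \
            do {                                                   \
                    if (atomic_read(&(nl)->pid) == current->pid) { \
                            ++(nl)->count; /* already the owner */ \
                    } else {                                       \
                            spin_lock_irqsave(&(nl)->lock, flags); \
                            ++(nl)->count;                         \
                            atomic_set(&(nl)->pid, current->pid);  \
                    }                                              \
            } while (0)

    #define nested_unlock_atomic(nl, flags)                        \
            do {                                                   \
                    if (--(nl)->count == 0) {                      \
                            atomic_set(&(nl)->pid, 0);             \
                            spin_unlock_irqrestore(&(nl)->lock,    \
                                                   flags);         \
                    }                                              \
            } while (0)

Note that atomic_t only guarantees each individual read or write is
atomic; it says nothing about the ordering between the pid and count
updates, so this alone still doesn't make the scheme safe.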

JE
