Re: x86-32: clean up rwsem inline asm statements

From: Linus Torvalds
Date: Wed Jan 13 2010 - 20:05:58 EST

On Wed, 13 Jan 2010, H. Peter Anvin wrote:
>
> There are a number of things that can be done better... for one thing,
> "+m" (sem->count) and "a" (sem) is just bloody wrong. The right thing
> would be "a" (&sem->count) for proper robustness.

Actually, no.

Strictly speaking, we should use "a" (sem), and then use '%0' (pointing to
the "+m" (sem->count)) for the actual memory access in the inline asm,
rather than '(%1)'.

We do need %eax to contain the pointer to the semaphore, because _that_ is
what we pass in as an argument (and return as a value!) to the rwsem slow
paths.

So "a" (sem) is absolutely the right thing to do.
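
Just to make that concrete, here is a rough sketch of the current 32-bit
down_read fast path with those constraints (this is from memory, so treat
the exact comments and the slow-path thunk name as approximate):

	static inline void __down_read(struct rw_semaphore *sem)
	{
		asm volatile(LOCK_PREFIX " inc%z0 (%1)\n\t"
			     /* atomically add 1 to sem->count */
			     " jns 1f\n\t"
			     /* sign set -> contended: punt to the slow path */
			     " call call_rwsem_down_read_failed\n"
			     "1:"
			     : "+m" (sem->count)   /* %0: the count word */
			     : "a" (sem)           /* %1: sem, pinned in %eax */
			     : "memory");
	}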

The reason we use "(%1)" rather than "%0" is - if I recall correctly -
that back when we inlined it, gcc would often stupidly use the original
value of the semaphore address rather than %eax (which obviously also
contained it), and it generated larger code with big constants etc.

Now, since we only inline it in one place anyway, and the semaphore is
always an argument at that inlining site, I don't think it matters (it's
never going to be some global pointer), and we probably could/should
just do this right. So instead of

LOCK_PREFIX " inc%z0 (%1)\n\t"

we'd probably be better off with just

LOCK_PREFIX " inc%z0 %0\n\t"

instead, letting gcc generate the memory operand for "sem->count" itself.
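
In the sketch above, that would make the fast path look something like
this (again hand-waving the details):

	asm volatile(LOCK_PREFIX " inc%z0 %0\n\t"
		     " jns 1f\n\t"
		     " call call_rwsem_down_read_failed\n"
		     "1:"
		     : "+m" (sem->count)   /* %0: gcc picks the addressing mode */
		     : "a" (sem)           /* %eax still pinned for the slow path */
		     : "memory");

gcc may well just emit "incl (%eax)" here anyway, since the pointer is
already sitting in %eax, but it is now free to address sem->count however
it likes, and the "a" constraint still guarantees the slow path gets its
argument in the right register.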

Linus