Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

From: Stephan Diestelhorst
Date: Mon Oct 10 2011 - 10:03:09 EST


On Monday 10 October 2011, 07:00:50 Stephan Diestelhorst wrote:
> On Thursday 06 October 2011, 13:40:01 Jeremy Fitzhardinge wrote:
> > On 10/06/2011 07:04 AM, Stephan Diestelhorst wrote:
> > > On Wednesday 28 September 2011, 14:49:56 Linus Torvalds wrote:
> > >> Which certainly should *work*, but from a conceptual standpoint, isn't
> > >> it just *much* nicer to say "we actually know *exactly* what the upper
> > >> bits were".
> > > Well, we really do NOT want atomicity here. What we really rather want
> > > is sequentiality: free the lock, make the update visible, and THEN
> > > check if someone has gone sleeping on it.
> > >
> > > Atomicity only conveniently enforces that the three do not happen in a
> > > different order (with the store becoming visible after the checking
> > > load).
> > >
> > > This does not have to be atomic, since spurious wakeups are not a
> > > problem, in particular not with the FIFO-ness of ticket locks.
> > >
> > > For that the fence, additional atomic etc. would be IMHO much cleaner
> > > than the crazy overflow logic.
> >
> > All things being equal I'd prefer lock-xadd just because its easier to
> > analyze the concurrency for, crazy overflow tests or no. But if
> > add+mfence turned out to be a performance win, then that would obviously
> > tip the scales.
> >
> > However, it looks like locked xadd also has better performance: on
> > my Sandybridge laptop (2 cores, 4 threads), the add+mfence is 20% slower
> > than locked xadd, so that pretty much settles it unless you think
> > there'd be a dramatic difference on an AMD system.
>
> Indeed, the fences are usually slower than locked RMWs, in particular
> when you do not need to add an extra instruction. I originally missed
> the amazing stunt GCC pulled off here, replacing the branch with
> carry-flag magic. It seems that two twisted minds have found each other
> here :)
>
> One of my concerns was adding a branch in here... so that is settled,
> and if everybody else feels like this is easier to reason about...
> go ahead :) (I'll keep my itch to myself then.)

Just that I can't... if performance is a concern, adding the LOCK
prefix to the addb outperforms the xadd significantly:

Taking the mean over 100 runs, this comes out as follows
(on my Phenom II):

locked-add     0.648500 s    80%
add-rmwtos     0.707700 s    88%
locked-xadd    0.807600 s   100%
add-barrier    1.270000 s   157%

With huge read contention added in (as cheaply as possible):

locked-add.openmp     0.640700 s    84%
add-rmwtos.openmp     0.658400 s    86%
locked-xadd.openmp    0.763800 s   100%

And the numbers with write contention are crazy, but again favor the
locked-add version:

locked-add.openmp     0.571400 s    71%
add-rmwtos.openmp     0.699900 s    87%
locked-xadd.openmp    0.800200 s   100%

Stephan
--
Stephan Diestelhorst, AMD Operating System Research Center
stephan.diestelhorst@xxxxxxx, Tel. +49 (0)351 448 356 719

Advanced Micro Devices GmbH
Einsteinring 24
85609 Aschheim
Germany

Geschaeftsfuehrer: Alberto Bozzo;
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632, WEEE-Reg-Nr: DE 12919551

#include <stdio.h>

/* Presumably the "add-rmwtos" variant from the results table: plain add,
 * then a locked RMW to the stack as a cheap full barrier.  Single-threaded
 * build; the bare braces mirror the OpenMP variant. */
struct {
	unsigned char flag;
	unsigned char val;
} l;

int main(int argc, char **argv)
{
	int i;

	{
		{
			for (i = 0; i < 100000000; i++) {
				l.val += 2;	/* plain add on the val byte */
				/* locked no-op RMW doubles as a full barrier */
				asm volatile("lock or $0x0,(%%rsp)" : : : "memory");
				if (l.flag)
					break;
				asm volatile("" : : : "memory");	/* compiler barrier */
			}
			l.flag = 1;
		}
	}
	return 0;
}
#include <stdio.h>

/* Presumably "add-rmwtos.openmp": same as above, with a second OpenMP
 * section spinning on the flag to generate read contention. */
struct {
	unsigned char flag;
	unsigned char val;
} l;

int main(int argc, char **argv)
{
	int i;

	/* "parallel" is needed for the two sections to actually run
	 * concurrently; a plain "omp sections" outside a parallel region
	 * executes serially. */
#pragma omp parallel sections
	{
#pragma omp section
		{
			for (i = 0; i < 100000000; i++) {
				l.val += 2;
				asm volatile("lock or $0x0,(%%rsp)" : : : "memory");
				if (l.flag)
					break;
				asm volatile("" : : : "memory");
			}
			l.flag = 1;
		}
#pragma omp section
		/* reader: spin until the writer sets the flag */
		while (!l.flag)
			asm volatile("" : : : "memory");
		/* asm volatile("lock orb $0x0, %0" : : "m"(l.flag) : "memory"); */
	}
	return 0;
}
#include <stdio.h>

/* Presumably the "locked-add" variant: LOCK-prefixed addb on the val
 * byte, no separate fence needed.  Single-threaded build. */
struct {
	unsigned char flag;
	unsigned char val;
} l;

int main(int argc, char **argv)
{
	int i;

	{
		{
			for (i = 0; i < 100000000; i++) {
				/* the locked add doubles as a full barrier */
				asm volatile("lock addb %1, %0"
					     : "+m" (l.val) : "r" ((char)2) : "memory");
				if (l.flag)
					break;
				asm volatile("" : : : "memory");
			}
			l.flag = 1;
		}
	}
	return 0;
}
#include <stdio.h>

/* Presumably "locked-xadd.openmp": 16-bit lock xadd across the whole
 * val/flag word, with the overflow compensation discussed above. */
union {
	struct {
		unsigned char val;
		unsigned char flag;
	};
	unsigned short lock;
} l = { 0, 0 };

int main(int argc, char **argv)
{
	int i;

#pragma omp parallel sections
	{
#pragma omp section
		{
			for (i = 0; i < 100000000; i++) {
				unsigned short inc = 2;
				if (l.val >= 0x100 - 2)
					inc -= 0x100;	/* cancel the carry into the flag byte */
				asm volatile("lock; xadd %1, %0"
					     : "+m" (l.lock), "+r" (inc) : : "memory");
				if (inc & 0x100)	/* old flag bit was set */
					break;
				asm volatile("" : : : "memory");
			}
			l.flag = 1;
		}
#pragma omp section
		while (!l.flag)
			asm volatile("" : : : "memory");
		/* asm volatile("lock orb $0x0, %0" : : "m"(l.flag) : "memory"); */
	}
	return 0;
}
#include <stdio.h>

/* Presumably "locked-add.openmp": LOCK addb with an OpenMP reader
 * section spinning on the flag. */
struct {
	unsigned char flag;
	unsigned char val;
} l;

int main(int argc, char **argv)
{
	int i;

#pragma omp parallel sections
	{
#pragma omp section
		{
			for (i = 0; i < 100000000; i++) {
				asm volatile("lock addb %1, %0"
					     : "+m" (l.val) : "r" ((char)2) : "memory");
				if (l.flag)
					break;
				asm volatile("" : : : "memory");
			}
			l.flag = 1;
		}
#pragma omp section
		while (!l.flag)
			asm volatile("" : : : "memory");
		/* asm volatile("lock orb $0x0, %0" : : "m"(l.flag) : "memory"); */
	}
	return 0;
}
#include <stdio.h>

/* Presumably the "locked-xadd" variant from the results table,
 * single-threaded build; the bare braces mirror the OpenMP variant. */
union {
	struct {
		unsigned char val;
		unsigned char flag;
	};
	unsigned short lock;
} l = { 0, 0 };

int main(int argc, char **argv)
{
	int i;

	{
		{
			for (i = 0; i < 100000000; i++) {
				unsigned short inc = 2;
				if (l.val >= 0x100 - 2)
					inc -= 0x100;	/* cancel the carry into the flag byte */
				asm volatile("lock; xadd %1, %0"
					     : "+m" (l.lock), "+r" (inc) : : "memory");
				if (inc & 0x100)	/* old flag bit was set */
					break;
				asm volatile("" : : : "memory");
			}
			l.flag = 1;
		}
	}
	return 0;
}