RE: [RFC PATCH v2 1/4] rseq: Add sched_state field to struct rseq

From: David Laight
Date: Thu Sep 28 2023 - 11:52:02 EST


From: Steven Rostedt
> Sent: 28 September 2023 15:43
>
> On Thu, 28 Sep 2023 12:39:26 +0200
> Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> > As always, are syscalls really *that* expensive? Why can't we busy wait
> > in the kernel instead?
>
> Yes syscalls are that expensive. Several years ago I had a good talk
> with Robert Haas (one of the PostgreSQL maintainers) at Linux Plumbers,
> and I asked him if they used futexes. His answer was "no". He told me
> how they did several benchmarks and it was a huge performance hit (and
> this was before Spectre/Meltdown made things much worse). He explained
> to me that most locks are taken just to flip a few bits. Going into the
> kernel and coming back was orders of magnitude longer than the critical
> sections. By going into the kernel, it caused a ripple effect and lead
> to even more contention. There answer was to implement their locking
> completely in user space without any help from the kernel.

That matches what I found with code that was using a mutex to take
work items off a global list.
Although the mutex was only held for a few instructions (probably
several hundred, because the list wasn't that well written), what
happened was that as soon as there was any contention (which might
start with a hardware interrupt) performance went through the floor.

The fix was to replace the linked list with an array and use
atomic add to 'grab' blocks of entries.
(Even the atomic operations slowed things down.)
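A minimal sketch of that array-based scheme, as I understand the description above: each worker claims a block of entries with a single atomic fetch-add instead of taking a mutex per item. All names (grab_block, BLOCK, etc.) are invented for illustration, not taken from the real code.

```c
#include <stdatomic.h>
#include <stddef.h>

#define BLOCK 16                 /* entries claimed per atomic op */

struct work_queue {
    _Atomic size_t next;         /* index of the next unclaimed entry */
    size_t total;                /* number of valid entries in items[] */
    int items[1024];
};

/* Claim up to BLOCK entries; returns the count claimed and sets *start.
   The only shared-memory operation is one atomic fetch-add, so there is
   no lock to contend on and no critical section to be preempted in. */
static size_t grab_block(struct work_queue *q, size_t *start)
{
    size_t first = atomic_fetch_add(&q->next, BLOCK);
    if (first >= q->total)
        return 0;                /* queue drained */
    *start = first;
    size_t remain = q->total - first;
    return remain < BLOCK ? remain : BLOCK;
}
```

Grabbing a block per atomic op, rather than one item, is what amortises the cost of the atomic itself (which, as noted, still isn't free).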

> This is when I thought that having an adaptive spinner that could get
> hints from the kernel via memory mapping would be extremely useful.

Did you consider writing a timestamp into the mutex when it was
acquired - or even as the 'acquired' value?
A 'moderately synched TSC' should do.
Then the waiter should be able to tell how long the mutex
has been held for - and not spin if it has been held for ages.
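Something like the following sketch, using the TSC value itself as the 'acquired' value (0 reserved for unlocked). The cutoff constant, the helper names, and the fallback to sched_yield() are all assumptions for illustration; a real implementation would presumably fall back to a futex wait.

```c
#include <stdatomic.h>
#include <stdint.h>
#include <sched.h>       /* sched_yield() */
#include <x86intrin.h>   /* __rdtsc() */

#define SPIN_CUTOFF_CYCLES 200000ULL  /* "held for ages" threshold (assumed) */

typedef _Atomic uint64_t tslock_t;    /* 0 == unlocked, else owner's TSC */

static int tslock_try(tslock_t *l)
{
    uint64_t expected = 0;
    uint64_t now = __rdtsc();
    if (now == 0)
        now = 1;                      /* keep 0 reserved for "unlocked" */
    return atomic_compare_exchange_strong(l, &expected, now);
}

static void tslock_acquire(tslock_t *l)
{
    for (;;) {
        if (tslock_try(l))
            return;
        uint64_t held_since = atomic_load(l);
        /* If the lock looks held for longer than the cutoff, the owner
           has probably been preempted - stop burning CPU and yield
           (or sleep on a futex) instead of spinning. */
        if (held_since && __rdtsc() - held_since > SPIN_CUTOFF_CYCLES)
            sched_yield();
    }
}

static void tslock_release(tslock_t *l)
{
    atomic_store(l, 0);
}
```

This only needs the TSCs to be 'moderately synched' across CPUs, since the comparison is against a coarse threshold, not an exact duration.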

> The obvious problem with their implementation is that if the owner is
> sleeping, there's no point in spinning. Worse, the owner may even be
> waiting for the spinner to get off the CPU before it can run again. But
> according to Robert, the gain in the general performance greatly
> outweighed the few times this happened in practice.

Unless you can use atomics (ok for bits and linked lists) you
always have the problem that userspace can't disable interrupts.
So, unlike the kernel, you can't implement a proper spinlock.
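For the cases where atomics do suffice, e.g. flipping a few flag bits, no lock (and no interrupt disabling) is needed at all. A trivial sketch, with invented flag names:

```c
#include <stdatomic.h>

#define FLAG_DIRTY (1u << 0)
#define FLAG_BUSY  (1u << 1)

static _Atomic unsigned int flags;

/* Each operation is a single lock-free RMW or load: safe against
   preemption and interrupts without ever holding a lock. */
static void set_dirty(void)   { atomic_fetch_or(&flags, FLAG_DIRTY); }
static void clear_dirty(void) { atomic_fetch_and(&flags, ~FLAG_DIRTY); }
static int  test_dirty(void)  { return (atomic_load(&flags) & FLAG_DIRTY) != 0; }
```

It is when the protected work is bigger than one atomic RMW that userspace hits the problem described above: it cannot stop itself being preempted mid-critical-section.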

I've NFI how CONFIG_RT manages to get anything done with all
the spinlocks replaced by sleep locks.
Clearly there are spinlocks that are held for far too long.
But you really do want to spin most of the time.
...

David
