Re: rw_semaphore down_write a lot faster if wrapped by mutex ?!

From: Török Edwin
Date: Sun May 15 2011 - 11:30:23 EST


On 05/15/2011 05:34 PM, Török Edwin wrote:
> Hi semaphore/mutex maintainers,
>
> Looks like rw_semaphore's down_write is not as efficient as it could be.
> It can have a latency in the milliseconds range, but if I wrap it in yet
> another mutex then it becomes faster (100 us range).
>
> One difference I noticed between the rwsem and the mutex is that the
> mutex code does optimistic spinning. But adding something similar to
> the rwsem code didn't improve timings (it made things worse).
> My guess is that this has something to do with excessive scheduler
> ping-pong (spurious wakeups, scheduling a task that won't be able to
> take the semaphore, etc.), but I'm not sure which tools are best to
> confirm or rule this out. perf sched/perf lock/ftrace?

Hmm, with the added mutex the reader side of mmap_sem only ever sees one
contending writer at a time (the rest of the write-side contention is
hidden behind the mutex), so this might give the readers a better chance
to run even in the face of heavy write-side contention.
up_write will then see that there are no more queued writers and always
wake the readers, whereas without the mutex it would wake the next
writer instead.
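
For reference, the wrapping I tried is essentially the pattern below
(the mutex and function names are made up for illustration; the point is
just that all writers serialize on the extra mutex first, so at most one
of them ever queues on the rwsem itself):

	/* Illustration only; "mmap_write_wrap" is a made-up name. */
	static DEFINE_MUTEX(mmap_write_wrap);

	static void wrapped_mmap_write(struct mm_struct *mm)
	{
		mutex_lock(&mmap_write_wrap);
		down_write(&mm->mmap_sem);
		/* ... modify the address space ... */
		up_write(&mm->mmap_sem);
		mutex_unlock(&mmap_write_wrap);
	}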

Perhaps rw_semaphore should have a flag to prefer waking readers over
writers, or take the number of waiting readers into account when
deciding whether to wake a reader or a writer.

Waking a writer will cause additional latency, because more readers will
go to sleep:

  latency = (enqueued_readers / enqueued_writers) *
            (avg_write_hold_time + context_switch_time)

Whereas waking (all) the readers will delay the writer only by:

  latency = avg_reader_hold_time + context_switch_time
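
To make that concrete with made-up numbers: with 10 readers and 2
writers queued, an average write hold time of 100 us and ~10 us per
context switch, waking a writer costs the readers roughly
(10 / 2) * (100 + 10) = 550 us, while waking all the readers delays the
writer by only avg_reader_hold_time + 10 us.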

If the semaphore code could measure these quantities, even
approximately, then it could make better wakeup choices for future lock
requests based on recent contention history.
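
A very rough sketch of what such a heuristic could look like in
kernel-style C (none of these fields or helpers exist in rwsem.c today;
the names, the statistics and the averaging are purely made up for
illustration):

	/* Illustrative only -- hypothetical per-rwsem statistics. */
	struct rwsem_wake_stats {
		unsigned long avg_write_hold_ns;  /* recent avg write hold time    */
		unsigned long avg_read_hold_ns;   /* recent avg read hold time     */
		unsigned long ctx_switch_ns;      /* estimated context switch cost */
		unsigned int  waiting_readers;
		unsigned int  waiting_writers;
	};

	/*
	 * Decide whether waking all queued readers is expected to be
	 * cheaper than waking the next writer, using the two latency
	 * estimates above.
	 */
	static bool rwsem_prefer_readers(const struct rwsem_wake_stats *s)
	{
		unsigned long wake_writer_cost, wake_readers_cost;

		if (!s->waiting_writers)
			return true;
		if (!s->waiting_readers)
			return false;

		wake_writer_cost = (s->waiting_readers / s->waiting_writers) *
				   (s->avg_write_hold_ns + s->ctx_switch_ns);
		wake_readers_cost = s->avg_read_hold_ns + s->ctx_switch_ns;

		return wake_readers_cost < wake_writer_cost;
	}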

Best regards,
--Edwin