Re: [RFC 2/5] i3c: Add core I3C infrastructure

From: Greg Kroah-Hartman
Date: Wed Dec 13 2017 - 11:52:03 EST


On Wed, Dec 13, 2017 at 05:20:43PM +0100, Boris Brezillon wrote:
> Hi Greg,
>
> On Tue, 1 Aug 2017 19:13:27 -0700
> Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx> wrote:
>
> > > > > Unless you see a good reason to not use a R/W lock, I'd like to keep it
> > > > > this way because master IPs are likely to implement advanced queuing
> > > > > mechanisms (allowing one to queue new transfers even when the master is
> > > > > already busy processing other requests), and serializing things at the
> > > > > framework level would just prevent us from using this kind of
> > > > > optimization.
> > > >
> > > > Unless you can prove otherwise, using a rw lock is almost always worse
> > > > than just a mutex.
> > >
> > > Is it still true when it's taken in non-exclusive mode most of the
> > > time, and the time you spend in the critical section is non-negligible?
> > >
> > > I won't pretend I know better than you do what is preferable; it's just
> > > that the RW lock seemed appropriate to me for the situation I tried to
> > > describe here.
> >
> > Again, measure it. If you can't measure it, then don't use it. Use a
> > simple lock instead. Seriously, don't make it more complex until you
> > really have to. It sounds like you didn't measure it at all, which
> > isn't good; please do so.
> >
>
> I'm resurrecting this thread because I finally had the time to implement
> message queuing in the Cadence I3C master driver. So I ran a test with 2
> I3C devices on the bus, each driver sending as many SDR messages as it
> can in 10 seconds. Here are the results:
>
>         | mutex | rwsem |
> -------------------------
> dev1    | 19087 | 29532 |
> dev2    | 19341 | 29118 |
> =========================
> total   | 38428 | 58650 |
> msg/sec | ~3843 | ~5865 |
>
>
> The results I'm obtaining here are not so surprising: all normal
> transfers take the lock in read mode, so there's no contention.
> I didn't measure the performance impact when a maintenance operation
> takes the lock in write mode while several normal transfers wait for
> it, but maintenance operations are infrequent anyway, and that's not
> where performance matters in our use case.
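>
> To make the scheme concrete, here is a minimal sketch of the locking
> (the struct and function names are illustrative, not necessarily the
> exact ones from the patches):
>
>	#include <linux/rwsem.h>
>
>	struct i3c_bus {
>		struct rw_semaphore lock;	/* xfers vs. maintenance */
>		/* ... */
>	};
>
>	/* Normal SDR transfers take the lock in read mode, so several
>	 * transfers can be queued on the master concurrently. */
>	static int i3c_do_xfer(struct i3c_bus *bus)
>	{
>		int ret;
>
>		down_read(&bus->lock);
>		ret = 0;	/* queue the transfer on the master here */
>		up_read(&bus->lock);
>
>		return ret;
>	}
>
>	/* Maintenance operations (DAA, address changes, ...) take it in
>	 * write mode and thus wait for all in-flight transfers. */
>	static int i3c_do_maintenance(struct i3c_bus *bus)
>	{
>		int ret;
>
>		down_write(&bus->lock);
>		ret = 0;	/* reconfigure the bus here */
>		up_write(&bus->lock);
>
>		return ret;
>	}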
>
> I also ran the same test with only one device doing transfers on the
> bus, and this time the mutex wins, but the difference is not huge.
>
>         | mutex | rwsem |
> -------------------------
> total   | 67116 | 66561 |
> msg/sec | ~6712 | ~6656 |
>
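> In case it helps, the per-device benchmark loop looks roughly like
> this (the scaffold around the transfer call is illustrative;
> i3c_device_do_priv_xfers() is the SDR transfer helper exposed by the
> framework):
>
>	#include <linux/jiffies.h>
>
>	static unsigned long bench_loop(struct i3c_device *dev)
>	{
>		unsigned long timeout = jiffies + 10 * HZ;
>		unsigned long count = 0;
>		struct i3c_priv_xfer xfer = {
>			/* a small SDR write, filled in per device */
>		};
>
>		while (time_before(jiffies, timeout)) {
>			if (i3c_device_do_priv_xfers(dev, &xfer, 1))
>				break;
>			count++;
>		}
>
>		return count;	/* messages completed in ~10s */
>	}
>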
> Let me know if you want more information on the test procedure.

Nice, thanks for testing, so it is a real win here, good!

greg k-h