Re: [RFC][PATCH v1 5/9] ima: allocating iint improvements

From: Kasatkin, Dmitry
Date: Thu Feb 09 2012 - 04:40:23 EST


On Wed, Feb 1, 2012 at 8:46 PM, Kasatkin, Dmitry
<dmitry.kasatkin@xxxxxxxxx> wrote:
> On Wed, Feb 1, 2012 at 6:58 PM, Eric Paris <eparis@xxxxxxxxxxxxxx> wrote:
>> On Mon, Jan 30, 2012 at 5:14 PM, Mimi Zohar <zohar@xxxxxxxxxxxxxxxxxx> wrote:
>>> From: Dmitry Kasatkin <dmitry.kasatkin@xxxxxxxxx>
>>>
>>
>>>  static struct rb_root integrity_iint_tree = RB_ROOT;
>>> -static DEFINE_SPINLOCK(integrity_iint_lock);
>>> +static DEFINE_RWLOCK(integrity_iint_lock);
>>>  static struct kmem_cache *iint_cache __read_mostly;
>>
>> Has any profiling been done here?  rwlocks have been shown to
>> actually be slower on multiprocessor systems in a number of cases due
>> to the cache line bouncing required.  I believe the current kernel
>> logic is that if you have a short critical section and you can't show
>> profile data that rwlocks are better, just stick with a spinlock.
>
> No, I have not done any profiling.
> My assumption was that rwlocks are better when there are many readers.
> If what you say is true, then rwlocks are useless...
> For long critical sections it is necessary to use rw semaphores.
>

Hello,

Mimi and I measured the performance of the rwlock and spinlock
implementations. We used a kernel compilation with multiple jobs as the
test workload, because it reads and creates lots of files.

In all cases the rwlock implementation performed better than the
spinlock one, though only marginally: with a total compilation time of
around 6 minutes, the rwlock runs were consistently 1-3 seconds
shorter.

So my conclusion is that the use of rwlocks is justified here.

Thanks for bringing this up...

> - Dmitry