Re: lockdep recursive locking detected (rcu_kthread / __cache_free)

From: Christoph Lameter
Date: Tue Oct 04 2011 - 10:29:03 EST


On Mon, 3 Oct 2011, Paul E. McKenney wrote:

> On Mon, Oct 03, 2011 at 03:46:11PM -0500, Christoph Lameter wrote:
> > On Mon, 3 Oct 2011, Paul E. McKenney wrote:
> >
> > > The first lock was acquired here in an RCU callback. The later lock that
> > > lockdep complained about appears to have been acquired from a recursive
> > > call to __cache_free(), with no help from RCU. This looks to me like
> > > one of the issues that arise from the slab allocator using itself to
> > > allocate slab metadata.
> >
> > Right. However, this is a false positive, since the slab cache holding
> > the metadata is different from the slab caches holding the data. The
> > metadata slab cache does not itself use any metadata slab caches.
>
> Wouldn't it be possible to pass a new flag to the metadata slab caches
> upon creation so that their locks could be placed in a separate lock
> class? Just allocate a separate lock_class_key structure for each such
> lock in that case, and then use lockdep_set_class_and_name to associate
> that structure with the corresponding lock. I do this in kernel/rcutree.c
> in order to allow the rcu_node tree's locks to nest properly.

We could give the kmalloc array a different lock class from the created
slab caches, along the lines sketched below. That should have the
desired effect.
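
For illustration, a minimal sketch of the approach; the key and function
names here are made up for the example, and only
lockdep_set_class_and_name() itself is the real lockdep interface:

/* Keys must have static storage so lockdep can reference them for the
 * lifetime of the locks they classify. */
static struct lock_class_key meta_l3_key;

static void set_meta_cache_lock_class(spinlock_t *list_lock)
{
	/*
	 * All locks classified with the same key share one lockdep
	 * class, so a key private to the metadata caches keeps lockdep
	 * from conflating their list_locks with those of the ordinary
	 * slab caches that recurse into them.
	 */
	lockdep_set_class_and_name(list_lock, &meta_l3_key, "meta-l3");
}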

But that already seems to be the case (looking at init_node_lock_keys
below): non-OFF_SLAB caches appear to get a separate lock class. Why is
this not working?

static void init_node_lock_keys(int q)
{
	struct cache_sizes *s = malloc_sizes;

	if (g_cpucache_up != FULL)
		return;

	for (s = malloc_sizes; s->cs_size != ULONG_MAX; s++) {
		struct kmem_list3 *l3;

		l3 = s->cs_cachep->nodelists[q];
		/* Caches with off-slab management are skipped, so only
		 * on-slab kmalloc caches get the separate class. */
		if (!l3 || OFF_SLAB(s->cs_cachep))
			continue;

		slab_set_lock_classes(s->cs_cachep, &on_slab_l3_key,
				&on_slab_alc_key, q);
	}
}
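
(For reference, slab_set_lock_classes() in this version of mm/slab.c
boils down to roughly the following; a from-memory sketch, not a
verbatim quote:)

static void slab_set_lock_classes_sketch(struct kmem_cache *cachep,
	struct lock_class_key *l3_key, struct lock_class_key *alc_key,
	int q)
{
	struct kmem_list3 *l3 = cachep->nodelists[q];
	struct array_cache **alc;
	int r;

	if (!l3)
		return;
	/* Reclassify the per-node list_lock ... */
	lockdep_set_class(&l3->list_lock, l3_key);
	/* ... and the alien array_cache locks, if present. */
	alc = l3->alien;
	if (!alc || (unsigned long)alc == BAD_ALIEN_MAGIC)
		return;
	for_each_node(r) {
		if (alc[r])
			lockdep_set_class(&alc[r]->lock, alc_key);
	}
}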



