Re: [PATCH] slub: fix slab_pad_check()

From: Eric Dumazet
Date: Thu Sep 03 2009 - 18:21:13 EST


Eric Dumazet wrote:
>
>
>
> The problem is not _objects_, Christoph, but _slabs_, and your patch does not work.
>
> It's true that when the user calls kmem_cache_destroy(), all _objects_ have already been freed.
> This is mandatory, with or without SLAB_DESTROY_BY_RCU.
>
> The problem is that slub has some internal state, including some to-be-freed _slabs_,
> over which the user has no control at all.
>
> The user cannot "know" whether slabs are freed, in use, or in whatever state in caches or call_rcu queues.
>
> Face it, SLAB_DESTROY_BY_RCU is an internal affair of the allocators (slub/slab/...).
>
> We absolutely need an rcu_barrier() somewhere, believe it or not. You can argue that it should
> be done *before*, but that gives no speedup, only potential bugs.
>
> The only case where the user should call rcu_barrier() itself is when it knows some call_rcu() callbacks
> are pending and delaying the freeing of _objects_ (typical !SLAB_DESTROY_BY_RCU usage in RCU algorithms;
> see the sketch below this quote).
>
> I don't even understand why you care so much about kmem_cache_destroy() with SLAB_DESTROY_BY_RCU,
> given that almost nobody uses it. It took us almost a month to find out what the bug was in the first
> place...
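
For reference, here is a minimal sketch of that pattern (all names here, struct foo, foo_cache,
foo_free_rcu() and so on, are invented for illustration, and the usual slab/RCU headers are assumed):
the user frees objects through its own call_rcu() callback, so the user, not the allocator, is
responsible for the rcu_barrier() before kmem_cache_destroy():

struct foo {
	struct rcu_head rcu;
	/* ... payload ... */
};

static struct kmem_cache *foo_cache;

static void foo_free_rcu(struct rcu_head *head)
{
	struct foo *f = container_of(head, struct foo, rcu);

	kmem_cache_free(foo_cache, f);
}

static void foo_release(struct foo *f)
{
	/* Object is unlinked by now; actually freed after a grace period */
	call_rcu(&f->rcu, foo_free_rcu);
}

static void foo_exit(void)
{
	/*
	 * Wait for every pending foo_free_rcu() callback; without this,
	 * kmem_cache_destroy() could run while objects are still queued.
	 */
	rcu_barrier();
	kmem_cache_destroy(foo_cache);
}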


So maybe the safest thing would be to include one rcu_barrier() to ensure all objects were freed,

and another one, for SLAB_DESTROY_BY_RCU caches, to make sure all slabs were freed:

void kmem_cache_destroy(struct kmem_cache *s)
{
	/*
	 * Make sure no objects are waiting in call_rcu queues to be freed
	 */
	rcu_barrier();

	down_write(&slub_lock);
	s->refcount--;
	if (!s->refcount) {
		list_del(&s->list);
		up_write(&slub_lock);
		if (kmem_cache_close(s)) {
			printk(KERN_ERR "SLUB %s: %s called for cache that "
				"still has objects.\n", s->name, __func__);
			dump_stack();
		}
		/*
		 * Make sure no slabs are waiting in call_rcu queues to be freed
		 */
		if (s->flags & SLAB_DESTROY_BY_RCU)
			rcu_barrier();
		sysfs_slab_remove(s);
	} else
		up_write(&slub_lock);
}
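
As a usage note, this is roughly the lookup pattern SLAB_DESTROY_BY_RCU exists for (again with
invented names: foo_find(), foo_put(), f->refcnt, f->key); it shows why the slab pages themselves
must survive a grace period, which is exactly what the second rcu_barrier() above waits for:

static struct foo *foo_lookup(unsigned long key)
{
	struct foo *f;

	rcu_read_lock();
	f = foo_find(key);	/* lockless lookup, may race with a free */
	if (f) {
		if (!atomic_inc_not_zero(&f->refcnt)) {
			/* Object is being freed; memory still readable under RCU */
			f = NULL;
		} else if (f->key != key) {
			/* Object was freed and reused for another key */
			foo_put(f);
			f = NULL;
		}
	}
	rcu_read_unlock();
	return f;
}

The dereferences of f->refcnt and f->key are only safe because the slab page backing f cannot be
handed back to the page allocator while a reader is inside the RCU read-side section; that is the
guarantee the call_rcu()-based slab freeing, and therefore the second rcu_barrier(), provides.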