Re: WARNING: at mm/slub.c:3357, kernel BUG at mm/slub.c:3413

From: Christoph Lameter
Date: Tue Nov 22 2011 - 11:36:49 EST


On Tue, 22 Nov 2011, Eric Dumazet wrote:

> > On Tuesday 22 November 2011 at 10:20 -0600, Christoph Lameter wrote:
> > Argh. The redzoning (and the general object pad initialization) is now
> > outside of the slab_lock, so I get false positives on those. That is
> > already in 3.1 as far as I know. To solve it we would have to cover a
> > much wider area in the alloc and free paths with the slab lock.
> >
> > But I do not get the count mismatches that you saw. Maybe related to
> > preemption. Will try that next.
>
> Also I note that the checks (redzoning and all the other features) that
> should be done in kfree() are only done on the slow path?

Yes, debugging forces use of the slow paths.

> I am considering adding a "quarantine" capability: each CPU would
> maintain, in its struct kmem_cache_cpu, a FIFO list of
> s->quarantine_max freed objects.
>
> That should make it easier to track use-after-free bugs, by setting
> quarantine_max to a large value.

It may be easier to simply disable interrupts early in __slab_free
when debugging is on. It doesn't look nice right now. Draft patch (not
tested yet):

---
mm/slub.c | 15 ++++++++++++---
1 file changed, 12 insertions(+), 3 deletions(-)

Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2011-11-22 09:04:47.000000000 -0600
+++ linux-2.6/mm/slub.c 2011-11-22 10:33:12.000000000 -0600
@@ -2391,8 +2391,13 @@ static void __slab_free(struct kmem_cach

stat(s, FREE_SLOWPATH);

- if (kmem_cache_debug(s) && !free_debug_processing(s, page, x, addr))
- return;
+ if (kmem_cache_debug(s)) {
+ local_irq_save(flags);
+ if (!free_debug_processing(s, page, x, addr)) {
+ local_irq_restore(flags);
+ return;
+ }
+ }

do {
prior = page->freelist;
@@ -2422,8 +2427,10 @@ static void __slab_free(struct kmem_cach
* Otherwise the list_lock will synchronize with
* other processors updating the list of slabs.
*/
- spin_lock_irqsave(&n->list_lock, flags);
+ if (!kmem_cache_debug(s))
+ local_irq_save(flags);

+ spin_lock(&n->list_lock);
}
}
inuse = new.inuse;
@@ -2448,6 +2455,8 @@ static void __slab_free(struct kmem_cach
*/
if (was_frozen)
stat(s, FREE_FROZEN);
+ if (kmem_cache_debug(s))
+ local_irq_restore(flags);
return;
}