Slab Allocator Bug!

From: Mark Hemment (Mark_Hemment@eur.3com.com)
Date: Mon Apr 10 2000 - 07:09:33 EST


Linus,

  I've found a bug in the slab allocator which can cause memory corruption.
  This occurs when a cache is shrunk (kmem_cache_shrink()) while it has
slabs which are fully free but none which are partially free; c_freep is
left pointing at the first fully free slab, which has already been
released back to the free page pool.
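  To make the failure easier to see, here is a stripped-down sketch of the
shrink loop (type and helper names as in the 2.3.99pre3 slab.c; the locking
and the real loop condition are omitted):

    for (;;) {
            kmem_slab_t *slabp = cachep->c_lastp;

            /* Stop at the first in-use slab, or when the chain is empty. */
            if (slabp->s_inuse || slabp == kmem_slab_end(cachep))
                    break;

            /*
             * BUG: if c_freep points at this slab and there is no
             * partially free slab, c_freep is left referring to memory
             * that kmem_slab_destroy() hands back to the free page
             * pool; the next allocation from this cache follows the
             * stale pointer and corrupts memory.
             */
            kmem_slab_unlink(slabp);
            kmem_slab_destroy(cachep, slabp);
    }
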
  Having no partially free slabs is rare for heavily used caches (the ones
kmem_cache_shrink() is called for), so this bug is rarely tickled.
  The fix is to update c_freep in __kmem_cache_shrink().
  There is a patch against 2.3.99pre3 at the end of this e-mail.

Alan,
  The same bug is in the 2.2.x series, but as there are no callers of
kmem_cache_shrink() (at least last time I looked) it isn't triggered.

Mark

--- slab.c.orig Mon Apr 10 10:56:08 2000
+++ slab.c Mon Apr 10 11:10:31 2000
@@ -1019,6 +1019,17 @@
          slabp = cachep->c_lastp;
          if (slabp->s_inuse || slabp == kmem_slab_end(cachep))
               break;
+          if (cachep->c_freep == slabp) {
+               /*
+                * This slab is the first slab with free objects
+                * (c_freep), and as we are walking the slab chain
+                * backwards, it is also the last slab with free
+                * objects. After unlinking it, there will be no
+                * slabs with free objects, so point c_freep into the
+                * cache structure.
+                */
+               cachep->c_freep = kmem_slab_end(cachep);
+          }
          kmem_slab_unlink(slabp);
          spin_unlock_irq(&cachep->c_spinlock);
          kmem_slab_destroy(cachep, slabp);
