Re: Memory allocation of 132 KB

Michael L. Galbraith (mikeg@weiden.de)
Fri, 2 Oct 1998 05:33:18 +0200 (CEST)


On Fri, 2 Oct 1998, Jeanette Pauline Middelink wrote:

> On Tue, Sep 29, 1998 at 12:32:15PM +0200, Andrea Arcangeli wrote:
> > On Tue, 29 Sep 1998 sroy@wipinfo.soft.net wrote:
> >
> > >Hi,
> > >
> > >In a network driver, I tried to allocate memory for DMA buffer of length
> > >132 KB and the call (kmalloc()) failed. Hacking through kmalloc.c, I found
> > >that it does not allow allocating memory more than (128K - 24) bytes, if the
> > >page size is 4KB.
> > >
> > >My questions are -
> > >
> > >1. Why is there that limitation?
> > >2. Is there any way to allocate 132 KB of memory (physically contiguous)?
> >
> > Use __get_free_pages() by hand, specifying the order of memory you need:
> > order 0 -> 1 page, order 1 -> 2 pages, order 2 -> 4 pages, etc.
>
> One of the reasons Linux still does not support big kmalloc's is
> memory fragmentation. This still happens, and because of it the
> chances of getting a large block of kernel memory get lower and
> lower the longer the kernel runs...

Hi,

Um, I don't think this is nearly the problem it used to be. I'm 10 hours
into a stress test and still have 128k chunks available. I do wonder
why Bill's defrag patch didn't go in, though; with that, I always had
TONS of high-order chunks at hand.

AFAIK, the reason large kmalloc's don't work is that the largest chunk
the buddy system (the allocator underlying the slab) will hand out is 128k.
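A minimal sketch of the __get_free_pages() approach Andrea suggests might
look like the following (assuming the 2.2-style two-argument
__get_free_pages(gfp_mask, order), 4 KB pages, and made-up driver-side
names). Note that 132 KB rounds up to an order-6 (256 KB) request, which
can fail if the buddy allocator's free lists don't go that high or memory
is already fragmented:

    /*
     * Sketch only: allocate a physically contiguous buffer of at least
     * 132 KB with __get_free_pages() instead of kmalloc().
     * 132 KB = 33 pages (4 KB pages), so round up to the next power of
     * two: order 6 = 64 pages = 256 KB.
     */
    #include <linux/mm.h>      /* __get_free_pages(), free_pages(), GFP_* */
    #include <linux/errno.h>   /* ENOMEM */
    #include <asm/page.h>      /* PAGE_SIZE, PAGE_SHIFT */

    #define DMA_BUF_SIZE (132 * 1024)

    static unsigned long dma_buf;
    static int dma_buf_order;

    static int alloc_dma_buf(void)
    {
            unsigned long pages = (DMA_BUF_SIZE + PAGE_SIZE - 1) >> PAGE_SHIFT;
            int order = 0;

            /* Round the page count up to a power of two for the buddy system. */
            while ((1UL << order) < pages)
                    order++;

            dma_buf_order = order;
            /* Or in GFP_DMA only if the hardware needs ISA-style low memory. */
            dma_buf = __get_free_pages(GFP_KERNEL, order);
            if (!dma_buf)
                    return -ENOMEM;   /* likely once memory is fragmented */
            return 0;
    }

    static void free_dma_buf(void)
    {
            if (dma_buf)
                    free_pages(dma_buf, dma_buf_order);
    }

Allocating this once at driver init, before memory gets fragmented, makes
the order-6 request much more likely to succeed.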

-Mike
