Re: vfork: out of memory, when there's plenty of swap free

Gerard Roudier (groudier@club-internet.fr)
Sun, 14 Mar 1999 19:53:35 +0100 (MET)


On Sun, 14 Mar 1999, Andrea Arcangeli wrote:

> On Sun, 14 Mar 1999, Gerard Roudier wrote:
>
> >> It's not the lack of defragmentation but bad allocation of memory.
> >> Just to give an example, look at the inode cache. It uses a _bad_ way to
> >> alloc memory. That's the _best_ way to generate _persistent_ fragmentation
> >> all over the place.
> >
> >Nothing is bad there, except your diagnosis, in my opinion.
>
> In my opinion you haven't understood the icache allocation code.

I understand that parts allocate memory and that, once they have it,
they can do what they want with it, even pass references to other
parts such as devices or whatever they want. If they are kind enough
to free some when they are asked to, it is still better. :-)
A part that only requires physical memory chunks smaller than 1 PAGE
in order to succeed is a part that makes good use of allocations.

> >Btw, I do not want to rely on 2-PAGE allocations having to succeed most
> >of the time.
>
> It's trivial not to rely on the 8k of physically consecutive memory.
> Simply vmalloc if __get_free_pages fails. To know whether you have to
> free_pages or vfree, simply check whether the address is above
> VMALLOC_START.
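
(A minimal sketch of the fallback described above, assuming 2.2-era
interfaces; the buf_alloc/buf_free names are made up for illustration:)

#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <asm/pgtable.h>

/* Try 2 physically contiguous pages first, fall back to vmalloc. */
static void *buf_alloc(void)
{
        void *p = (void *) __get_free_pages(GFP_KERNEL, 1); /* order 1 = 8k */

        if (!p)
                p = vmalloc(2 * PAGE_SIZE);
        return p;
}

/* vmalloc addresses live above VMALLOC_START, so the address itself
 * tells which free routine applies. */
static void buf_free(void *p)
{
        if ((unsigned long) p >= VMALLOC_START)
                vfree(p);
        else
                free_pages((unsigned long) p, 1);
}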

I may want to do that if I am able to easily scatter the thing into
physical chunks on PAGE boundaries; otherwise I don't want to. I also
want to get contiguous chunks when I ask for less than 1 PAGE.
And I do not want the part to have to guess how memory is handled by
the kernel.
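
(A hypothetical sketch of that scattering, where only single-page
allocations ever have to succeed; buf_alloc_scattered and chunk[] are
made-up names:)

#include <linux/mm.h>
#include <linux/errno.h>

static unsigned long chunk[2];  /* two single pages instead of one 8k area */

static int buf_alloc_scattered(void)
{
        int i;

        for (i = 0; i < 2; i++) {
                chunk[i] = __get_free_page(GFP_KERNEL);
                if (!chunk[i]) {
                        while (--i >= 0)        /* undo what succeeded */
                                free_page(chunk[i]);
                        return -ENOMEM;
                }
        }
        return 0;
}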

> Here I never had fragmentation problems but I have to say that I usually
> have plenty of memory free...

I use 2.0 most of the time, do not ever want to use NFS, and the SCSI
driver I use never allocates more than 1 PAGE. Before using anything
that allocates more than 1 PAGE at a time, I will fix it, or
modestly :) try to fix it.

> >> And btw I think that for vmalloced areas it would be possible to defrag
> >> them at runtime, but for normal __get_free_pages memory it's not
> >> possible; according to me, it's the allocation in the first place that
> >> has to be smart to avoid fragmentation.
> >
> >If you lose one arm and you replace it with one of your legs, you probably
> >will not be quite happy with the result. :)
> >Better to do nothing than to do things that make things worse, IMO.
>
> ??? I think you have not understood the issue. Even if a page is 8k you
> may still want to defrag vmalloced memory at runtime. The implementation could

I may want to defrag so that the system performs better, but I do not
want to be required to defrag just for the system to work.

> be difficult and the advantage minimal, but it's something that can help
> with every kind of page size. I am not telling you that it has to be done
> and that it will be great; it's just something that could be done (I
> think, maybe I am wrong) and could be useful with any kind of page size.

The problem with defragmenters is that they may make you believe that
the system is working fine when it is in fact close to dying at any
time. So, I prefer a system that does not require a defragmenter in
order to just work.

> Andrea Arcangeli

Regards,
Gérard.
