Re: [PATCH] mm: fix negative commitlimit when gigantic hugepages are allocated

From: Andrew Morton
Date: Thu Jun 09 2011 - 19:44:20 EST


On Thu, 2 Jun 2011 23:55:57 -0300
Rafael Aquini <aquini@xxxxxxxxx> wrote:

> When 1GB hugepages are allocated on a system, free(1) reports
> less available memory than what really is installed in the box.
> Also, if the total size of hugepages allocated on a system is
> over half of the total memory size, CommitLimit becomes
> a negative number.
>
> The problem is that gigantic hugepages (order > MAX_ORDER)
> can only be allocated at boot with bootmem, so their frames
> are not accounted to 'totalram_pages'. However, they are
> accounted to hugetlb_total_pages().
>
> What happens to turn CommitLimit into a negative number
> is this calculation, in fs/proc/meminfo.c:
>
> allowed = ((totalram_pages - hugetlb_total_pages())
> * sysctl_overcommit_ratio / 100) + total_swap_pages;
>
> A similar calculation occurs in __vm_enough_memory() in mm/mmap.c.
>
> Also, every vm statistic that depends on 'totalram_pages' will render
> confusing values, as if the system were 'missing' some part of its memory.

Is this bug serious enough to justify backporting the fix into -stable
kernels?

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/