Re: [patch v2] mm, hugetlb_cgroup: round limit_in_bytes down to hugepage size

From: Michal Hocko
Date: Thu Apr 07 2016 - 08:51:52 EST


On Wed 06-04-16 15:10:23, David Rientjes wrote:
[...]
> +static void hugetlb_cgroup_init(struct hugetlb_cgroup *h_cgroup,
> +				 struct hugetlb_cgroup *parent_h_cgroup)
> +{
> +	int idx;
> +
> +	for (idx = 0; idx < HUGE_MAX_HSTATE; idx++) {
> +		struct page_counter *counter = &h_cgroup->hugepage[idx];
> +		struct page_counter *parent = NULL;
> +		unsigned long limit;
> +		int ret;
> +
> +		if (parent_h_cgroup)
> +			parent = &parent_h_cgroup->hugepage[idx];
> +		page_counter_init(counter, parent);
> +
> +		limit = round_down(PAGE_COUNTER_MAX,
> +				   1 << huge_page_order(&hstates[idx]));
> +		ret = page_counter_limit(counter, limit);
> +		VM_BUG_ON(ret);
> +	}
> +}

I fail to see the point of this. Why would we want to round down
PAGE_COUNTER_MAX? It will never make a real difference. Or am I missing
something?
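
Just to illustrate what I mean (a quick userspace sketch, not part of the
patch, assuming PAGE_COUNTER_MAX is LONG_MAX / PAGE_SIZE with 4K base
pages and a 2MB hugepage, i.e. order 9): the round_down drops at most 511
pages out of roughly 2^51, which is lost in the noise:

	#include <limits.h>
	#include <stdio.h>

	int main(void)
	{
		/* assumed: PAGE_COUNTER_MAX == LONG_MAX / PAGE_SIZE, 4K pages */
		const long max = LONG_MAX / 4096;
		/* assumed: 2MB hugepage -> huge_page_order() == 9 -> 512 base pages */
		const long align = 1L << 9;
		/* open-coded round_down(max, align) */
		const long rounded = max - (max % align);

		printf("PAGE_COUNTER_MAX: %ld pages\n", max);
		printf("rounded down:     %ld pages\n", rounded);
		printf("difference:       %ld pages\n", max - rounded);
		return 0;
	}

which prints a difference of 511 pages (~2MB) against a limit in the
exabyte range.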
--
Michal Hocko
SUSE Labs