Re: [patch v2] mm, hugetlb_cgroup: round limit_in_bytes down to hugepage size

From: Michal Hocko
Date: Fri Apr 15 2016 - 09:24:59 EST


On Thu 14-04-16 13:22:30, David Rientjes wrote:
> On Thu, 7 Apr 2016, Michal Hocko wrote:
>
> > > +static void hugetlb_cgroup_init(struct hugetlb_cgroup *h_cgroup,
> > > + struct hugetlb_cgroup *parent_h_cgroup)
> > > +{
> > > + int idx;
> > > +
> > > + for (idx = 0; idx < HUGE_MAX_HSTATE; idx++) {
> > > + struct page_counter *counter = &h_cgroup->hugepage[idx];
> > > + struct page_counter *parent = NULL;
> > > + unsigned long limit;
> > > + int ret;
> > > +
> > > + if (parent_h_cgroup)
> > > + parent = &parent_h_cgroup->hugepage[idx];
> > > + page_counter_init(counter, parent);
> > > +
> > > + limit = round_down(PAGE_COUNTER_MAX,
> > > + 1 << huge_page_order(&hstates[idx]));
> > > + ret = page_counter_limit(counter, limit);
> > > + VM_BUG_ON(ret);
> > > + }
> > > +}
> >
> > I fail to see the point of this. Why would we want to round down
> > PAGE_COUNTER_MAX? It will never make a real difference. Or am I missing
> > something?
>
> Did you try the patch?
>
> If we're rounding down the user value, it makes sense to be consistent
> with the upper bound default to specify intent.

The point I've tried to raise is: why do we care and add code if we can
never reach that value? Does anybody actually check for the alignment?
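
For reference, the arithmetic boils down to the following; a minimal
userspace sketch (not kernel code), assuming x86-64 with 4KB base pages
and 2MB hugepages (huge_page_order == 9), and using the kernel's
definition of PAGE_COUNTER_MAX as LONG_MAX / PAGE_SIZE:

#include <limits.h>
#include <stdio.h>

#define PAGE_SIZE		4096UL
#define PAGE_COUNTER_MAX	(LONG_MAX / PAGE_SIZE)
/* simplified round_down for a power-of-2 alignment */
#define round_down(x, y)	((x) & ~((unsigned long)(y) - 1))

int main(void)
{
	unsigned long pages_per_huge = 1UL << 9;	/* 2MB / 4KB */
	unsigned long limit = round_down(PAGE_COUNTER_MAX, pages_per_huge);

	/* Both values are about 2^51 pages, i.e. ~8 EiB of memory. */
	printf("default: %lu\nrounded: %lu\n", PAGE_COUNTER_MAX, limit);
	return 0;
}

Both the default and the rounded limit correspond to roughly 8 EiB, so
no counter can ever accumulate enough pages to hit either of them.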
--
Michal Hocko
SUSE Labs